\section{Introduction} Simple and productive methods to investigate dynamical features of solids and molecules are offered by Time-dependent Density-functional Theory (TDDFT) \cite{GM12}. TDDFT embodies many concepts and formal exact results, but its core is the 1-1 correspondence \cite{RG84} between time-dependent (TD) external potentials and TD electronic densities, provided the initial state of the system is given. Every observable of the system is expressed as a TD density-functional. The calculation of the density is carried out by solving the TD Kohn-Sham (KS) equations \cite{P78}, which are single-particle Schr\"odinger equations that require an approximation to the exchange-correlation (XC) potential, a density-functional. The adiabatic local density approximation (ALDA) \cite{ZS80} to the TD XC potential is the simplest useful approximation for studying the dynamics of atoms and solids. However, when applied to molecules, ALDA often yields unphysical results: for example, atoms with fractional charges at dissociation, underestimated charge-transfer excitation energies, and missing double excitations, among others. Two decades of research have shown that it is challenging to enhance the performance of ALDA while preserving computational simplicity and elegance. The TD KS equations describe all of the electrons as part of a single entity, imposing a limit on the number of atoms that can be simulated in a reasonable amount of time. This limit can be increased by dividing a molecule into fragments and performing inexpensive calculations on each individual fragment. Several approximate methods to investigate the electron dynamics of molecules as composed of fragments are available \cite{CW04,CFK07,N10,P13}. These methods typically assign a set of TD single-particle Schr\"odinger equations (not necessarily TD KS equations) to every fragment in the molecule. 
The fragment electrons are subject to the usual interactions as if they were in the presence of the fragment nuclei only; an extra potential, usually fragment-dependent, accounting for the interaction between the fragments, is added. Successful applications to the calculation of solvatochromic shifts \cite{N07,PJ12} and excitation energies of monomers \cite{CFK07} have been reported. A rigorous extension of TDDFT for molecules made of chemical fragments was presented in Ref. \cite{MJW13}. In this extension, a molecule is divided into fragments, each a set of atoms. Every fragment is assigned an initial state, and a Hamiltonian including a global, auxiliary potential, termed the partition potential, which enforces that the sum of fragment densities equals the true TD electronic density of the molecule. We proved that the partition potential is uniquely determined by the TD electronic density of the system, and that it can thus be expressed (and approximated) as a density-functional. The linear response and an extension to electromagnetic fields are presented in Ref. \cite{MW14}. The Hamiltonians used in Refs. \cite{MJW13} and \cite{MW14}, and the aforementioned approximate methods, are particle-conserving, i.e., the average number of electrons in a fragment is time-independent; this restriction is eliminated in this work. In this paper we first introduce a density-inversion method that improves on that of Ref. \cite{MJW13} and allows us to estimate the partition potential. Using a model for the errors, we discuss, in close connection with standard TDDFT, the origin of uncertainties of the partition potential in regions far from the molecule. We extend our fragment-based TDDFT \cite{MJW13} to allow for variable numbers of electrons in each fragment, while preserving the uniqueness of observables as density-functionals. 
The formalism introduced in this paper serves as a theoretical foundation for the development of methods accounting for electronic excitations and electron-transfer processes, without sacrificing explicit use of the electronic density and computational efficiency. \section{Fragment-based TDDFT} \subsection{Formulation} An electron in a fragment, labeled $\alpha$, is subject to a 1-body external potential, denoted as $v_{\alpha}$. For example, $v_{\alpha}({\bf r})=\sum_{i \in I_{\alpha}} -Z_i/|{\bf r}-{\bf R}_i|$, where $I_{\alpha}$ is a set of labels for the nuclei in fragment $\alpha$. We assign to each fragment in the molecule a Hamiltonian that includes an auxiliary potential, the partition potential $v_{\mr{p}}$: \begin{equation} \hat{H}_{\alpha}[v_{\mr{p}}](t)=\hat{H}_{\alpha}^0+\int \mr{d}^3\mb{r}~ \hat{n}(\mb{r})v_{\mr{p}}(\mb{r}t)~, \end{equation} where $\hat{H}_{\alpha}^0=\hat{T}+\hat{W}+\int \mr{d}^3\mb{r}~\hat{n}(\mb{r})v_{\alpha}(\mb{r})$; $\hat{T}$ and $\hat{W}$ are the kinetic and Coulomb repulsion energy operators, respectively, and $\hat{n}({\bf r})$ is the density operator. This Hamiltonian does not include external driving forces other than those due to the nuclei of fragment $\alpha$. TD displacements of the positions of the nuclei can be described by introducing a time-dependent Hamiltonian in which $v_{\alpha}$ is replaced by the corresponding TD fragment potential, $\sum_{i\in I_{\alpha}} -Z_i/|{\bf r}-{\bf R}_i(t)|$. The state of a fragment is described by the evolution of the ket $|\psi_{\alpha}[v_{\mr{p}}](t)\rangle$ in Fock space, which satisfies the TD Schr\"odinger equation (atomic units are used throughout): \begin{equation}\label{firstevol} i\partial_t|\psi_{\alpha}[v_{\mr{p}}](t)\rangle=\hat{H}_{\alpha}[v_{\mr{p}}](t)|\psi_{\alpha}[v_{\mr{p}}](t)\rangle~, \end{equation} where \begin{equation} |\psi_{\alpha}(t)\rangle=\sum_M \nu_{\alpha,M}|\psi_{\alpha,M}(t)\rangle~. 
\end{equation} $\{\psi_{\alpha,M}\}$ are kets corresponding to states with integer numbers of particles and $\{\nu_{\alpha,M}\}$ are the weight amplitudes of those states. Kets with different numbers of electrons are orthogonal: $\langle \psi_{\alpha,M}|\psi_{\alpha,M'}\rangle=0$ for $M\neq M'$. The total density is given as \begin{equation} n(\mb{r}t)=\sum_{\alpha} n_{\alpha}(\mb{r}t)~, \label{density_constraint} \end{equation} which defines $v_{\mr{p}}$ when $ n_{\alpha}(\mb{r}t)= \langle\psi_{\alpha}(t)|\hat{n}(\mb{r})|\psi_{\alpha}(t)\rangle~, $ as proven in Ref. \cite{MJW13}: Given a set $\{\psi_{\alpha,0},\,v_{\alpha}\}$, two potentials $v_{\mr{p}}$ and $v_{\mr{p}}'$ that differ by more than a TD constant {\slshape cannot give rise to the same density}. A corollary of this theorem is that there is a TD density-functional that, when evaluated at a given TD electronic density, gives the corresponding TD partition potential \footnote{This theorem and corollary were recently used in Ref. \cite{HLPC14} to propose an inversion method in the context of embedding potential-functional theory.}. The partition potential represents the TD electronic density of the supermolecule, and is decomposed as follows \cite{MW14}: \begin{equation} v_{\mr{p}}({\bf r} t)=v_{\mr{G}}({\bf r} t)+v_{\mr{d}}({\bf r} t)~. \end{equation} $v_{\mr{G}}$ is a ``gluing potential'', accounting for the correlation between the fragments, and $v_{\mr{d}}$ is the driving potential the molecule is subject to (e.g., a laser field). The gluing potential provides the shape of the potential required to recover the TD electronic density. It satisfies \cite{MW14}: \begin{equation} \begin{split}\label{gmotion} \frac{1}{\mr{i}}\nabla\cdot n({\bf r} t)&\nabla v_{\mr{G}}({\bf r} t)=\langle\psi(t)|[\hat{H}^0,\nabla\cdot\hat{\mb{j}}({\bf r})]| \psi(t)\rangle\\ &-\sum_{\alpha}\langle\psi_{\alpha}(t)|[\hat{H}^0_{\alpha},\nabla\cdot\hat{\mb{j}}({\bf r})]|\psi_{\alpha}(t)\rangle~. 
\end{split} \end{equation} The terms on the right-hand side of Eq.(\ref{gmotion}) are TD density-functionals. Approximating these terms and solving the resulting differential equation yields an estimate of the gluing potential. Another route to approximating $v_{\mr{G}}$ is to assume that the system evolves adiabatically from its ground state, driven by a very slowly-varying field. In such a case, the potential $v_{\mr{G}}$ is obtained from the adiabatic approximation in ground-state Partition DFT \cite{EBCW10,MW12}: \begin{equation} v_{\mr{G}}^{\mr{Ad}}[n(t)]=v_{\mr{p}}^{\mr{Ad}}[n(t)]-v^{\mr{HK}}[n(t)]~, \end{equation} where $v^{\mr{HK}}[n(t)]$ is the external potential that the interacting electrons must be subject to in their ground state in order to yield the density $n(\mb{r}t)$ (the uniqueness of $v^{\mr{HK}}$ follows from the Hohenberg-Kohn theorem). The partition potential, $v_{\mr{p}}^{\mr{Ad}}[n(t)]$, is the Lagrange multiplier required to solve the minimization \begin{equation} \min_{\{\psi_{\alpha}\}\rightarrow n(t)} \sum_{\alpha}\lket{\psi_{\alpha}}\hat{H}_{\alpha}^0\rket{\psi_{\alpha}}~, \end{equation} under the constraint of Eq.(\ref{density_constraint}). The Lagrange multiplier for this problem is unique, up to an arbitrary constant \cite{CW06}. The TD partition KS equations are: \begin{equation} \begin{split} \mr{i}\partial_t &\phi_{i,\alpha}({\bf r},t)=\Big(-\frac{1}{2}\nabla^2+v_{\mr{Hxc}}[n_{\alpha}]({\bf r},t) \\ & +v_{\alpha}({\bf r})+v_{\mr{G}}[n]({\bf r},t)+ v_{\mr{d}}({\bf r},t)\Big)\phi_{i,\alpha}({\bf r},t)~. \end{split} \end{equation} The density can be obtained by means of $ n({\bf r},t)=\sum_{i\alpha} f_{i\alpha}|\phi_{i\alpha}({\bf r},t)|^2~, $ where $\{f_{i\alpha}\}$ are the (time-independent) occupation numbers, chosen from a proper ensemble \cite{MJW13}. \section{Classical Interpretation of the Partition Potential} Consider a system composed of a single massive particle and a bath made of particles much smaller than the massive one. 
All the particles in the particle+bath system interact via a potential, $U_{\mr{int}}$, of the form $\sum_{i>j} u_{ij}(\mb{r}_i-\mb{r}_j)$. The evolution of the subsystem particle, labeled $\mr{S}$, is dictated by Eq. (\ref{firstevol}). There is no partitioning of the external potential because the particle and the bath are subject, implicitly, to a macroscopic, external, confining potential. The Hamiltonian of the subsystem particle is thus $\hat{H}_{\mr{S}}=\hat{T}+\int \mr{d}^3\mb{r}~\hat{n}(\mb{r})v_{\mr{p}}(\mb{r}t)$, and the Hamiltonian of the bath is $\hat{H}_{\mr{B}}=\hat{T}+\hat{W}+\int \mr{d}^3\mb{r}~\hat{n}(\mb{r})v_{\mr{p}}(\mb{r}t)$. The average position of the particle is $\bar{{\bf r}}_{\mr{S}}(t)=\int \mr{d}^3{\bf r}~{\bf r}~ n_{\mr{S}}(\mb{r},t)$, where $n_{\mr{S}}(\mb{r},t)=|\psi_{\mr{S}}(\mb{r},t)|^2$. By the Ehrenfest theorem and the correspondence principle we have \begin{equation} m_{\mr{S}}\frac{\mr{d}^2 \bar{{\bf r}}_{\mr{S}}}{\mr{d} t^2}=\mb{F}_{\mr{p},\mr{S}}(t)~, \end{equation} where $\mb{F}_{\mr{p},\mr{S}}(t)=-\int \mr{d}^3\mb{r}~n_{\mr{S}}({\bf r},t)\nabla v_{\mr{p}}(\mb{r},t)$, and $m_{\mr{S}}$ is the mass of the particle. Comparison with the equation of motion of the real system indicates that, in the classical limit, $(\nabla v_{\mr{p}})(\bar{\mb{r}}_{\mr{S}}(t))=(\nabla_{\mb{r}_{\mr{S}}} U_{\mr{int}}) (\bar{\mb{r}}_{\mr{S}}(t),\bar{\mb{r}}_{\mr{B}}(t))$, where $\bar{\mb{r}}_{\mr{B}}$ is the average coordinate of the bath. The partition force at the position of the particle is simply the force exerted on the particle by the bath. As the mass of the subsystem particle is increased, the density tends to a Dirac delta distribution. It follows from the above result and Eq. (\ref{gmotion}) that the shape of the partition potential at any point other than the positions of the particles is undetermined. 
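The Ehrenfest relation invoked above can be checked numerically. The sketch below propagates a 1D wavepacket with Crank-Nicolson and compares the finite-difference second derivative of $\bar{x}(t)$ with the average force; the unit mass and the harmonic potential (standing in for the force exerted by the bath) are illustrative choices, not taken from the text.

```python
import numpy as np

# Numerical check of m d^2<x>/dt^2 = F at the average position,
# for unit mass in an illustrative harmonic potential V = x^2/2.
x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]
N = x.size
V = 0.5 * x ** 2

# Finite-difference Hamiltonian and a Crank-Nicolson step matrix.
K = (np.diag(np.full(N, 1.0 / dx ** 2))
     - np.diag(np.full(N - 1, 0.5 / dx ** 2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx ** 2), -1))
H = K + np.diag(V)
dt = 0.05
I = np.eye(N)
step = np.linalg.solve(I + 0.5j * dt * H, I - 0.5j * dt * H)

# Displaced Gaussian wavepacket (coherent state of the trap).
psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

mean_x = []
for _ in range(3):
    mean_x.append(np.sum(x * np.abs(psi) ** 2) * dx)
    psi = step @ psi

# Second finite difference of <x> vs. the force -<V'(x)> = -<x>
# evaluated at the middle time step.
accel = (mean_x[0] - 2 * mean_x[1] + mean_x[2]) / dt ** 2
force = -mean_x[1]
print(accel, force)  # both close to -1.0
```

For this coherent state $\bar{x}(t)=\cos t$, so both sides are close to $-1$; the small residual comes from the grid and time-step discretization.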
However, for given initial momenta and coordinates of the particles and bath (this condition replaces the requirement that the initial wavefunction be specified), the trajectory of the momenta of the total system is in a one-to-one correspondence with the trajectory of partition forces exerted on each particle. Furthermore, if the assumptions of Langevin dynamics are applicable, the partition force on the massive particle can be interpreted as $\mb{F}_{\mr{p},\mr{S}}(t)=-\gamma \mb{v}_{\mr{S}}(t)+\mb{F}_{\mr{ran}}(t)$, where $\mb{v}_{\mr{S}}(t)=\mr{d} \bar{{\bf r}}_{\mr{S}}/\mr{d} t$, $\gamma$ is the friction coefficient, and $\mb{F}_{\mr{ran}}$ is the random force. The gradient of $v_{\mr{p}}$ can {\slshape only} be known at the positions of the particles, nowhere else. In the original proof by Runge and Gross \cite{RG84}, and its extension to partition potentials \cite{MJW13}, potentials whose gradients grow rapidly in regions distant from the molecule are not covered. This result, as illustrated in Section 4, is related to the uncertainty in the estimation of the partition potential near the boundaries of the simulation box. \section{Numerical TD Potentials} TDDFT, upon which our formulation is built, concerns itself with the simplification of the problem: \begin{equation} (\mr{i}\partial_t -\hat{H}^{\lambda}[v](t))|\psi(t)\rangle=0,\quad \rket{\psi(0)}=\rket{\psi_0}~, \end{equation} where \begin{equation} \hat{H}^{\lambda}[v](t)=\hat{T}+\lambda \hat{W}+\int \mr{d}^3 \mb{r}~ \hat{n}(\mb{r})v(\mb{r}t)~. \end{equation} Runge and Gross \cite{RG84} showed that if $v$ is Taylor-expandable around $t=0$ and does not have physical anomalies at the boundaries, then $v$ determines $n$ uniquely up to a TD constant in the potential; this theorem (recently used to formulate quantum control in TDDFT \cite{CWG12}) can be extended to include non-analytic potentials \cite{RvL11}. 
Let us denote the RG map as $\Lambda_{\psi_0}^{\lambda}$; thus, $n(t)=\Lambda_{\psi_0}^{\lambda}[v](t)$. The operator $\hat{W}$ can stand for different types of electron-electron interactions, such as screened coulombic repulsion. If $\lambda=0$, then the electrons are non-interacting. Suppose a well-behaved density, $n^{\mr{ref}}$, and an initial state, $\psi_0$, are known. If $v_1$ and $v_0$ exist, where $v_{\lambda}(t)=(\Lambda_{\psi_0}^{\lambda})^{-1}[n^{\mr{ref}}](t)$, then the Hartree-exchange-correlation potential for the system, by definition, is $v_{\mr{HXC}}=v_0-v_1$. Using the exact map $\Lambda_{\psi_0}$ would require solving the TD Schr\"odinger equation, which is what one wants to avoid in practical calculations. For the development of functionals, exploration of the map $\Lambda_{\psi_0}^{\lambda}$ is fruitful; this map could be investigated by solving the problem $n^{\mr{ref}}(t)-\Lambda_{\psi_0}^{\lambda}[v](t)=0$, which is a root-finding problem. The first-order response of the density to some perturbation $\delta v$ is $\delta n(\mb{r}t)=\int \mr{d}^3\mb{r}'\mr{d} t'~\chi(\mb{r}t,\mb{r}'t') \delta v(\mb{r}'t')$. The response function $\chi$ decays in the asymptotic regions because large perturbations of $v$ in those regions produce only a small response in $n$. The inverse map between densities and potentials is therefore ill-conditioned, which introduces instabilities in root-finding algorithms aimed at finding the potential that reproduces a given density. In the ground-state case, the instabilities can be mitigated by enforcing satisfaction of eigenvalue constraints. For three-dimensional applications, in general, capturing the asymptotic regions is difficult when Gaussian basis sets are used because they do not have the correct asymptotic behavior. 
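The ill-conditioning can be illustrated with a toy response kernel in numpy. The kernel below is an assumed stand-in for the physical response function, chosen only to decay in the asymptotic region; inverting it amplifies a tiny perturbation of the density into a large change of the potential.

```python
import numpy as np

# Toy response kernel chi(x, x'): smooth, symmetric, and decaying far
# from the system (an illustrative stand-in, not the physical chi).
x = np.linspace(-10.0, 10.0, 40)
X, Y = np.meshgrid(x, x)
chi = np.exp(-0.5 * (X - Y) ** 2) * np.exp(-0.2 * (np.abs(X) + np.abs(Y)))

print(f"cond(chi) = {np.linalg.cond(chi):.2e}")  # badly conditioned

# A tiny, noisy change in the density maps to a wildly different
# potential when the response map is inverted naively.
rng = np.random.default_rng(1)
dn = np.exp(-x ** 2)                       # smooth density change
noise = 1e-8 * rng.standard_normal(x.size)
dv_clean = np.linalg.solve(chi, dn)
dv_noisy = np.linalg.solve(chi, dn + noise)
amplification = np.linalg.norm(dv_noisy - dv_clean) / np.linalg.norm(noise)
print(f"noise amplification = {amplification:.2e}")
```

The amplification factor is bounded below by the reciprocal of the smallest singular value of the kernel, which is why noise in the asymptotic regions dominates naive inversions.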
Instead of looking for the exact root, one can solve a minimization problem: \begin{equation} \min_{v\in \mc{V}}\int_0^T \lVert n^{\mr{ref}}(s)-\Lambda^{\lambda}_{\psi_0}[v](s)\rVert^2~ \mr{d} s~, \end{equation} where $\lVert f \rVert$ is a suitable spatial norm, and $\mc{V}$ is a space of physical potentials. The quantities $n^{\mr{ref}}(t)$ and $\langle \psi[v](t)|\hat{n}(\mb{r})|\psi[v](t)\rangle$ need to be approximated. Let us write $n^{\mr{ref}}(t)-\Lambda_{\psi_0}^{\lambda}[v](t)=\tilde{n}^{\mr{ref}}(t)-\tilde{\Lambda}_{\psi_0}^{\lambda}[v](t)+ \epsilon[n^{\mr{ref}},v]$, where $\tilde{n}^{\mr{ref}}(t)$ is the approximation to $n^{\mr{ref}}(t)$ and $\tilde{\Lambda}_{\psi_0}^{\lambda}[v]$ is the approximation to $\Lambda_{\psi_0}^{\lambda}[v]$. If $v^*$ is the exact potential representing $n^{\mr{ref}}$, then the problem becomes $\tilde{n}^{\mr{ref}}= \tilde{\Lambda}_{\psi_0}^{\lambda}[v^*]+\epsilon$. Because we cannot use exact methods to determine $n^{\mr{ref}}$ and $\Lambda_{\psi_0}^{\lambda}$, we assume that $\epsilon$ is a random function of the space-time coordinates. Moreover, one would expect that $\tilde{n}^{\mr{ref}}$ and $\tilde{\Lambda}_{\psi_0}^{\lambda}$ have smooth space-time gradients, and that $\epsilon$ displays autocorrelation because the spacing between points is arbitrarily small. \subsection{Estimation of the Partition Potential} Let $\mc{V}_{\mr{p}}$ be a space of TD partition potentials, and $\mc{D}$ a space of TD densities, and define the map: \begin{equation} \Lambda_{S_0}:\mc{V}_{\mr{p}}\rightarrow\mc{D}~, \end{equation} where $S_0=\{\psi_{\alpha,0},v_{\alpha}\}$. For a given TD partition potential, the density is obtained by evaluation of the above map at the given partition potential; in other words, $n(t)=\Lambda_{S_0}[v_{\mr{p}}](t)$. This map depends on the history of the partition potential, i.e., it has memory dependence \cite{MJW13}. Let $v_{\mr{p}}^*$ be the true partition potential. 
We assume that, due to numerical errors, the estimate of the reference density, $\tilde{n}^{\mr{ref}}$, is of the form $\tilde{n}^{\mr{ref}}=\tilde{\Lambda}_{S_0}[v_{\mr{p}}^*]+\epsilon$, where $\epsilon$ is a {\slshape random function}. To estimate the partition potential corresponding to $\tilde{n}^{\mr{ref}}$ we minimize: \begin{equation} \lVert e[v_{\mr{p}}]\rVert^2_{\mu}=\lVert \tilde{n}^{\mr{ref}}-\tilde{\Lambda}_{S_0}[v_{\mr{p}}]\rVert^2_{\mu}~, \end{equation} where $\mr{d}\mu(\mb{r},t)$ is the measure. Since $\epsilon$ is a function, its probability density function (PDF) is a {\slshape functional}. The PDF depends on parameters, denoted collectively as $\Theta$, and the PDF itself is denoted as $D([\epsilon]|\Theta)$. The probability that $\epsilon$ is observed in a set $\mc{U}$ is given by the path integral: \begin{equation} P(\epsilon\in \mc{U}|\Theta)=\int_{\mc{U}}\mr{d} m_{\mr{L}}[\epsilon]~ D([\epsilon]|\Theta)~, \end{equation} where $m_{\mr{L}}$ is the measure over the space of errors. The traditional methods of non-linear regression can be applied to estimate the best parameters of the distribution, $\Theta^*$, for a given set of observations. A Taylor expansion in terms of the parameters can then be used to generate their PDF, which in turn can be used to estimate the error in the parameters. In this case, the parameters are the variance and the partition potential. In the next section, we will expand the partition potential in a spline basis set. The parameters are the values of the partition potential at the knots, and they are correlated: a perturbation of the partition potential at one knot affects the response of the density at other knots. Hence, we must employ a model of correlated errors. Finding the correct model is a demanding task beyond the scope of this work. 
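The regression-and-linearization step mentioned above can be illustrated on a toy scalar model. The sketch below fits one parameter by Gauss-Newton iteration and estimates its variance from the first-order Taylor expansion of the residual; the exponential model, the data, and the noise level are all invented for the illustration.

```python
import numpy as np

# Toy non-linear regression: fit theta in y = exp(-theta * x) by
# Gauss-Newton, then estimate the parameter variance from the
# linearized (first-order Taylor) model.
rng = np.random.default_rng(2)
xdata = np.linspace(0.0, 2.0, 50)
theta_true, sigma = 1.5, 0.01
ydata = np.exp(-theta_true * xdata) + sigma * rng.standard_normal(xdata.size)

theta = 1.0                                 # initial guess
for _ in range(20):
    resid = ydata - np.exp(-theta * xdata)  # current residuals
    J = -xdata * np.exp(-theta * xdata)     # d(model)/d(theta)
    theta += np.sum(J * resid) / np.sum(J * J)   # Gauss-Newton update

# Asymptotic error estimate: var(theta) ~ sigma_hat^2 / (J^T J).
resid = ydata - np.exp(-theta * xdata)
J = -xdata * np.exp(-theta * xdata)
sigma2_hat = np.sum(resid ** 2) / (xdata.size - 1)
var_theta = sigma2_hat / np.sum(J * J)
print(theta, np.sqrt(var_theta))
```

In the partition-potential problem the single parameter is replaced by the knot values (plus the variance), and the uncorrelated-noise assumption above must be replaced by a correlated-error model, as discussed in the text.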
For this reason, we choose a biased model based on the following observations: i) A measure of the error of the form $ \int \mr{d}^3{\bf r}\mr{d} t~(\tilde{n}^{\mr{ref}}({\bf r},t)-\tilde{\Lambda}_{S_0}[v_{\mr{p}}])^2 $ suffers from autocorrelation. ii) Far from the molecule, the partition potential has little influence on the density. iii) Estimating the density is not sufficient; its spatio-temporal gradient is an important quantity. An error measure accounting for these observations is: \begin{equation}\label{err_fun} \lVert e[v_{\mr{p}}]\rVert^2_{\mu}= \int \mr{d}\mu({\bf r},t)\{ |\nabla e({\bf r},t)|^2+(\partial_t e({\bf r},t))^2\}~. \end{equation} Based on ii), we choose a measure of the form $\mr{d}\mu({\bf r},t)=\mr{d}^3{\bf r}\mr{d} t~\sum_i \tilde{n}^{\mr{ref}}({\bf r}_i,t)\delta({\bf r}-{\bf r}_i)$, where $\{{\bf r}_i\}$ are points selected in such a way that $|\nabla e|^2+(\partial_t e)^2$ resembles a $\chi^2$-distribution. To apply this measure of error in the next section, we transform the above measure into a vector norm. Then, the resulting distribution is expanded in terms of the partition potential evaluated at the knots, and asymptotic analysis is applied \cite{SW89}, leading to the random variables required to reproduce the density within a small error tolerance. \subsection{1d Electron in a Double-well Potential}\label{texample} Let us revisit the example considered in Ref. \cite{MJW13}: a one-dimensional ``electron'' in a double-well potential. The TD partition equations are: \begin{equation} \mr{i}\partial_t \phi_{\alpha}(x,t)=\Big(-\frac{1}{2}\partial^2_x+v_{\alpha}(x)+v_{\mr{p}}(x,t)\Big)\phi_{\alpha}(x,t)~, \end{equation} where $\alpha=\mr{L},~\mr{R}$, standing for the left and right wells. The potentials are $v_{\alpha}(x)=v_0/\sqrt{(x-x_{\alpha})^2+a}$; the parameters are $v_0=-1$, $x_{\mr{R}}-x_{\mr{L}}=4$, and $a=1$. 
The density is obtained by averaging over the orbital densities of both wells: \begin{equation} n(x,t)=\frac{1}{2}|\phi_{\mr{L}}(x,t)|^2+\frac{1}{2}|\phi_{\mr{R}}(x,t)|^2~. \end{equation} Suppose that the supermolecule evolves from the ground state driven by a monochromatic laser. The evolution of the system is thus dictated by the solution of: \begin{equation} \mr{i}\partial_t\psi(x,t)=\Big(-\frac{1}{2}\partial^2_x+v(x)+v_{\mr{d}}(x,t)\Big)\psi(x,t)~, \end{equation} where $v_{\mr{d}}(xt)=E_0x\sin \omega t$, and the external potential is $v=v_{\mr{L}}+v_{\mr{R}}$. The density obtained from the above evolution equation is $n^{\mr{ref}}(xt)=|\psi(xt)|^2$, which is the target density we wish to represent. \begin{figure}[tbh!] \centering \subfigure[$v$ and $v_{\mr{p}}^0$]{ \includegraphics[scale=0.20]{ini_vp.eps}} \subfigure[Initial Densities]{ \includegraphics[scale=0.20]{fragden_t_0.eps}} \\ \subfigure[$v_{\mr{p}}(t=1.0)$]{ \includegraphics[scale=0.20]{vp_t_10.eps}} \subfigure[$n(t=1.0)$]{ \includegraphics[scale=0.20]{fragden_t_10.eps}}\\ \subfigure[$v_{\mr{p}}(t=8.0)$]{ \includegraphics[scale=0.20]{vp_t_80.eps}} \subfigure[$n(t=8.0)$]{ \includegraphics[scale=0.20]{fragden_t_80.eps}} \caption{Snapshots of the partition potential. In a), solid line: initial partition potential, dashed line: left fragment external potential, dashed-dotted line: right fragment external potential. In b), d), and f), solid lines: left electronic density, dashed lines: right electronic density. In d) and f) the dashed-dotted line is the total density.} \end{figure} The laser parameters are $\omega=0.3$ and $E_0=0.05$. We propagate the states of the system using the Crank-Nicolson method; the time step is 0.1, the box length is 20, the spatial step is 0.2, and the total propagation time is 10 units. The partition potential is represented in a smoothed cubic spline basis set with 22 knots equally spaced in the box. 
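The reference propagation just described can be reproduced in a few lines. The sketch below uses the parameters quoted above ($v_0=-1$, $a=1$, $x_{\mr{R}}-x_{\mr{L}}=4$, box length 20, $\Delta x=0.2$, $\Delta t=0.1$, $\omega=0.3$, $E_0=0.05$); the finite-difference construction of the initial ground state is our own choice.

```python
import numpy as np

# Double-well parameters from the example.
v0, a = -1.0, 1.0
xL, xR = -2.0, 2.0
x = np.linspace(-10.0, 10.0, 101)   # box length 20, dx = 0.2
dx = x[1] - x[0]
N = x.size

v = v0 / np.sqrt((x - xL) ** 2 + a) + v0 / np.sqrt((x - xR) ** 2 + a)

# Finite-difference Hamiltonian; its ground state is the initial state.
K = (np.diag(np.full(N, 1.0 / dx ** 2))
     - np.diag(np.full(N - 1, 0.5 / dx ** 2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx ** 2), -1))
H0 = K + np.diag(v)
_, vecs = np.linalg.eigh(H0)
psi = vecs[:, 0].astype(complex) / np.sqrt(dx)   # sum |psi|^2 dx = 1

# Crank-Nicolson steps under the laser v_d(x,t) = E0*x*sin(omega*t),
# with the Hamiltonian evaluated at the midpoint of each step.
E0, omega, dt, T = 0.05, 0.3, 0.1, 10.0
I = np.eye(N)
for k in range(int(round(T / dt))):
    tm = (k + 0.5) * dt
    H = H0 + np.diag(E0 * x * np.sin(omega * tm))
    psi = np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

n_ref = np.abs(psi) ** 2        # target density n_ref(x, t=10)
print(np.sum(n_ref) * dx)       # norm is conserved, ~1.0
```

Because the Crank-Nicolson propagator is unitary for a Hermitian Hamiltonian, the norm of the density is conserved to machine precision, which is a useful sanity check on the reference data.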
The initial partition potential is estimated by minimizing the error using sequential quadratic programming (other useful methods employ the density error directly \cite{RABB15}). To obtain the initial densities, the error functional of Eq. (\ref{err_fun}) (with the time dependence dropped) is minimized. First, the problem $(-1/2\partial_x^2 +v_{\alpha}+v_{\mr{p}}^0)\phi_{n,\alpha}=\epsilon_{n,\alpha}\phi_{n,\alpha}$ is solved (with the finite-difference method) for both wells with some estimate of $v_{\mr{p}}^0$. Then, the density is compared with that of the reference system in order to propose the next estimate in the iterative procedure of sequential quadratic programming; the constraints are conservation of charge and chemical-potential (HOMO) equalization \cite{MW14}. Figure 1.a shows the initial partition potential and the external potentials of each well. The initial fragment densities that add up to the ground-state density of the supermolecule are displayed in Figure 1.b. The evolution of $v_{\mr{p}}$ is estimated using the step-by-step scheme proposed in Ref. \cite{MJW13} and the norm of Eq. (\ref{err_fun}): for example, for $\Delta t=0.1$, the error norm can be approximated as \begin{equation} \Delta t \sum_i \{\tilde{n}^{\mr{ref}}(x_i,\Delta t)[(\partial_x e(x_i,\Delta t))^2+(\partial_t e(x_i,\Delta t))^2]\}~. \end{equation} The numerical value of the above quantity depends on the value of $v_{\mr{p}}$ at the spline knots and at $t=\Delta t/2$. Hence, this value is varied until the above function is minimized and the total density of the fragments matches that of the true system. The procedure is repeated similarly for the rest of the propagation. Figure 1.c shows the partition potential at $t=1.0$; it is localized in the intermediate region between the fragments. The fragment electronic densities (Figure 1.d) are also well localized at $t=1.0$. 
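The inner evaluation of this procedure — solve the fragment eigenproblems for a trial $v_{\mr{p}}^0$ given at the knots and score the summed density against the reference — can be sketched as follows. This is a simplified stand-in: linear interpolation between the knots replaces the cubic spline, and a plain least-squares density error replaces the gradient-based norm and the SQP constraints.

```python
import numpy as np

v0, a = -1.0, 1.0
xL, xR = -2.0, 2.0
x = np.linspace(-10.0, 10.0, 101)
dx = x[1] - x[0]
N = x.size
K = (np.diag(np.full(N, 1.0 / dx ** 2))
     - np.diag(np.full(N - 1, 0.5 / dx ** 2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx ** 2), -1))

def ground_density(vpot):
    """Density of the lowest finite-difference eigenstate of a 1d potential."""
    _, vecs = np.linalg.eigh(K + np.diag(vpot))
    phi = vecs[:, 0] / np.sqrt(dx)
    return phi ** 2

vL = v0 / np.sqrt((x - xL) ** 2 + a)
vR = v0 / np.sqrt((x - xR) ** 2 + a)
n_ref = ground_density(vL + vR)     # supermolecule ground-state density

knots = np.linspace(-10.0, 10.0, 22)

def density_error(vp_knots):
    """Score a trial v_p given at the knots: solve both fragment
    eigenproblems and compare the averaged density with the reference."""
    vp = np.interp(x, knots, vp_knots)   # linear stand-in for the spline
    n_frag = 0.5 * ground_density(vL + vp) + 0.5 * ground_density(vR + vp)
    return float(np.sum((n_ref - n_frag) ** 2) * dx)

print(density_error(np.zeros(22)))   # nonzero: v_p = 0 is not optimal
```

An outer optimizer (SQP in the text) would vary the 22 knot values to drive this score to its minimum while enforcing the charge and chemical-potential constraints.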
Because in the absence of the partition potential the fragment densities would simply remain localized around their wells, the partition potential must be such that it induces the transfer of charge from the right fragment into the left fragment (Figure 1.e). However, as we note in Figure 1.f, the charge transfer in this case is represented by the spreading of the right electronic density into the left one, not by a change in the fragment populations. Two observations: i) If one were to assign a grid that is fine around the centers of the wells and coarser away from them, then, to describe the density spreading, the grid would need to be time-dependent. ii) The partition potential must induce the charge transfer, acting like a ``spoon''. \begin{figure}[tbh!] \centering \includegraphics[scale=0.35]{errors.eps} \caption{Error-bar plot of the partition potential at $t=6.2$. Solid line: interpolated optimal potential, circles: interpolation knots.} \end{figure} The result of the error estimation of the partition potential at $t=6.2$ is shown in Figure 2. As expected, the error is quite significant in the boundary regions, where the error bars extend well beyond the plotting range. This implies that the shape of the potential in these regions is not reliable. All space-time points obeying causality are coupled. For example, variation of a single knot in the boundary affects its neighbors, introducing large gradient fluctuations. Thus, the error can spread to regions where the density is non-negligible. This can cause instabilities in the minimization procedure if a norm such as $\int \mr{d}^3\mb{r}\mr{d} t~ (\tilde{n}^{\mr{ref}}(\mb{r},t)-n(\mb{r},t))^2$ is employed. For this reason we recommend the use of derivatives of the density to measure the error. \section{Variable Occupation Numbers} Let us assign variable electron-occupation numbers to the fragments. 
First, divide the total propagation time into blocks: $[0,\tau)\cup [\tau,2\tau)\cup\ldots\cup [(m-1)\tau,m\tau)$, where $m\tau$ is the final time of the propagation, and let \begin{equation} X_{\alpha}=\{|\xi_{\alpha}^0\rangle,|\xi_{\alpha}^1\rangle,\ldots,|\xi_{\alpha}^{m-1}\rangle\}~, \end{equation} be a set of instantaneous kets, in Fock space, for fragment $\alpha$. At a single time $t=k\tau$, the following minimization is performed to obtain the set of kets describing the density of the fragmented molecule: \begin{equation}\label{pdftmin} \begin{split} \{|\xi_{\alpha}^k\rangle\}_{\alpha=1}^{N_{\mr{frag}}}= \arg &\min\Big\{\sum_{\alpha}\langle\psi_{\alpha}^k| \hat{H}_{\alpha}^0|\psi_{\alpha}^k\rangle ~~\mr{s.t.}\\& \sum_{\alpha}\langle \psi_{\alpha}^k|\hat{n}(\mb{r})| \psi_{\alpha}^k\rangle= n(\mb{r},k\tau),~\forall \mb{r}\Big\}~. \end{split} \end{equation} The occupation numbers of fragment $\alpha$ are formally expressed as $ |\nu_{\alpha,M}(k\tau)|^2=|\langle \xi^k_{\alpha,M}|\xi^k_{\alpha}\rangle|^2~.$ Here, $|\xi^k_{\alpha,M}\rangle$ is an optimal ket (obtained from solving the right-hand side of Eq. (\ref{pdftmin})) for fragment $\alpha$ with $M$ electrons. These numbers are density-functionals. The evolution operator of fragment $\alpha$ is: $ \hat{U}_{\alpha}[v_{\mr{p}}](t_1,t_2)=\mc{T}\exp(-\mr{i}\int_{t_2}^{t_1}\mr{d} s~\hat{H}_{\alpha}[v_{\mr{p}}](s))~. $ Introduce the displaced set of kets: \begin{equation} \begin{split} \tilde{X}_{\alpha}&=\{\hat{U}_{\alpha}(\tau,0)|\xi_{\alpha}^0\rangle,\\&\hat{U}_{\alpha}(2\tau,\tau)|\xi_{\alpha}^1\rangle,\ldots, \hat{U}_{\alpha}(m\tau,(m-1)\tau)|\xi_{\alpha}^{m-1}\rangle\}~. \end{split} \end{equation} Now let us define the following dyadic product: $(X_{\alpha}\tilde{X}^{\dagger}_{\alpha})(k)=|\xi_{\alpha}^k\rangle\langle \tilde{\xi}_{\alpha}^{k-1}|$. 
The symbol $X_{\alpha}\tilde{X}_{\alpha}^{\dagger}$ denotes the set of dyadic products whose $k$-th component is the dyadic product between the ket at the beginning of the $k$-th block and the displaced ket from the $(k-1)$-th block. Now, let $\hat{B}_{\alpha}$ be the TD operator: \begin{equation} \begin{split} \hat{B}_{\alpha}(t)&=(W_{\tau}*\ln X_{\alpha}\tilde{X}^{\dagger}_{\alpha})(t)\\ &=\sum_{k=1}^{m}\delta(t-k\tau)\ln |\xi_{\alpha}^k\rangle\langle \tilde{\xi}_{\alpha}^{k-1}|~, \end{split} \end{equation} where $W_{\tau}$ is the Dirac-comb kernel. Addition of the operator $\hat{B}_{\alpha}$ to the Hamiltonian $\hat{H}_{\alpha}(t)$ yields the non-Hermitian operator: \begin{equation} \hat{H}_{\mr{c},\alpha}[v_{\mr{p}}](t)=\hat{H}_{\alpha}[v_{\mr{p}}](t)+\mr{i}\hat{B}_{\alpha}[v_{\mr{p}}](t)~. \end{equation} The evolution of the system is now determined by $|\psi_{\alpha}[v_{\mr{p}}]\rangle$, which obeys \begin{equation} i\partial_t|\psi_{\alpha}[v_{\mr{p}}](t)\rangle=\hat{H}_{\mr{c},\alpha}[v_{\mr{p}}](t)|\psi_{\alpha}[v_{\mr{p}}](t)\rangle~. \end{equation} The total density is $n(\mb{r}t)=\sum_{\alpha}\langle \psi_{\alpha}(t)|\hat{n}(\mb{r})|\psi_{\alpha}(t)\rangle$ and the number of particles in fragment $\alpha$ is $N_{\alpha}(t)=\langle \psi_{\alpha}(t)|\hat{N}|\psi_{\alpha}(t)\rangle$. In general, any observable, $\hat{O}(t)$, is expressed as a functional of the partition potential, $\langle \psi_{\alpha}[v_{\mr{p}}](t)|\hat{O}(t)|\psi_{\alpha}[v_{\mr{p}}](t)\rangle$. Given the partition potential and occupation numbers as density-functionals, the scheme to determine the evolution of the molecule is the following: First, propagate the kets $\{\rket{\psi_{\alpha}}\}$ in the interval $[0,\tau)$ with fixed populations on each fragment. Then, at $t=\tau$, obtain new occupation numbers according to Eq. (\ref{pdftmin}) as well as new states to propagate; continue the propagation in the block $[\tau,2\tau)$. 
The procedure continues similarly for the rest of the propagation. The density of the system is then obtained as $n({\bf r},t)=\sum_{\alpha}\lket{\psi_{\alpha}(t)}\hat{n}({\bf r})\rket{\psi_{\alpha}(t)}$. The theorem discussed in Section 2 also applies in this case. Therefore, the partition potential for this scheme is uniquely determined by the TD electronic density, up to an arbitrary constant. \begin{figure}[htb!] \centering \subfigure[$N_L$]{ \includegraphics[scale=0.25]{surf_occ.eps}}\\ \subfigure[$v_{\mr{p}}(t=40)$]{ \includegraphics[scale=0.20]{vp_t_40.eps}} \subfigure[$n(t=40)$]{ \includegraphics[scale=0.20]{fragden_t_40.eps}} \subfigure[$v_{\mr{p}}(t=70)$]{ \includegraphics[scale=0.20]{vp_t_70.eps}} \subfigure[$n(t=70)$]{ \includegraphics[scale=0.20]{fragden_t_70.eps}} \caption{Evolution of the fragments with TD electron populations. In a) the solid line is the result from the inversion, the dashed line is the result from the two-state approximation, and the dashed-dotted line is obtained by omitting the gluing potential: $v_{\mr{G}}=0$. In c) and e), solid line: $n_{\mr{L}}$, dashed line: $n_{\mr{R}}$, dashed-dotted: $n$.} \end{figure} The partition potential is discontinuous at the relaxation nodes (points where $t$ is an integer multiple of $\tau$). Discontinuities in time can be eliminated by using an integral transformation that smooths the observable at the relaxation nodes. In practice, however, it is convenient to propagate the occupation numbers and the gluing potential assuming that they are continuously differentiable functions of time. It can be shown, assuming that the dynamics of the occupation numbers is much slower than that of the partition potential, that the 1-1 map between the former and the density still holds. This follows from the scheme we have shown here because the electronic populations are fixed in the first block, allowing us to apply the Runge-Gross theorem in that block. 
A density-functional approximation to the occupation numbers is needed to apply the theory. The dynamics of the occupation numbers can be investigated using master equations, where the rate coefficients are determined by Fermi's golden rule, or transition elements that couple the fragments. Here, we illustrate a simple approach: A trial wave function to investigate the evolution of the occupation numbers is $\rket{\eta(t)}=a_{\mr{L}}(t)\rket{\varphi_{\mr{L}}} +a_{\mr{R}}(t)\rket{\varphi_{\mr{R}}}$, where $\rket{\varphi_{\alpha}}$ is the ground state of the electron described only by $\hat{H}_{\alpha}^0$ (in coordinate representation, this Hamiltonian is $-1/2\partial^2_x+v_{\alpha}(x)$). The dynamics of electron transfer is governed by a two-component wave-function $\mb{a}=(a_{\mr{L}},a_{\mr{R}})^{\mr{T}}$. We assume that the Hamiltonian coupling the two fragments is of the form: \begin{equation} \hat{\mc{H}}(t)=\hat{H}_{\mr{f}}^0+\int \mr{d} x~(v_{\mr{G}}(x,0)+v_{\mr{d}}(x,t))\hat{n}(x) \end{equation} where $\hat{H}_{\mr{f}}^0=\hat{H}_{\mr{L}}^0\oplus\hat{H}_{\mr{R}}^0$ is the uncoupled Hamiltonian; $\hat{H}_{\alpha}^0\rket{\varphi_{\beta}}=0$ if $\alpha\neq\beta$. For the sake of illustration, the gluing field is frozen, serving as a ``bridge'' for the charge to be transferred from one well into the other.
From the evolution equation: $\mr{i} \partial_t\rket{\eta(t)}=\hat{\mc{H}}(t)\rket{\eta(t)}$ we infer that the state vector, $\mb{a}$, satisfies: \begin{equation}\mr{i}\partial_t \mb{a}(t)=\mb{S}^{-1}(\bm{\epsilon}_0+\bm{\Delta}(t))\mb{a}(t) \end{equation} where $S_{\alpha\beta}=\int \mr{d} x~\varphi_{\alpha}^*(x)\varphi_{\beta}(x)$, $\bm{\epsilon}_0=\mr{diag}(\epsilon_0,\epsilon_0)$, and \begin{equation} \Delta_{\alpha\beta}(t)=\int \mr{d} x~ \varphi_{\alpha}^*(x)(v_{\mr{G}}(x,0)+v_{\mr{d}}(x,t))\varphi_{\beta}(x)~.\end{equation} The occupation numbers are obtained from the ``density'' of $\mb{a}$: $N_{\alpha}(t)=|a_{\alpha}(t)|^2+\mr{Re}(a^*_{\mr{L}}(t)a_{\mr{R}}(t)S_{\mr{LR}})$. The last term arises from the overlap of the functions $\varphi_{\mr{L}}$ and $\varphi_{\mr{R}}$, guaranteeing that $N_{\mr{L}}+N_{\mr{R}}=1$. The example of the previous section, summarized in Fig. 1, illustrates how the partition potential can be estimated even under a strong laser field. We now return to the example of section \ref{texample} and allow for variable occupations. The parameters for the propagation now are $\tau=2$, $\Delta t=1$, $\omega=0.3$, $E_0=0.02$. Comparison of the exact time dependence of the average number of electrons of the left fragment with the approximation described above is shown in Figure 3.a. The initial gluing potential used here is that shown in Fig. 1.a. The two-state approximation works well at short times, and displays deviations after $t=20$. It would be quite challenging to capture the results of the two-state approximation by fixing the occupation numbers and finding the corresponding partition potential. Improvements over the two-state approximation can proceed by either refining the gluing potential (going beyond the frozen approximation) or increasing the number of states considered to couple the fragments. The first alternative has the advantage that the resulting equations can be solved very quickly.
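A minimal sketch of this two-state propagation, using classic RK4 for $\mr{i}\partial_t\mb{a}=\mb{S}^{-1}(\bm{\epsilon}_0+\bm{\Delta})\mb{a}$; the orthogonal fragments, constant symmetric coupling, and all numbers below are illustrative assumptions, not the parameters of the text.

```python
import numpy as np

def occupations(a, S_LR):
    # N_alpha = |a_alpha|^2 + Re(a_L^* a_R) S_LR ; guarantees N_L + N_R = 1
    cross = np.real(np.conj(a[0]) * a[1]) * S_LR
    return np.array([abs(a[0]) ** 2 + cross, abs(a[1]) ** 2 + cross])

def propagate(a0, eps0, Delta_of_t, S, t_end, dt=1e-3):
    """Integrate i da/dt = S^{-1} (eps0 + Delta(t)) a with classic RK4."""
    Sinv = np.linalg.inv(S)
    rhs = lambda t, a: -1j * Sinv @ ((eps0 * np.eye(2) + Delta_of_t(t)) @ a)
    a, t = a0.astype(complex), 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = rhs(t, a)
        k2 = rhs(t + dt / 2, a + dt / 2 * k1)
        k3 = rhs(t + dt / 2, a + dt / 2 * k2)
        k4 = rhs(t + dt, a + dt * k3)
        a += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return a

# illustrative run: orthogonal fragments (S = I), symmetric wells, weak bridge
V = 0.1
a = propagate(np.array([1.0, 0.0]), 0.0,
              lambda t: np.array([[0.0, V], [V, 0.0]]), np.eye(2),
              t_end=np.pi / (2 * V))  # half a Rabi period: full transfer
```

For this constant coupling the populations follow the Rabi law $N_{\mr{L}}(t)=\cos^2(Vt)$, so at $t=\pi/(2V)$ the charge has moved entirely to the right fragment.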
Nonetheless, for functional development, the gluing potential is also a determining factor for the evolution of the shape of the electronic fragment-density ($\int n_{\alpha}/N_{\alpha}$). Figure 3.b shows a snapshot of the ``exact'' partition potential at $t=40$. In contrast with the results of section 3.2, the partition potential is now well localized (Figures 3.d and 3.f). This suggests that the standard methods of ground-state PDFT can be used to estimate the partition potential through the use of the adiabatic approximation. The fragment densities also remain localized (Figures 3.c, 3.e, and 3.g). Qualitatively, the partition potential accounts for the shape of the electronic densities of the fragments, while the occupation numbers are responsible for their height. \section{Concluding Remarks} We formulated a TDDFT for treating molecules as composed of smaller units. The successful application of this formulation requires approximations to the partition potential and the occupation numbers. This can be accomplished by a proper approximation to the Hamiltonians $\{\hat{H}_{{\mr{c}},{\alpha}}(t)\}$, or the auxiliary evolution equations of the electron populations in the fragments. We also presented an error analysis, which leads to a simple way of estimating the errors in the potentials. The partition potential, obtained by numerical inversion, can be uncertain in spatio-temporal regions where the density is small. However, as time increases, the error can propagate from the boundary areas into regions where the density is high. \section{Acknowledgments} We acknowledge support from the National Science Foundation CAREER program under grant No.CHE-1149968. AW also acknowledges support from the Alfred P. Sloan Foundation and the Camille Dreyfus Teacher-Scholar Awards Programs.
\section{Introduction} \subsection{Background} The Hard X-ray Modulation Telescope (HXMT) is a planned Chinese space X-ray telescope. The telescope will perform an all-sky survey in the hard X-ray band ($1$ -- $250\;\mathrm{keV}$), a series of deep imaging observations of small sky regions, and pointed observations. We expect a large number of X-ray sources, e.g., AGNs, to be detected in its all-sky survey. We also expect that X-ray transients can be detected through a series of deep imaging observations of the Galactic plane \citep{li2007,lu2012}. Therefore the point source detection performance is one of our concerns in HXMT data analysis. Methods and corresponding sensitivities of pointed observation have been discussed by \citet{jin2010}. In imaging observations such as the all-sky survey and deep imaging of small sky regions, a variety of data analysis aspects and methods are involved. \textbf{First, images are computed instead of recorded directly by the optical instrument.} Mapping as well as image reconstruction methods are useful. The direct scientific data products from imaging observations of HXMT are still scientific events, more specifically, X-ray photon arrival events. The attitude control system (ACS) of the spacecraft reports the attitudes periodically. We take these reported attitudes as nodes to perform certain interpolations to determine the instantaneous attitude for each scientific event. In this way, a set of parameters is assigned to each event, including the coordinates on the celestial sphere at which the telescope is pointing. Hence we call this process \textit{events mapping}, where scientific events are mapped from the time domain to the celestial sphere. The product of this process is referred to as the observed image, which implies the dimensionality of the data. \textbf{Second, the exposure to a specific source is limited more strictly,} thus the signal-to-noise ratio (SNR) is tightly restricted.
Hence the importance of regularization methods becomes prominent. \textbf{Finally, \textit{a picture is worth a thousand words}}. Various kinds of information can be extracted from an image by numerical methods. In this paper we investigate the point source detection performance of the imaging and detecting system synthesised from the telescope as well as diverse combinations of data analysis methods, especially the regularization methods. \subsection{Modulation in HXMT imaging observation} The PSF of HXMT HE, which is a composite telescope consisting of 18 collimated detectors, describes the response of the telescope to a point source when the telescope is pointing at the source as well as at different locations around it. In other words, the PSF is a density function of the distribution of responses that occur in different observation states, where a state is denoted by the instantaneous attitude of the telescope as a spacecraft. So the PSF takes the attitude of the telescope as its input. We use the proper Euler angles to describe the attitude of the telescope, i.e., $\phi$ and $\theta$, the longitude and latitude of the pointing, as well as $\psi$, the rotation angle of the telescope around its own pointing axis, namely, the position angle. The modulation equation that corresponds to the imaging observation over the sphere surface is \begin{equation} d(\phi,\theta,\psi) = \iint\limits_{\Omega} p(\phi,\theta,\psi,\phi',\theta') f(\phi',\theta')\cos\theta'\mathrm{d}\phi'\mathrm{d}\theta' \text{,} \end{equation} where $f(\phi',\theta')$ is the object function (i.e., the image) defined in a compact subset of the sphere surface $\Omega$, and $p(\phi,\theta,\psi,\phi',\theta')$ is the modulation kernel function that relates the value of the object function $f(\phi',\theta')$ defined on a neighbourhood of the point $(\phi',\theta')$ of the subset $\Omega$ to the instantaneous response of the telescope while its status is $(\phi,\theta,\psi)$.
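As a quick sanity check of this integral, a midpoint-rule quadrature sketch can be used (the grid sizes are arbitrary here): for a trivial kernel and object function, $p\equiv f\equiv 1$ over the whole sphere, the integral reduces to the sphere area $4\pi$.

```python
import numpy as np

def modulated_response(p, f, phi, theta):
    """Midpoint-rule quadrature of the modulation integral
    d = ∬ p * f * cos(theta') dphi' dtheta' for one fixed attitude;
    p and f are sampled on a regular (phi', theta') grid."""
    dphi, dtheta = phi[1] - phi[0], theta[1] - theta[0]
    return np.sum(p * f * np.cos(theta)[None, :]) * dphi * dtheta

# check: p = f = 1 over the whole sphere gives the sphere area 4*pi
nphi, ntheta = 400, 200
phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi                 # midpoints in [0, 2pi)
theta = -np.pi / 2 + (np.arange(ntheta) + 0.5) * np.pi / ntheta  # midpoints in [-pi/2, pi/2]
d = modulated_response(np.ones((nphi, ntheta)), np.ones((nphi, ntheta)), phi, theta)
```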
The modulation kernel function is determined by the point spread function (PSF) of the telescope. Suppose a unit point source is located at the zenith of the celestial sphere, i.e., the point $(0,0,1)$ in the corresponding Cartesian coordinate system. Fix the position angle $\psi$, slew the telescope across the polar cap, and assign the responses of the telescope to the unit source to pixels on the celestial sphere. We then obtain a map $P(\phi,\theta)$ that represents the PSF on the celestial sphere with fixed rotation angle $\psi$. The map is then projected to a tangent plane of the celestial sphere $z=1$ by the gnomonic mapping, i.e., \begin{equation} \left\{ \begin{aligned} u &= \cot\theta\cos\phi \\ v &= \cot\theta\sin\phi \end{aligned}\right. \label{eq-gnomonic}\text{,} \end{equation} where $u$ and $v$ are local Cartesian coordinates on the tangent plane. Now we have \begin{equation} P_{\tan}(u,v) = P_{\tan}(\cot\theta\cos\phi,\cot\theta\sin\phi) = P(\phi,\theta)\text{,} \end{equation} i.e., the PSF defined on the tangent plane.
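The gnomonic mapping of Eq. (\ref{eq-gnomonic}) and its inverse are simple to sketch; here $\theta$ is the latitude, so the zenith $\theta=\pi/2$ maps to the origin of the tangent plane.

```python
import math

def gnomonic(phi, theta):
    """Project longitude/latitude (phi, theta) onto the tangent plane
    z = 1 at the zenith: u = cot(theta) cos(phi), v = cot(theta) sin(phi)."""
    cot = math.cos(theta) / math.sin(theta)
    return cot * math.cos(phi), cot * math.sin(phi)

def gnomonic_inverse(u, v):
    # invert: r = cot(theta) so theta = atan2(1, r); phi = atan2(v, u)
    return math.atan2(v, u), math.atan2(1.0, math.hypot(u, v))

# round trip for a point 21 degrees away from the zenith
phi0, theta0 = 0.3, math.pi / 2 - math.radians(21.0)
u, v = gnomonic(phi0, theta0)
phi1, theta1 = gnomonic_inverse(u, v)
```

The projection is valid only on the hemisphere facing the tangent plane, which is sufficient here since the PSF has a small angular diameter.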
To provide for data analysis we have two discrete forms of $P_{\tan}$, \begin{equation} P_{i,j} = \frac{\iint_{\alpha_{i,j}} P_{\tan}(u,v)\mathrm{d}u\mathrm{d}v} {\iint_{\alpha_{i,j}} \mathrm{d}u\mathrm{d}v}\text{,} \end{equation} where $\alpha_{i,j}$ is the neighbourhood of the pixel $(i,j)$ of the 2-D pixel grid, and the normalized form \begin{equation} H_{i,j} = \frac{\iint_{\alpha_{i,j}} P_{\tan}(u,v)\mathrm{d}u\mathrm{d}v} {\iint P_{\tan}(u,v)\mathrm{d}u\mathrm{d}v}\text{.} \label{eq-pixelization} \end{equation} Given the discrete image $\vect{F}$, the detection area $A$ and the duration of exposure on each pixel $\tau$, the observed data is \begin{equation} \vect{D} = \tau \cdot \left[\frac{A}{\max\limits_{i,j}H_{i,j}} \left(\vect{F} \ast \vect{H}\right) + r_b\right] \text{,} \label{eq-modulation} \end{equation} where $r_b$ is the constant background count rate, $\ast$ denotes the convolution, and $\vect{H}$ is the normalized PSF on the tangent plane. Eq. \ref{eq-modulation} approximates modulations in HXMT imaging observations. Distortion occurs when projecting a set of points from (or to) a sphere surface to (or from) a plane. For example, the distance between any two points, the area of any continuous subset, and the angle between any two crossing lines (or tangents of curves) are altered non-uniformly. On the other hand, the rotation angle $\psi$ is not always fixed during HXMT imaging observations. Both distortions in image reconstruction from HXMT observed data and rotations during imaging observations have been discussed by \citet{huo2013raa}. Here, for the sake of simplicity, we ignore them. \section{Numerical methods} \label{sect-methods} \subsection{Single point source detection performance estimation} \label{sect-estimation} We estimate the single point source detection performance in terms of sensitivity and position accuracy through the following procedures. \begin{enumerate} \item Determine the flux threshold for point source detection.
\label{item-threshold} \begin{enumerate} \item Simulate a frame of observed data that contains only background counts. \label{item-simulate-data} \item Run the denoise program on the simulated data to try to increase the signal-to-noise ratio. \label{item-denoise} \item Demodulate the denoised data. \label{item-demodulate} \item Run SExtractor, a source detection program by \citet{bertin1996sextractor}, on the demodulated image to detect point sources and extract their intensities, coordinates and other parameters. At this point a catalog of point sources is compiled from the simulated data. Point sources detected here, i.e., from images demodulated from background-only data, are false detections. \label{item-extract-source} \item Repeat the previous steps (from \ref{item-simulate-data} to \ref{item-extract-source}) so that a series of catalogs is compiled. Draw a histogram of the flux of false detections that could possibly be detected from background counts given a specific condition of both observation and detection. \item Choose a cut from the histogram as the flux threshold so that a certain percentage of the false detections is rejected and the rejection percentage is precise enough. The rejection percentage, e.g., $95\%$ or $99.7\%$ etc., reflects the significance of detections above the corresponding threshold. \end{enumerate} \item Estimate the detection efficiency and position accuracy of a point source of specific flux intensity. \label{item-efficiency} \begin{enumerate} \item Simulate observed data that contains a single point source of given flux intensity $f_m$ located at $(x_m, y_m)$ in the model image. \label{item-simulate-data-pts} \item Perform steps from \ref{item-simulate-data} to \ref{item-extract-source}. A catalog is compiled. \label{item-extract-source-pts} \item Examine each detection in the catalog.
Provided the $i$-th source in the catalog is detected at $(x_i, y_i)$ in the demodulated image, the distance between the extracted source and the true point source \begin{equation} \delta_i = \sqrt{\left(x_m - x_i\right)^2 + \left(y_m - y_i\right)^2} \end{equation} as well as the flux of the extracted source $f_i$ are investigated to determine whether the $i$-th source is a true source or not. We define the score of the current catalog in detecting the single point source as \begin{equation} d_k = \begin{cases} 1 & \exists \; i : \left(\delta_i \leq \Delta\right)\land \left(f_i \geq F_{thres}\right) \\ 0 & \text{otherwise} \end{cases}\text{,} \end{equation} where $k$ is the index of the current catalog, and $\Delta$ and $F_{thres}$ are the position accuracy and flux thresholds; therefore the score $d_k$ reveals whether the $k$-th catalog contains the true source or not, in other words, whether through the previous steps (simulated observation, denoising, demodulation, source extraction and thresholding) we have detected the true source effectively. If we have, the outcome of these steps is counted as an effective detection of the true source; otherwise it is ineffective. \label{item-examine-source-pts} \item Repeat the previous steps (from \ref{item-simulate-data-pts} to \ref{item-examine-source-pts}) $N$ times and calculate the fraction of effective detections among all $N$ trials, namely, \begin{equation} \eta = \frac{1}{N}\sum_k^N d_k\text{,} \end{equation} which is defined as the detection efficiency. Let $(x_k, y_k)$ be the position of the brightest source in the $k$-th catalog; the position accuracy is calculated as \begin{equation} \rho = \frac{1}{\eta N}\sum_k^N d_k \sqrt{\left( x_k - x_m \right)^2 + \left( y_k - y_m \right)^2} \text{.} \end{equation} \end{enumerate} \item Find a flux intensity $F_{0.5}$ so that the corresponding detection efficiency is $\eta=50\%$.
The intensity $F_{0.5}$ marks the point source sensitivity of the detecting system synthesised from the telescope in a specific status and the data analysis program chain. \end{enumerate} \subsection{Imaging and mapping} \subsubsection{Demodulation} The direct demodulation (DD) method \citep{li1994dd} is used to estimate the true image from observed data. The residual map calculated with the CLEAN algorithm \citep{hoegbom1974} is used as the lower limit constraint in the DD method. The skewness of the residual map is calculated in each CLEAN iteration and its minimum absolute value serves as the main stopping criterion of the iterations. \subsubsection{Cross-correlation} In contrast to detecting and extracting point sources from demodulated images, it is also feasible to do so from cross-correlated maps as long as the point sources are isolated from each other, compared to the FWHM of the PSF, since the position of a peak of the expected value of such a map coincides with the position of a source regardless of the PSF. Cross-correlating the observed data and the PSF yields the correlated map \begin{equation} \vect{C} = \vect{D} \star \vect{H}\text{,} \end{equation} where $\star$ denotes cross-correlation, and $\vect{D}$ and $\vect{H}$ denote the observed data and the PSF respectively. The peak of $\vect{C}$ that coincides with a point source of flux intensity $F$ is \begin{equation} C = \frac{\tau \cdot A \cdot F}{\max\limits_{i,j}H_{i,j}}\sum_{i,j}H_{i,j}^2 \text{,} \end{equation} according to Eq. \ref{eq-modulation}. On the other hand, the background variance of the correlated map is \begin{equation} \sigma^2\left(\vect{C}\right) = \sum_{i,j}\sigma^2\left(\tau \cdot r_b \cdot H_{i,j}\right) = \tau \cdot r_b \cdot \sum_{i,j}H_{i,j}^2\text{,} \end{equation} since $\tau \cdot r_b$ follows a Poisson distribution.
Hence the significance of the peak in terms of the number of $\sigma$ is \begin{equation} \mathrm{SI} = \frac{F \cdot A \cdot \sqrt{\tau \cdot \sum_{i,j}H_{i,j}^2}} {\max\limits_{i,j} H_{i,j}\sqrt{r_b}} = \frac{F \cdot A \cdot \sqrt{T}}{\sqrt{r_b}} \sqrt{\langle P_{\tan}^2 \rangle}\text{,} \label{eq-significance} \end{equation} where $T$ is the total duration of exposure on the 2-D pixel grid, and $\sqrt{\langle P_{\tan}^2 \rangle}$ is the square root of the arithmetic mean of $P_{\tan}^2$ over the 2-D pixel grid, which is determined by the PSF as well as the range of the pixel grid only, provided the pixel grid is fine enough (see Eq. \ref{eq-pixelization}). The cross-correlation significance of an isolated point source defined in Eq. \ref{eq-significance} can be evaluated directly, given only the flux intensity of the source, the background count rate, the duration of exposure, the detection area and the PSF. Hence it is determined by the object (i.e., the point source), the telescope and the status of observation, and thus effects from data analysis programs are minimized. We use the cross-correlation significance as a reference. For example, in our simulations an isolated point source of $1\;\mathrm{mCrab}$ flux has $2.42\sigma$ significance. \subsection{Denoising} \label{sect-denoise-methods} \subsubsection{Linear methods} Gaussian smoothing is often used in digital image processing to suppress the noise at the cost of reduction in resolution. The trade-off between noise suppression and resolution conservation is adjusted through the standard deviation $\sigma$ of the Gaussian distribution serving as the smoothing kernel function. The best resolution of HXMT HE observed data is about $1.1^\circ$, limited by the FWHM of its narrow-field collimator. We set $\sigma$ to $28\;\mathrm{arcmin}$ so that the FWHM of the Gaussian kernel is also $1.1^\circ$.
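The quoted kernel width follows from the Gaussian relation $\mathrm{FWHM}=2\sqrt{2\ln 2}\,\sigma$, which a one-line computation confirms:

```python
import math

def fwhm_to_sigma(fwhm):
    # FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma for a Gaussian kernel
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma_arcmin = fwhm_to_sigma(1.1 * 60.0)  # 1.1 deg = 66 arcmin -> about 28 arcmin
```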
$N$-fold cross correlation transform ($N \ge 1$) can be used in the DD method to regularize the ill-posed problem, more specifically, to ensure the convergence as well as the stability of the solution \citep{li2003dd}. Here we put this technique in the denoising category. We have tested $1$-fold and $2$-fold cross correlated DD methods in this article. \subsubsection{Non-linear methods} Non-local means (NLMeans) denoising \citep{buades2005non} is an edge-preserving non-linear denoising method. To increase its performance we have implemented this method with fast Fourier transforms (FFTs). The pixel-wise evaluation of the general Euclidean distance between the $i$-th pixel and other pixels of an image is replaced by \begin{equation} \vect{D}_i = \left\{\sum_k N_{i,k}^2\cdot W_k + \left(I^2 \ast W\right) - 2\left[I \ast \left(N_i \cdot W\right)\right] \right\}^\frac{1}{2}\text{,} \label{eq-nlmeans} \end{equation} where $N_i$ is the neighbourhood of the $i$-th pixel, $N_{i,k}$ is the $k$-th pixel in the neighbourhood, $I$ is the image, and $W$ is the weight coefficient of the distance function. We use a $7 \times 7$ pixels Gaussian kernel with standard deviation $\sigma=2$ (in pixels) as the weight coefficient $W$ in our simulations. We reduce the complexity of the NLMeans method by computing the convolutions in Eq. \ref{eq-nlmeans} with FFTs instead of using search windows \citep{buades2008non}. The median filter is another non-linear edge-preserving denoising method. This method is effective in removing salt-and-pepper noise in digital images. In HXMT observed data such noise is typically caused by missing data or charged particles of cosmic rays. We fixed the size of the filter at $2^\circ \times 2^\circ$ (about $50 \times 50$ pixels) in HXMT HE data denoising. The last non-linear denoising method included in this article is adaptive wavelet thresholding with multiresolution support \citep{starck2006}.
The multiresolution support of a noisy image is a subset that contains significant coefficients only, so wavelet coefficients that are dominated by noise are discarded. In this article we implemented the non-iterative algorithm. The $5 \times 5$ $B_3$ spline wavelet is used for the multiresolution decomposition. \section{Simulation and results} \label{sect-simulations} \subsection{In-orbit background simulation} The HXMT HE in-orbit background count rate $r_b$ ranges from $147.6\;\mathrm{counts/s}$ to $210.7\;\mathrm{counts/s}$ \citep{li2009bgrd}. We use a constant count rate of $180\;\mathrm{counts/s}$ to simulate the average in-orbit background of HXMT HE in this article. \subsection{Source energy spectrum and telescope detection efficiency} We use the formula proposed by \citet{massaro2000crab} together with the parameters fitted by \citet{jourdain2009crab}, \begin{equation} F(E) = 3.87 \times E^{-1.79-0.134\log_{10}\left(\sfrac{E}{20}\right)}\text{,} \end{equation} to model energy spectra of Crab-like sources from $20\;\mathrm{keV}$ to $250\;\mathrm{keV}$, where $E$ is in $\mathrm{keV}$ and the flux $F(E)$ is in $\mathrm{photons/s/cm^2}/\mathrm{keV}$. The detection efficiency of HXMT HE is derived from its simulated energy response, as shown in Fig. \ref{fig-hxmt-he-eff}. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{figures/hxmt-he-eff.eps} \caption{HXMT HE detection efficiency} \label{fig-hxmt-he-eff} \end{figure} The detection efficiency of HXMT HE for a Crab-like source is $67\%$. The count rate of HXMT HE corresponding to a $1\;\mathrm{Crab}$ intensity is $1\,112\;\mathrm{counts/s}$, given that the detection area of HXMT HE is about $5\,100\;\mathrm{cm^2}$. \subsection{PSF and modulation} We use the diagram shown in Fig. \ref{fig-hxmt-he-psf} to simulate the PSF $P_{\tan}(u,v)$ on the tangent plane.
\begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{figures/hxmt-he-psf.png} \caption{Simulated HXMT HE PSF} \label{fig-hxmt-he-psf} \end{figure} We use the concentric average of $P(\phi,\theta)$, namely, \begin{equation} S(\theta) = \frac{\int_{-\pi}^{\pi} P(\phi,\theta)\mathrm{d}\phi}{2\pi}\text{,} \label{eq-psf-slope} \end{equation} and the cumulative sum of $S(\theta)$, \begin{equation} C(\theta) = \int_0^{\theta} S\left(\theta'\right)\mathrm{d}\theta' \end{equation} to characterize the radial fade-out of the PSF and the concentration of the PSF respectively, as plotted in Fig. \ref{fig-hxmt-he-psf-sl}. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{figures/hxmt-he-psf-sl.eps} \caption{Radial distribution of the simulated HXMT HE PSF} \label{fig-hxmt-he-psf-sl} \end{figure} From Fig. \ref{fig-hxmt-he-psf-sl} we see that the FWHM of the simulated PSF is about $1.7^\circ$, while $99.7\%$ of the responses occur within a diameter of $11^\circ$. Despite the fact that the direct observed data are scientific events instead of 2-dimensional images, we start our simulation from \emph{simulated} observed data in the form of images defined on a 2-dimensional Cartesian pixel grid. We use a $22^\circ \times 22^\circ$ model image for simulations. Given the diameter of the PSF, the central $11^\circ \times 11^\circ$ region is fully modulated, that is, all contributions to the observed data in this region are from the model image only. The surrounding $33^\circ \times 33^\circ$ region is partially modulated, i.e., only part of the contributions to the observed data in this region are from the model image. The average exposure per unit solid angle is $382\;\mathrm{s/deg^2}$ in the HXMT half-year all-sky survey. The partially modulated region is discretized by an $N \times N$ pixel grid, so $\tau \approx \sfrac{382 \times 33^2}{N^2}$.
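The numbers quoted in this section can be collected in a short sketch: the Crab-like spectrum integrated over the band with a flat $67\%$ efficiency (a simplification of the energy-dependent efficiency above), the per-pixel exposure $\tau$, and the peak significance of Eq. (\ref{eq-significance}) for a toy single-pixel PSF. All of these are illustrative estimates, not the pipeline itself.

```python
import numpy as np

def crab_flux(E):
    # Crab-like photon spectrum, photons/s/cm^2/keV (E in keV, valid 20-250 keV)
    return 3.87 * E ** (-1.79 - 0.134 * np.log10(E / 20.0))

def band_count_rate(area_cm2, efficiency, E_lo=20.0, E_hi=250.0, n=2000):
    # trapezoidal integral of the photon spectrum over the band
    E = np.linspace(E_lo, E_hi, n + 1)
    F = crab_flux(E)
    dE = (E_hi - E_lo) / n
    return area_cm2 * efficiency * dE * (F.sum() - 0.5 * (F[0] + F[-1]))

def exposure_per_pixel(N, rate_s_per_deg2=382.0, width_deg=33.0):
    # tau ~ 382 * 33^2 / N^2 for the partially modulated region
    return rate_s_per_deg2 * width_deg ** 2 / N ** 2

def cc_significance(F, A, tau, r_b, H):
    # peak significance (in sigma) of an isolated source, Eq. (significance)
    return F * A * np.sqrt(tau * np.sum(H ** 2)) / (H.max() * np.sqrt(r_b))

rate_1crab = band_count_rate(5100.0, 0.67)  # comes out near the ~1112 counts/s of the text
tau = exposure_per_pixel(330)               # 0.1 deg pixels -> 3.82 s per pixel
```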
The detection area of each HXMT HE detector is approximately $300\;\mathrm{cm^2}$, so the total area of all the 17 HXMT HE detectors is $5\,100\;\mathrm{cm^2}$. \subsection{Results} We have implemented several denoising methods (see Section \ref{sect-denoise-methods}). Fig. \ref{fig-denoise-methods} shows the denoising results of these methods. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{figures/denoise-methods.png} \caption{Denoising methods. From top to bottom, left to right:\ 1. Model image; 2. Observed data of a $10\;\mathrm{mCrab}$ point source; 3. Gaussian smoothed data, $\sigma = 28\;\mathrm{arcmin}$; 4. 1-fold cross correlated data; 5. NLMeans filter denoised data; 6. $2^\circ \times 2^\circ$ median filter denoised data; 7. $4^\circ \times 4^\circ$ median filter denoised data; 8. $3\sigma$ wavelet thresholding denoised data, with $B_3$ spline wavelet transform.} \label{fig-denoise-methods} \end{figure} We have simulated $5\,000$ frames of observed data that contain the in-orbit background counts only for each method to estimate the corresponding flux thresholds by the method specified in Procedure \ref{item-threshold} of Section \ref{sect-estimation}. From the false detections we have obtained the $2\sigma$ and $3\sigma$ flux thresholds; see Table \ref{tab-threshold} for the results. \begin{table}[htbp] \centering \begin{tabular}{c|r|r} Denoising & $2\sigma$ thres., in $\mathrm{mCrab}$ & $3\sigma$ thres., in $\mathrm{mCrab}$ \\ \hline w/o denoise & $0.520\pm0.001$ & $0.963\pm0.003$\\ 1-fold CCT & $0.667\pm0.003$ & $1.192\pm0.010$\\ 2-fold CCT & $0.919\pm0.014$ & $1.656\pm0.058$\\ Gaussian, $\sigma = 28\;\mathrm{arcmin}$ & $0.669\pm0.003$ & $1.133\pm0.008$\\ NLMeans & $0.845\pm0.004$ & $1.340\pm0.011$\\ Median filter, $2^\circ$ & $0.709\pm0.004$ & $1.185\pm0.015$\\ Median filter, $4^\circ$ & $0.665\pm0.005$ & $1.114\pm0.012$\\ Wavelet thres.
& $0.0518\pm0.0002$ & $0.216\pm0.002$\\ \end{tabular} \caption{$2\sigma$ and $3\sigma$ thresholds of point source detection} \label{tab-threshold} \end{table} We have simulated $1\,000$ frames of observed data that contain a Crab-like point source of $1\;\mathrm{mCrab}$, $1.25\;\mathrm{mCrab}$, $1.5\;\mathrm{mCrab}$, $1.75\;\mathrm{mCrab}$, $2\;\mathrm{mCrab}$, $2.5\;\mathrm{mCrab}$, $3\;\mathrm{mCrab}$, $4\;\mathrm{mCrab}$, $5\;\mathrm{mCrab}$ and $10\;\mathrm{mCrab}$ respectively, i.e., $10\,000$ frames of observed data in total. With these simulated data we have estimated the location accuracies as well as the detection efficiencies, by the method described in Procedure \ref{item-efficiency}, for the following methods: \begin{enumerate} \item DD without denoising, \item DD with $1$-fold cross correlation, \item DD with $2$-fold cross correlation, \item DD with Gaussian smoothing, where $\sigma=28'$, \item DD with NLMeans filtering, where the size of the filter is $7 \times 7$ and $\sigma=2$ (both parameters are in pixels), \item DD with a $2^\circ \times 2^\circ$ median filter, \item DD with a $4^\circ \times 4^\circ$ median filter, and \item DD with adaptive wavelet thresholding. \end{enumerate} Implementation details of the above methods are in Sect. \ref{sect-denoise-methods}. Table \ref{tab-accuracy} shows the location accuracies on simulated data of $1\;\mathrm{mCrab}$, $2\;\mathrm{mCrab}$, $5\;\mathrm{mCrab}$ and $10\;\mathrm{mCrab}$ point sources.
\begin{table}[htbp] \centering \begin{tabular}{c|rrrr} & $1\;\mathrm{mCrab}$ & $2\;\mathrm{mCrab}$ & $5\;\mathrm{mCrab}$ & $10\;\mathrm{mCrab}$\\ \hline w/o denoise & $95 \pm 6$ & $39 \pm 2$ & $9.6 \pm 0.2$ & $5.2 \pm 0.1$ \\ 1-fold CCT & $104 \pm 3$ & $53 \pm 1$ & $20.5 \pm 0.4$ & $10.6 \pm 0.2$ \\ 2-fold CCT & $149 \pm 5$ & $90 \pm 2$ & $36.9 \pm 0.8$ & $17.2 \pm 0.4$ \\ Gaussian, $\sigma = 28\;\mathrm{arcmin}$ & $90 \pm 4$ & $38 \pm 1$ & $11.1 \pm 0.2$ & $6.0 \pm 0.1$ \\ NLMeans & $107 \pm 4$ & $52 \pm 1$ & $21.0 \pm 0.3$ & $12.2 \pm 0.2$ \\ Median filter, $2^\circ$ & $91 \pm 4$ & $43 \pm 1$ & $14.3 \pm 0.3$ & $7.7 \pm 0.1$ \\ Median filter, $4^\circ$ & $100 \pm 3$ & $57 \pm 1$ & $23.3 \pm 0.4$ & $14.8 \pm 0.3$ \\ Wavelet thres. & $108 \pm 5$ & $48 \pm 2$ & $11.6 \pm 0.2$ & $5.9 \pm 0.1$ \\ \end{tabular} \caption{Location accuracies, in arc minutes.} \label{tab-accuracy} \end{table} Table \ref{tab-efficiency} shows the detection efficiencies on simulated data of $1\;\mathrm{mCrab}$, $1.5\;\mathrm{mCrab}$, $2\;\mathrm{mCrab}$, $2.5\;\mathrm{mCrab}$ and $3\;\mathrm{mCrab}$ point sources. Although the RL iteration employed in the DD method is total-counts-conservative, i.e., the sum of counts over all pixels is conserved after the iteration \citep{richardson1972}, the regularizations, including the background constraints as well as the various denoising techniques, are not necessarily counts-conservative. As a result, the absolute flux threshold (Table \ref{tab-threshold}) for rejecting false detections does not reflect the sensitivity directly, but the detection efficiency (Table \ref{tab-efficiency}) does. The errors of the flux thresholds, location accuracies and detection efficiencies in Table \ref{tab-threshold}, Table \ref{tab-accuracy} and Table \ref{tab-efficiency} are calculated by bootstrapping.
For example, following Procedure \ref{item-threshold}, a set of $5\,000$ frames of demodulated images is obtained, from which false detections are extracted and a histogram is plotted, where both the $2\sigma$ and $3\sigma$ thresholds are determined. We then generate a new set of the same volume by resampling from the original set with replacement, in order to calculate both thresholds again. We repeat this resampling process until enough threshold estimates are accumulated to estimate their standard deviations. In Table \ref{tab-threshold}, Table \ref{tab-accuracy} and Table \ref{tab-efficiency} each of the errors is a standard deviation calculated from $1\,000$ resampled sets. \begin{table}[htbp] \centering \begin{tabular}{c|rrrrr} & $1\;\mathrm{mCrab}$ & $1.5\;\mathrm{mCrab}$ & $2\;\mathrm{mCrab}$ & $2.5\;\mathrm{mCrab}$ & $3\;\mathrm{mCrab}$\\ \hline w/o denoise & $29 \pm 2$ & $53 \pm 2$ & $77 \pm 1$ & $94 \pm 1$ & $98 \pm 0$ \\ 1-fold CCT & $41 \pm 2$ & $71 \pm 1$ & $92 \pm 1$ & $99 \pm 0$ & $100 \pm 0$ \\ 2-fold CCT & $27 \pm 2$ & $56 \pm 2$ & $79 \pm 2$ & $95 \pm 1$ & $99 \pm 0$ \\ Gaussian, $\sigma = 28\;\mathrm{arcmin}$ & $35 \pm 2$ & $66 \pm 2$ & $87 \pm 1$ & $98 \pm 0$ & $100 \pm 0$ \\ NLMeans & $36 \pm 2$ & $68 \pm 1$ & $91 \pm 1$ & $100 \pm 0$ & $100 \pm 0$ \\ Median filter, $2^\circ$ & $37 \pm 2$ & $64 \pm 2$ & $86 \pm 1$ & $97 \pm 1$ & $100 \pm 0$ \\ Median filter, $4^\circ$ & $39 \pm 2$ & $69 \pm 1$ & $89 \pm 1$ & $98 \pm 0$ & $100 \pm 0$ \\ Wavelet thres. & $27 \pm 1$ & $51 \pm 2$ & $79 \pm 1$ & $95 \pm 1$ & $100 \pm 0$ \\ \end{tabular} \caption{Detection efficiencies, in percentages.} \label{tab-efficiency} \end{table} A comprehensive summary of all the tested methods on all simulated data is shown in Fig. \ref{fig-single-source}. \begin{figure}[htbp] \centering \includegraphics[width=0.995\linewidth]{figures/single-source-faint.eps} \caption{Single source location accuracy and detection efficiency.
MF stands for median filter and WT for wavelet thresholding.} \label{fig-single-source} \end{figure} \section{Conclusion} According to the results of our tests, no denoising method shows a significant advantage over the $1$-fold cross-correlated DD method in single point source detection efficiency. We therefore suggest that $1$-fold cross correlation be the default regularization method for single point source detection in HXMT imaging data analysis. On the other hand, according to the results of this work, the location accuracy can be improved with alternative denoising methods, such as the median filter, wavelet thresholding, Gaussian smoothing with a small kernel, or with no denoising at all. This article focuses on single point source detection in HXMT imaging data analysis; other interesting topics cannot all be covered here. Although the alternative denoising methods are more or less out-performed by the default $1$-fold cross correlation in their contributions to the detection efficiency, they show certain advantages in location accuracy, and these features are promising for locating bright transients, resolving multiple sources, and so on. \section*{Acknowledgements} In this work we made use of SciPy \citep{scipy}, PyFITS and AIRE. PyFITS is a product of the Space Telescope Science Institute, which is operated by AURA for NASA. AIRE is a set of computing facilities initiated by the Tsinghua Centre for Astrophysics. This work was supported by the \emph{National Natural Science Foundation of China} (NSFC) under grants No. 11373025, No. 11173038 and No. 11403014, as well as the \emph{Tsinghua University Initiative Scientific Research Program} under grant No. 20111081102. \bibliographystyle{raa}
\section{Introduction} Experimental and theoretical studies of solitons in nonlinear lattices form a field of intensive activity in several branches of physics. A large part of it is focused on nonlinear optics \cite{optics-review} and matter waves (i.e., Bose-Einstein condensates, BEC). The latter may be treated in the mean-field approximation \cite{BEC-original}, or in terms of the quantum Bose-Hubbard model \cite{BH,BH-review} (see book \cite{Maciek} for a systematic summary). In these contexts, the discrete nonlinear Schr\"{o}dinger (DNLS) equation, and systems of coupled DNLS equations, are ubiquitous dynamical models for the description of nonlinear lattices \cite{1}. In the framework of these studies, the existence, stability, and dynamics (including mobility) of discrete self-trapped modes (discrete solitons) is a topic which has drawn a great deal of interest. In multi-component systems, couplings between components may be linear, nonlinear, or both linear and nonlinear \cite{1}. In the context of optics, systems of linearly coupled DNLS equations are relevant to various applications. In particular, the linear coupling between two polarization modes in each core of a waveguide array may be induced by a twist of the core (for linear polarizations), or by the birefringence (for circular polarizations). On the other hand, in BEC the linear coupling may be imposed by an external microwave or radio-frequency field, which can drive Rabi oscillations \cite{5,6} or Josephson oscillations \cite{7,8} between two boson populations. In terms of the dynamical analysis, an essential issue is the possibility of spontaneous symmetry breaking in linearly coupled discrete systems \cite{Herring}. The progress in the fabrication of nano-scale electric circuits \cite{16}, lasers \cite{17}, waveguides and antennas, for the microwave, terahertz and optical frequency ranges, is another strong incentive stimulating studies of nonlinear lattices.
Particularly significant nano-dimensional elements are self-organized quantum dots (QDs) \cite{18} embedded into semiconductor dielectric host media. The possibility of strong interactions of QDs with optical fields gives rise to Rabi oscillations (RO) between electron-hole populations of the ground and excited states. The basic RO model amounts to a two-level atom strongly coupled to an external electromagnetic field (the Jaynes-Cummings model) \cite{19}. For more complex structures, such as large molecules \cite{20} or QDs, this model has been modified to account for additional factors: anisotropy \cite{20}, local-field corrections \cite{21}-\cite{24}, broken inversion symmetry \cite{25}, etc. In these settings, crucially significant are collective coherent effects, stipulated by the inter-dot coupling inside the QD array. Oscillations of the QD population between levels in an isolated two-level system may be considered as energy exchange between the system and the ambient electromagnetic field. On the other hand, particle transfer (e.g., inter-dot electron, or electron-hole tunnelling) leads to the exchange of the quasi-momentum between charge carriers and photons. These mechanisms govern the spatiotemporal RO dynamics in QD chains, in the form of the propagation of travelling RO waves and wave packets (\textit{Rabi waves}) \cite{26}-\cite{28}. In QD arrays the transport can also be realized by tunnelling via the inter-dot dipole-dipole coupling, such as F\"{o}rster interactions \cite{18}, and by the radiation-field transfer. In Ref. \cite{27}, the tunnelling was assumed to be the dominant mechanism of the inter-dot coupling. In that case, a significant role is played by local-field effects in the QDs, even in the weak-coupling regime \cite{21,22}. The local-field effects, enhanced by the strong light-QD coupling, modify the conventional RO dynamics \cite{26}-\cite{28}.
As a result, specific nonlinear terms appear in the equations of electron-hole oscillations inside the QDs, breaking the superposition principle for the single-particle wave function. The role of local fields in the formation of excitonic RO in self-assembling QDs was experimentally studied in Refs. \cite{29,30}. The nonlinearity affects the Rabi-wave propagation too. As was shown in Ref. \cite{31}, solitons and self-trapped breathers in one-dimensional (1D) QD chains can be created with the help of the nonlinearity. The model developed in Ref. \cite{31} explores the dynamics of the probability amplitudes of the ground and excited states of the QDs, with nonlinear cross-phase-modulation terms and linear coupling between the two states included. The phase shift between the constants of the inter-site hopping is a new property, introduced geometrically, in that 1D setting. Recently, theoretical and experimental studies of nano-antennas have been in the focus of work performed by many groups. The antenna is defined as a device which transforms the near field into the far field, or vice versa (transmitting and receiving antennas, respectively). Transmitting devices produce a spherical wave in a remote spatial area, with electric field
\begin{equation}
\mathbf{E}_{\mathrm{Rad}}=\frac{\mathbf{e}_{\theta }}{4\pi \epsilon _{0}}\frac{\omega ^{2}}{c^{2}}E_{0}\frac{e^{-i\omega (t-R/c)}}{R}F(\theta ,\varphi )+\mathrm{c.c.},  \label{eq1}
\end{equation}
where $R,\theta ,\varphi $ are the spherical coordinates, with the origin set at the center of the antenna, $F(\theta ,\varphi )$ is a normalized angular radiation pattern, $\epsilon _{0}$ is the vacuum permittivity, $c$ is the free-space light velocity, $\mathbf{e}_{\theta }$ is the unit vector along the local direction of coordinate $\theta $, and $E_{0}$ and $\omega $ are the amplitude and frequency of the electric field, respectively.
The purpose of using antennas is their ability to provide an interface between local information processing, which uses electrical signals, and the free-space wireless transmission of data encoded in various parameters of the electromagnetic waves, such as the amplitude, phase and frequency. Directional antenna properties (the dependence of the emitted field on both the azimuthal and polar angles) are characterized by the radiation pattern. It is determined by the near-field distribution produced by the signal source placed inside the antenna, and depends on the frequency, antenna configuration and geometric parameters. An important type of device is the phased antenna array, i.e., a system of a large number of identical emitters with a phase shift between adjacent ones \cite{32}. The progress in the fabrication of nano-antennas, see Refs. \cite{33}-\cite{35} for a review, manifests the general trend of implementing radio-communication principles in the optical-frequency range. In particular, it stimulates bridging the realms of macroscopic antennas and nano-antennas, using new materials with specific electronic properties. In this context, it is relevant to mention carbon nanotubes and nanotube arrays \cite{36}-\cite{39}, plasmonic noble-metal wires \cite{40,41}, graphene nanoribbons \cite{42}, semiconductor QDs \cite{43,44}, etc. These types of nano-antennas have been offered as promising elements for industrial design and commercial manufacturing. However, many limitations on their use are imposed by difficulties of their operational control -- for instance, rotation of the radiation pattern by an electric or optical drive. This difficulty, along with the growing interest in the field, motivates the search for new physical principles for the fabrication and tuning of nano-antennas.
A promising possibility for that is suggested by using strong nonlinearities in nanostructures (in particular, the remarkable nonlinear properties of graphene \cite{3new,4new}). In this work, we propose a previously unexplored principle for the realization of nano-antenna arrays in the form of self-trapped \emph{discrete solitons} in 2D nonlinear lattices built of semiconductor QDs. In this context, dynamically stable soliton structures (fundamental discrete solitons, vortex solitons, and breathers) may provide promising mechanisms for the controllable creation of antennas with various geometric shapes. The model for the 2D array of QDs can be developed from its 1D counterpart, which was elaborated in Ref. \cite{31}; obviously, the transition from the 1D to the 2D geometry is a crucially important step for modelling antennas which emit in the plane of the semiconductor structure. We demonstrate that different types of the global drive, \textit{viz}., plane and cylindrical waves, can excite Rabi oscillations and waves in the system. Using known numerical techniques developed for the study of discrete solitons \cite{1}, we construct basic localized modes of the corresponding 2D DNLS equations and test their stability and mobility. In addition, we demonstrate how discrete Rabi solitons induce the dielectric polarization, and thus build the near field of the soliton-based nano-antenna. In contrast to the other radiation zones, the near field remains in a quantum state, which helps to establish the correlation between quantum effects and the field transformation between different spatial zones. We also present the far-field structure produced by the given near field, which takes into account the quantum nature of the underlying Rabi oscillator. The paper is organized as follows. In Sec. II, we introduce the model for the 2D QD array interacting with the driving electromagnetic field. As mentioned above, the plane- and cylindrical-wave drives are considered.
The nonlinearity is induced as the local-field effect. The model is analyzed numerically, with the purpose to reveal the RO dynamics in the QD arrays. In Sec. III, necessary technical ingredients are introduced, including the dispersion relation for the linearized system, and general expressions for the radiation properties generated by solitons in the 2D QD array. In Sec. IV, the results are presented in detail, and the relation of the present model to 2D nano-antenna arrays is proposed. The important issue of maintaining the coherence of the emitters building the soliton nano-antenna is considered in Sec. IV too, and possibilities for operational control of the device by means of optical tools are outlined. The paper is summarized in Sec. V. \section{Two-dimensional model equations} \subsection{The plane-wave drive} We consider a finite 2D rectangular $N_{1}\times N_{2}$ array of identical QDs with a square unit cell of size $a\times a$ ($a$ is the inter-dot distance). The position of the QDs is defined by a pair of discrete indices, $p=-N_{1}/2,\ldots ,N_{1}/2$ and $q=-N_{2}/2,\ldots ,N_{2}/2$. The array is exposed to a classical travelling optical wave in the $xy$ plane, with electric field
\begin{equation}
\mathbf{E}(x,y,t)\mathbf{=}\mathrm{Re}\left\{ \mathbf{E}_{0}\exp \left[ ik\left( x\cos \theta _{0}+y\sin \theta _{0}\right) -i\omega t\right] \right\} ,  \label{eqe}
\end{equation}
where $k=k(\omega )$ is the wavenumber and $\theta _{0}$ is the propagation angle (see Fig. \ref{waves}(a)). The driving field may represent different types of electromagnetic modes, the first one being a plane wave with a nonzero $E_{z}$ component. Another example is a surface plasmon guided by the plane boundary between a noble metal and a transparent medium \cite{18}, on which the QD array is installed. The latter example comes with the frequency dispersion of a plasma medium, which manifests itself in a nonlinear dependence $k=k(\omega )$.
We model the QDs as identical dissipationless two-level systems with transition energy $\hbar \omega _{0}$, corresponding to the transition between the excited and ground-state electron orbitals, $|a_{p,q}\rangle $ and $\,|b_{p,q}\rangle $. The transition dipole moment is $\mathbf{\mu }=\mathbf{e}_{z}\mu $, and only intraband transitions are taken into account \cite{27}. Each QD is coupled by electron tunnelling to its four nearest neighbors in the 2D array. \begin{figure}[h] \center\includegraphics[width=13cm]{figure0.eps} \caption{ Schematically presented plane-wave (a) and cylindrical-wave (b) drive of the 2D QD array placed in the $(x,y)$ plane. Dotted curves display the wave fronts in both cases, $\vec{k}$ is the wave vector, $\theta_0$ is the propagation angle, and $a$ is the inter-dot distance. } \label{waves} \end{figure} It is adopted that the light interaction within the QD array takes place in the resonant regime, i.e., the frequency detuning is small in comparison to both the light and quantum-transition frequencies. Following the rotating-wave approximation \cite{19}, we omit rapidly oscillating terms in the equations of motion. These natural assumptions follow the Rabi-wave model formulated in Ref. \cite{27}. The raising, lowering, and population operators of the QDs are denoted as $\hat{\sigma}_{p,q}^{+}=|a_{p,q}\rangle \langle b_{p,q}|,\,\hat{\sigma}_{p,q}^{-}=|b_{p,q}\rangle \langle a_{p,q}|,$ and $\hat{\sigma}_{zp,q}=|a_{p,q}\rangle \langle a_{p,q}|-|b_{p,q}\rangle \langle b_{p,q}|$, respectively. The total Hamiltonian is
\begin{equation}
\hat{H}=\hat{H}_{d}+\hat{H}_{df}+\hat{H}_{T}+\Delta \hat{H},  \label{eq2}
\end{equation}
where the term
\begin{equation}
\hat{H}_{d}=\frac{\hbar \omega _{0}}{2}\sum_{p,q}{\hat{\sigma}_{zp,q}}  \label{eq3}
\end{equation}
describes the free electron motion, while the term
\begin{equation}
\hat{H}_{df}=-(\mathbf{\mu E}_{0})\sum_{p,q}\hat{\sigma}_{p,q}^{+}\exp \left( i(p\phi _{1}+q\phi _{2})\right) +\mathrm{H.c.}
\label{eq4}
\end{equation}
($\mathrm{H.c.}$ stands for the Hermitian-conjugate operator) accounts for the interaction of the QD array with the electromagnetic field in the dipole approximation, without tunnelling. Here, the phases
\begin{equation}
\phi _{1}=(ka/2)\cos {(\theta _{0})},\quad \phi _{2}=(ka/2)\sin {(\theta _{0})}  \label{eq5}
\end{equation}
represent the field delay per lattice period due to the finite propagation speed. The limit of $\phi _{1,2}\rightarrow 0$ corresponds to the dense lattice, with $ka\ll 1$. The third term in Eq. (\ref{eq2}) corresponds to the inter-dot coupling through the tunnelling. To interpret it, we introduce tunnelling coupling factors $\xi _{p,q}^{(a,b)}$, which are step functions of the discrete spatial coordinates $p$ and $q$:
\begin{equation}
\xi _{p,q}^{(a,b)}=\left\{
\begin{array}{ll}
\xi ^{(a,b)}, & p,q=-(N_{1,2}/2),\ldots ,-1,0,1,\ldots ,N_{1,2}/2, \\
0, & \mbox{in all other cases.}
\end{array}
\right.  \label{step}
\end{equation}
The Hamiltonian of the tunnelling interaction is, in the tight-binding approximation,
\begin{eqnarray}
\hat{H}_{T}=-\hbar \sum_{p,q}(\xi _{p+1,q}^{(a)}|a_{p,q}\rangle \langle a_{p+1,q}|+\xi _{p,q}^{(a)}|a_{p,q}\rangle \langle a_{p-1,q}|+\xi _{p,q+1}^{(a)}|a_{p,q}\rangle \langle a_{p,q+1}|+\xi _{p,q}^{(a)}|a_{p,q}\rangle \langle a_{p,q-1}|) \nonumber \\
-\hbar \sum_{p,q}(\xi _{p+1,q}^{(b)}|b_{p,q}\rangle \langle b_{p+1,q}|+\xi _{p,q}^{(b)}|b_{p,q}\rangle \langle b_{p-1,q}|+\xi _{p,q+1}^{(b)}|b_{p,q}\rangle \langle b_{p,q+1}|+\xi _{p,q}^{(b)}|b_{p,q}\rangle \langle b_{p,q-1}|).  \label{eq7}
\end{eqnarray}
Coefficients $\xi ^{(a,b)}$ in Eq. (\ref{eq7}) account for the tunnelling coupling between adjacent QDs in the excited and ground states, respectively. They are defined as scalars due to the isotropy of the individual QDs and of the array. The step-like structure in Eq.
(\ref{step}) corresponds to the operator notation for the tunnelling in the finite-size array: the number of adjacent QDs coupled by the tunnelling reduces to three for sites at the \textit{edges} of the array, and to two at the \textit{corners}. The last term in Hamiltonian (\ref{eq2}) represents the local-field effects, in the Hartree-Fock-Bogoliubov approximation \cite{21}, \cite{31}:
\begin{equation}
\Delta \hat{H}=\frac{4\pi }{V}N_{\alpha ,\beta }\mu _{\alpha }\mu _{\beta }\sum_{p,q}\left( \hat{\sigma}_{p,q}^{-}\langle \hat{\sigma}_{p,q}^{+}\rangle +\hat{\sigma}_{p,q}^{+}\langle \hat{\sigma}_{p,q}^{-}\rangle \right) ,  \label{eq8}
\end{equation}
where $\mu _{\alpha ,\beta }$ and $N_{\alpha ,\beta }$ are components of the dipole-moment vector and of the depolarization tensor of the single QD, $V$ is the volume of the single QD, and the angle brackets denote averaging of the corresponding operator with respect to the given quantum state. The summation signs over doubly repeated indices are omitted, in accordance with the usual convention. The depolarization tensor depends both on the QD configuration and on the quantum properties of the electron-hole pairs, being given by
\begin{equation}
N_{\alpha ,\beta }=\frac{V}{4\pi }\int_{V}\int_{V}|\xi (\mathbf{r})|^{2}\left\vert \xi \left( \mathbf{r}^{\prime }\right) \right\vert ^{2}G_{\alpha ,\beta }\left( \mathbf{r}-\mathbf{r}^{\prime }\right) d^{3}\mathbf{r}\,d^{3}\mathbf{r}^{\prime },  \label{eq9}
\end{equation}
where $\xi (\mathbf{r})$ is the envelope of the wave function of the electron-hole pair, and $G_{\alpha ,\beta }(\mathbf{r}-\mathbf{r}^{\prime })$ is the Green's tensor of the Maxwell equations in the quasi-static limit \cite{18}. The present formulation may be applied to the 1D and 2D settings alike \cite{28}. Using the Hartree-Fock-Bogoliubov approximation, local-field effects for the single QD were investigated in the strong-coupling regime in Refs. \cite{21} and \cite{31}, and thereafter demonstrated experimentally \cite{1new}.
The temporal evolution of single-particle excitations is governed by the Schr\"{o}dinger equation,
\begin{equation}
i\hbar \frac{\partial |\Psi \rangle }{\partial t}=\hat{H}|\Psi \rangle .  \label{eq10}
\end{equation}
The unknown wave function is taken in the form of a coherent superposition,
\begin{equation}
|\Psi \rangle =\sum_{p,q}(\Psi _{p,q}(t)\,e^{\left( i/2\right) \left( p\phi _{1}+q\phi _{2}-\omega \,t\right) }|a_{p,q}\rangle +\Phi _{p,q}(t)\,e^{-\left( i/2\right) \left( p\phi _{1}+q\phi _{2}-\omega t\right) }|b_{p,q}\rangle ),  \label{eq11}
\end{equation}
where $\Psi _{p,q}(t),\,\Phi _{p,q}(t)$ are the probability amplitudes to be found. Projection of the Schr\"{o}dinger equation onto the chosen basis leads to the following system of coupled nonlinear evolution equations for the probability amplitudes:
\begin{eqnarray}
\frac{\partial \Psi _{p,q}}{\partial t}&=&iF\Psi _{p,q} \nonumber \\
&+& i\left( \xi _{p,q}^{(a)}\Psi _{p-1,q}e^{-i\varphi _{1}}+\xi _{p+1,q}^{(a)}\Psi _{p+1,q}e^{i\varphi _{1}}+\xi _{p,q}^{(a)}\Psi _{p,q-1}e^{-i\varphi _{2}}+\xi _{p,q+1}^{(a)}\Psi _{p,q+1}e^{i\varphi _{2}}\right) \nonumber \\
&-&ig\Phi _{p,q}-i\Delta \omega |\Phi _{p,q}|^{2}\Psi _{p,q}, \nonumber \\
\frac{\partial \Phi _{p,q}}{\partial t}&=&-iF\Phi _{p,q} \nonumber \\
&+&i\left( \xi _{p,q}^{(b)}\Phi _{p-1,q}e^{i\varphi _{1}}+\xi _{p+1,q}^{(b)}\Phi _{p+1,q}e^{-i\varphi _{1}}+\xi _{p,q}^{(b)}\Phi _{p,q-1}e^{i\varphi _{2}}+\xi _{p,q+1}^{(b)}\Phi _{p,q+1}e^{-i\varphi _{2}}\right) \nonumber \\
&-&ig\Psi _{p,q}-i\Delta \omega |\Psi _{p,q}|^{2}\Phi _{p,q},  \label{eq12}
\end{eqnarray}
where $F$ is the detuning parameter \cite{31}, $g\equiv -\mathbf{\mu E}_{0}/(2\hbar )$ is the QD-field coupling factor, and $\Delta \omega \equiv 4\pi \mu _{\alpha }\mu _{\beta }N_{\alpha ,\beta }/(\hbar V)$ is the depolarization shift. The normalization condition for system (\ref{eq12}) is
\begin{equation}
\sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}(|\Psi _{p,q}|^{2}+|\Phi _{p,q}|^{2})=1.
\label{eq13}
\end{equation}
The observable value of the energy is given by
\begin{eqnarray}
\varepsilon &\equiv& \langle \hat{H}\rangle =\frac{1}{2} \sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}\left[ -F(|\Psi _{p,q}|^{2}-|\Phi _{p,q}|^{2})\right. \nonumber \\
&-&\Phi _{p,q}^{\ast }\left( \xi _{p,q}^{(b)}\Phi _{p-1,q}e^{i\varphi _{1}}+\xi _{p+1,q}^{(b)}\Phi _{p+1,q}e^{-i\varphi _{1}}+\xi _{p,q}^{(b)}\Phi _{p,q-1}e^{i\varphi _{2}}+\xi _{p,q+1}^{(b)}\Phi _{p,q+1}e^{-i\varphi _{2}}\right) \nonumber \\
&-&\Psi _{p,q}^{\ast }\left( \xi _{p,q}^{(a)}\Psi _{p-1,q}e^{-i\varphi _{1}}+\xi _{p+1,q}^{(a)}\Psi _{p+1,q}e^{i\varphi _{1}}+\xi _{p,q}^{(a)}\Psi _{p,q-1}e^{-i\varphi _{2}}+\xi _{p,q+1}^{(a)}\Psi _{p,q+1}e^{i\varphi _{2}}\right) \nonumber \\
& & \left. +g\Psi _{p,q}^{\ast }\Phi _{p,q}+\frac{1}{2}\Delta \omega |\Psi _{p,q}|^{2}|\Phi _{p,q}|^{2}+\mathrm{c.c.}\right] .  \label{eq14}
\end{eqnarray}
The analysis of the Rabi solitons in the 2D QD lattices is developed below on the basis of Eq. (\ref{eq12}). In the simplest case, one may assume the inter-site coupling coefficients for the ground and excited states to be equal, $\xi ^{(a)}=\xi ^{(b)}=\xi $. Next, we set, by means of an obvious rescaling, $g=-1$ and $\mathrm{sign}(\Delta \omega )=-1$. These signs imply the attractive onsite linear interaction between fields $\Psi _{p,q}$ and $\Phi _{p,q}$, and the self-focusing sign of the XPM (cross-phase-modulation, i.e., the cubic interaction between the different components) onsite nonlinearity. Actually, if $g$ is originally positive, it can be made negative by the substitution $\Phi _{p,q}=-\tilde{\Phi}_{p,q}$. And if $\Delta \omega $ is originally positive, it may be made negative by means of the usual staggering substitution \cite{1}. Thus, in Eqs. (\ref{eq12}) there remain three independent parameters, which can be combined into the frequency detuning, $F$, and the complex lattice coupling, $\xi \exp {(i\phi _{1,2})}$.
We here restrict $\phi _{1,2}$ to the basic interval, $0\leq \phi _{1,2}\leq \pi /2$; therefore both positive and negative values of $F$ should be considered. \subsection{The cylindrical-wave drive} The spatial structure of the Rabi waves is tunable by selecting the corresponding driving field. Of particular interest is the cylindrical drive, defined as
\begin{equation}
\mathbf{E}(\mathbf{r},t)=\mathbf{e}_{z}\mathrm{Re}\left\{ \mathbf{E}_{0}\,H_{0}^{(1)}(k|\mathbf{r}-\mathbf{r}_{0}|)\exp {(-i\omega \,t)}\right\} ,  \label{eq15}
\end{equation}
where $H_{0}^{(1)}(x)$ is the zeroth-order Hankel function of the first kind. Such a field may be excited by an infinitely thin and infinitely long current wire placed at point $\mathbf{r}_{0}$, normally to the array's plane (see Fig. \ref{waves}(b)). The current may be produced, for example, by a semiconductor quantum wire excited by an exciton-polariton \cite{45}. The current point $\mathbf{r}_{0}$ is placed at the center of the cell consisting of the dots with indices $(p=0,q=0),(p=-1,q=-1),(p=0,q=-1)$ and $(p=-1,q=0)$. The general statement of the problem is similar to the plane-wave case, the difference being only in the form of the $\hat{H}_{df}$ component of Hamiltonian (\ref{eq2}), which is now
\begin{equation}
\hat{H}_{df}=-\mathbf{\mu e}_{z}E_{0}\sum_{p,q}\hat{\sigma}_{p,q}^{+}S_{p,q}+\mathrm{H.c.},  \label{eq16}
\end{equation}
where
\begin{equation}
S_{pq}\equiv \frac{H_{0}^{(1)}\left( ka\sqrt{(p+1/2)^{2}+(q+1/2)^{2}}\right) }{H_{0}^{(1)}\left( ka/\sqrt{2}\right) }.
\label{eq17}
\end{equation}
The corresponding equations of motion are
\begin{eqnarray}
\frac{\partial \Psi _{p,q}}{\partial t}=iF\Psi _{p,q} \nonumber \\
+i\left[ \xi _{p,q}^{(a)}\Psi _{p-1,q}\varsigma _{pq}^{p-1,q}+\xi _{p+1,q}^{(a)}\Psi _{p+1,q}\varsigma _{pq}^{p+1,q}+\xi _{p,q}^{(a)}\Psi _{p,q-1}\varsigma _{pq}^{p,q-1}+\xi _{p,q+1}^{(a)}\Psi _{p,q+1}\varsigma _{pq}^{p,q+1}\right] \nonumber \\
-ig|S_{pq}|\Phi _{p,q}-i\Delta \omega |S_{pq}||\Phi _{p,q}|^{2}\Psi _{p,q}, \nonumber \\
\frac{\partial \Phi _{p,q}}{\partial t}=-iF\Phi _{p,q} \nonumber \\
+i\left[ \xi _{p,q}^{(b)}\Phi _{p-1,q}\left( \varsigma _{pq}^{p-1,q}\right) ^{\ast }+\xi _{p+1,q}^{(b)}\Phi _{p+1,q}\left( \varsigma _{pq}^{p+1,q}\right) ^{\ast }+\xi _{p,q}^{(b)}\Phi _{p,q-1}\left( \varsigma _{pq}^{p,q-1}\right) ^{\ast }+\xi _{p,q+1}^{(b)}\Phi _{p,q+1}\left( \varsigma _{pq}^{p,q+1}\right) ^{\ast }\right] \nonumber \\
- ig|S_{pq}|\Psi _{p,q}-i\Delta \omega |S_{pq}||\Psi _{p,q}|^{2}\Phi _{p,q},  \label{eq18}
\end{eqnarray}
where
\begin{equation}
\varsigma _{pq}^{mn}\equiv \sqrt{\frac{S_{mn}}{S_{pq}}},  \label{eq19}
\end{equation}
supplemented by the normalization condition
\begin{equation}
\sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}\left( |\Psi _{p,q}|^{2}+|\Phi _{p,q}|^{2}\right) |S_{pq}|=1.  \label{eq20}
\end{equation}
Actually, Eq. (\ref{eq18}) describes other types of the driving field too. Namely, the substitution $k\rightarrow i\,k$ (for real $k$) transforms the Hankel function into the corresponding Macdonald function, $K_{0}(x)$, which represents driving the Rabi wave by surface plasmons guided by a noble-metal wire \cite{18}, or by a carbon nanotube \cite{46}. \section{2D Rabi solutions} \subsection{Dispersion relations} In the limit of $\Delta \omega =0$, i.e., without the local-field terms, Eq. (\ref{eq12}) reduces to the 2D generalization of the Rabi-wave model considered in Refs. \cite{26,28}.
In this case, the dispersion relation can be derived by looking for solutions as
\begin{equation}
\left\{ \Psi _{p,q},\Phi _{p,q}\right\} =\{A,B\}\exp \left( {i(\kappa _{x}\,p+\kappa _{y}\,q)}\right) \exp {(-i\Omega \,t)},  \label{eq21}
\end{equation}
where $\kappa _{x,y}$ are wavenumbers, $A,B$ are unknown wave amplitudes, and $\Omega $ is an unknown frequency. A straightforward analysis yields two branches of the dispersion relation,
\begin{eqnarray}
\Omega _{1,2}(\kappa _{x},\kappa _{y}) &=&-2\xi \left[ \cos {(\kappa _{x})}\cos {(\phi _{1})}+\cos {(\kappa _{y})}\cos {(\phi _{2})}\right] \nonumber \\
&\pm &\sqrt{g^{2}+\left\{ F-2\xi \left[ \sin {(\kappa _{x})}\sin {(\phi _{1})}+\sin {(\kappa _{y})}\sin {(\phi _{2})}\right] \right\} ^{2}},  \label{disp}
\end{eqnarray}
examples of which are shown in Fig. \ref{dispersion} for $g=-1$ and $\xi =1$. In the limit of $\phi _{1}=\phi _{2}=\phi =0$, Eq. (\ref{disp}) goes over into the known dispersion relation for the usual system of linearly coupled 2D DNLS equations, cf. Ref. \cite{1}. It features two similar branches shifted by a constant, $\Delta \Omega =2\sqrt{g^{2}+F^{2}}$, and corresponds to the Rabi waves. In the case of $\phi _{1},\,\phi _{2}\neq 0$, the shift $\Delta \Omega $ is no longer constant, as it depends on the wavenumbers $\kappa _{x}$ and $\kappa _{y}$. Note that
\begin{equation}
\Omega (\kappa _{x},\kappa _{y},\phi _{1},\phi _{2})=\Omega (-\kappa _{x},-\kappa _{y},-\phi _{1},-\phi _{2}),  \label{eqp1}
\end{equation}
but
\begin{equation}
\Omega (\kappa _{x},\kappa _{y},\phi _{1},\phi _{2})\neq \Omega (-\kappa _{x},-\kappa _{y},\phi _{1},\phi _{2}).  \label{eqp2}
\end{equation}
Equation (\ref{eqp1}) shows that the inversion of the propagation direction of the driving field inverts the direction of the Rabi-wave propagation. This is in accord with the non-reciprocity of Rabi waves in 1D chains, noted in Ref. \cite{43}.
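The two branches of Eq. (\ref{disp}) and the symmetry (\ref{eqp1}) are easy to check numerically. The following Python sketch uses the same illustrative values $g=-1$, $\xi =1$ as in Fig. \ref{dispersion}; the function name is, of course, arbitrary.

```python
import numpy as np

def omega_branches(kx, ky, phi1, phi2, F=0.0, g=-1.0, xi=1.0):
    """Two branches Omega_{1,2}(kx, ky) of the dispersion relation."""
    band = -2.0 * xi * (np.cos(kx) * np.cos(phi1) + np.cos(ky) * np.cos(phi2))
    rabi = np.sqrt(g**2 + (F - 2.0 * xi * (np.sin(kx) * np.sin(phi1)
                                           + np.sin(ky) * np.sin(phi2)))**2)
    return band + rabi, band - rabi

# At phi_1 = phi_2 = 0 the branches are shifted by the constant
# 2*sqrt(g^2 + F^2); inverting both the wavenumbers and the phases leaves
# the spectrum invariant, while inverting the wavenumbers alone does not.
o1, o2 = omega_branches(0.3, 0.7, 0.0, 0.0, F=1.0)
```

Evaluating the branches on a grid of $(\kappa _{x},\kappa _{y})$ reproduces surfaces of the type shown in Fig. \ref{dispersion}.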
\begin{figure}[h] \center\includegraphics[width=9cm]{figure1.eps} \caption{(Color online) Dispersion curves $\Omega=\Omega _{2}(\protect\kappa _{x},\protect\kappa _{y})$ for the linearized system with the plane-wave excitation: $F=0$ (black surfaces), $F=1$ (red surfaces), and $\protect\phi _{1}=\protect\phi _{2}=0$ (a), $\protect\phi _{1}=\protect\phi _{2}=\protect\pi /4$ (b), $\protect\phi _{1}=0,\,\protect\phi _{2}=\protect\sqrt{2}\protect\pi /4$ (c). Other coefficients are $\protect\xi =1$ and $g=-1$. Discrete solitons are expected to exist in the semi-infinite gap. } \label{dispersion} \end{figure} \subsection{Constructing Rabi solitons and their radiation patterns} Equations (\ref{eq12}) may give rise to stationary discrete 2D fundamental solitons and vortices, which are sought for as localized solutions of the form
\begin{equation}
\Psi _{p,q}=e^{-i\Omega t}A_{p,q},~\Phi _{p,q}=e^{-i\Omega t}B_{p,q},  \label{rabisol}
\end{equation}
where $\Omega $ is the corresponding carrier frequency, and $A_{p,q},\,B_{p,q}$ are localized complex lattice fields vanishing at infinity (in terms of the numerical solutions, they vanish at the boundaries of the computation domain). Stationary soliton solutions of Eqs. (\ref{eq12}) were numerically obtained by adopting the nonlinear-equation solver based on the Powell method, while direct dynamical simulations were based on the Runge-Kutta procedure of the sixth order \cite{47,48}. The numerical solution was constructed for a finite lattice of size $21\times 21$ (unless stated otherwise), with periodic boundary conditions. We have checked that a larger lattice produces the same results.
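The time stepping is easy to prototype with standard tools. The Python sketch below integrates Eqs. (\ref{eq12}) on a small lattice with a fixed-step fourth-order Runge-Kutta scheme (a stand-in for the sixth-order procedure used in this work), with zero-padded shifts implementing the step-like couplings of Eq. (\ref{step}); all parameter values are illustrative, and the conserved norm of Eq. (\ref{eq13}) serves as a consistency check.

```python
import numpy as np

def shift(a, dp, dq):
    """Shifted copy of a 2D array with zero padding (open boundaries,
    i.e. the couplings xi vanish outside the array)."""
    out = np.zeros_like(a)
    P, Q = a.shape
    out[max(dp, 0):P + min(dp, 0), max(dq, 0):Q + min(dq, 0)] = \
        a[max(-dp, 0):P + min(-dp, 0), max(-dq, 0):Q + min(-dq, 0)]
    return out

def rhs(psi, phi, F=0.5, xi=1.0, g=-1.0, dw=-1.0, ph1=0.3, ph2=0.3):
    """Right-hand side of the coupled evolution equations for Psi, Phi."""
    hop_a = (shift(psi, 1, 0) * np.exp(-1j * ph1) + shift(psi, -1, 0) * np.exp(1j * ph1)
             + shift(psi, 0, 1) * np.exp(-1j * ph2) + shift(psi, 0, -1) * np.exp(1j * ph2))
    hop_b = (shift(phi, 1, 0) * np.exp(1j * ph1) + shift(phi, -1, 0) * np.exp(-1j * ph1)
             + shift(phi, 0, 1) * np.exp(1j * ph2) + shift(phi, 0, -1) * np.exp(-1j * ph2))
    dpsi = 1j * F * psi + 1j * xi * hop_a - 1j * g * phi - 1j * dw * np.abs(phi)**2 * psi
    dphi = -1j * F * phi + 1j * xi * hop_b - 1j * g * psi - 1j * dw * np.abs(psi)**2 * phi
    return dpsi, dphi

def rk4_step(psi, phi, dt):
    k1 = rhs(psi, phi)
    k2 = rhs(psi + 0.5 * dt * k1[0], phi + 0.5 * dt * k1[1])
    k3 = rhs(psi + 0.5 * dt * k2[0], phi + 0.5 * dt * k2[1])
    k4 = rhs(psi + dt * k3[0], phi + dt * k3[1])
    psi = psi + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    phi = phi + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return psi, phi

# Gaussian initial excitation of the upper level on an 11x11 lattice,
# normalized to unit total probability.
n = np.arange(-5, 6)
P, Q = np.meshgrid(n, n, indexing='ij')
psi = np.exp(-(P**2 + Q**2) / 4.0).astype(complex)
phi = np.zeros_like(psi)
psi /= np.sqrt(np.sum(np.abs(psi)**2))
for _ in range(200):
    psi, phi = rk4_step(psi, phi, 0.01)
norm = np.sum(np.abs(psi)**2 + np.abs(phi)**2)
```

Since the onsite linear and cubic terms only exchange probability between the two levels, the total norm is conserved exactly by the flow, and its numerical drift measures the accuracy of the time step.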
The emitted field is described by the field operator in the Heisenberg representation \cite{19},
\begin{equation}
\hat{\mathbf{E}}(\mathbf{r},t)=-\frac{1}{4\pi \epsilon _{0}}\left( \nabla \cdot \nabla -\frac{1}{c^{2}}\frac{\partial ^{2}}{\partial t^{2}}\right) \int_{V}\frac{1}{|\mathbf{r}-\mathbf{r}^{\prime }|}\hat{\mathbf{P}}\left( t-\frac{|\mathbf{r}-\mathbf{r}^{\prime }|}{c}\right) d^{3}\mathbf{r}^{\prime }.  \label{eq22}
\end{equation}
Here, operator $\hat{\mathbf{P}}$ of the induced polarization describes the displacement current induced by the soliton's profile, which may be considered as an external source from the point of view of the antenna. The onsite polarization operator, written in terms of the creation-annihilation operators, is
\begin{equation}
\hat{\mathbf{P}}_{p,q}(t)=\mathbf{\mu }\hat{\sigma}_{p,q}^{+}(t)+\mathrm{H.c.}  \label{eq23}
\end{equation}
Due to the smallness of the field produced by the soliton in the QD array, its emission is determined in the linear approximation, hence the total field is built as a superposition of partial fields emitted by the different QDs independently. For the plane-wave drive, the emitted field is thus
\begin{eqnarray}
\hat{\mathbf{E}}(\mathbf{r},t) &=&-\frac{V}{4\pi \epsilon _{0}}\left( \nabla \cdot \nabla -\frac{1}{c^{2}}\frac{\partial ^{2}}{\partial t^{2}}\right) \mathbf{e}_{z} \nonumber \\
&&\times \sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}\frac{\hat{P}_{zp,q}\left( t-c^{-1}\left\vert \mathbf{r}-\mathbf{e}_{x}pa-\mathbf{e}_{y}qa\right\vert \right) }{\left\vert \mathbf{r}-\mathbf{e}_{x}pa-\mathbf{e}_{y}qa\right\vert }.  \label{eq24}
\end{eqnarray}
To obtain the observable value of the field, it is necessary to replace the polarization operator by its mean value with respect to the quantum state (\ref{eq11}), $\langle \Psi |\hat{P}_{zp,q}|\Psi \rangle $.
The result is
\begin{equation}
P_{zp,q}(t)=\langle \hat{P}_{zp,q}(t)\rangle =\frac{\mu }{V}\Psi _{p,q}(t)\Phi _{p,q}^{\ast }(t)\exp \left[ {i(p\phi _{1}+q\phi _{2})}\right] \exp {(-i\omega \,t)}+\mathrm{c.c.}  \label{eq25}
\end{equation}
It is convenient to present the far-zone field in the spherical coordinates with the origin set at the central point of the cell with $p=q=0$, defined as $x=R\sin {(\theta )}\,\cos {(\varphi )},\,y=R\sin {(\theta )}\,\sin {(\varphi )},\,z=R\cos {(\theta )}$. The terms of order $O(R^{-2})$ should be omitted, and the radial (longitudinal) component of the electric field vanishes too. As a result, we obtain, for the quasi-spherical observable field,
\begin{equation}
\mathbf{E}_{\mathrm{Rad}}=\lim_{R\rightarrow \infty }\mathbf{E}=\frac{\mu \mathbf{e}_{\theta }}{4\pi \epsilon _{0}}\frac{\omega ^{2}}{c^{2}}\frac{e^{-i\omega (t-R/c)}}{R}F\left( \theta ,\varphi ,\phi _{1},\phi _{2};t-\frac{R}{c}\right) +\mathrm{c.c.},  \label{eq26}
\end{equation}
with the radiation pattern
\begin{eqnarray}
F(\theta ,\varphi ,\phi _{1},\phi _{2};t-\frac{R}{c}) =\sin {\theta }\sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}\Psi _{p,q}(\tilde{t})\Phi _{p,q}^{\ast }(\tilde{t}) \nonumber \\
\times \exp \left\{ ip\left[ \frac{\omega \,a}{c}\sin {\theta }\cos {\varphi }+2\phi _{1}\right] +iq\left[ \frac{\omega \,a}{c}\sin {\theta }\sin {\varphi }+2\phi _{2}\right] \right\} ,  \label{antplanar}
\end{eqnarray}
where $\tilde{t}\equiv t-R/c+c^{-1}a\sin {\theta }\left[ p\cos {\varphi }+q\,\sin {\varphi }\right] $. We stress that, in contrast with classical macroscopic antennas, the radiation pattern given by Eq. (\ref{antplanar}) is non-steady, as it exhibits a slow dependence on the spatiotemporal variable, $t-R/c$, which corresponds to the amplitude and frequency modulation of the field due to the time dependence of the probability amplitudes in quantum state (\ref{eq11}). The stable soliton produces, through Eq.
(\ref{antplanar}), a radiation pattern in the form of \begin{eqnarray} F(\theta ,\varphi ,\phi _{1},\phi _{2}) &=&\sin {\theta }\sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}A_{p,q}B_{p,q}^{\ast }\exp \left\{ ip\left[ \frac{\omega a}{c}\sin {\theta }\cos {\varphi }+2\phi _{1}\right] \right\} \nonumber \\ &&\times \exp \left\{ iq\left[ \frac{\omega a}{c}\sin {\theta }\sin {\varphi }+2\phi _{2}\right] \right\} . \label{antplanar11} \end{eqnarray} In particular, the radiation pattern produced by the 1D array in the 2D space can be written in a simple form by setting $N_{2}=0$ in Eq. (\ref{antplanar11}): \begin{equation} F(\theta ,\varphi ,\phi )=\sin {(\theta )}\sum_{p=-N/2}^{N/2}A_{p}B_{p}^{\ast }\exp \left\{ ip\left[ \frac{\omega a}{c}\sin {\theta }\cos {\varphi }+2\phi \right] \right\} , \label{antplanar2} \end{equation} where the notation is changed as $A_{p,q}\rightarrow A_{p},\,B_{p,q}\rightarrow B_{p},\,\phi _{1}\rightarrow \phi ,\,N_{1}\rightarrow N$, to comply with that adopted in Ref. \cite{31}. It is relevant to note that, according to Ref. \cite{43}, the radiation pattern is subject to the Onsager kinetic relations, which take into regard symmetry constraints \cite{51}. As it follows from Eqs. (\ref{eq12}) and (\ref{rabisol}), $(A_{-p,-q}B_{-p,-q}^{\ast })=(A_{p,q}B_{p,q}^{\ast })$, hence \begin{equation} F(\pi -\theta ,\varphi ,-\phi _{1},-\phi _{2})=F(\theta ,\varphi ,\phi _{1},\phi _{2}), \label{kk} \end{equation} while $F(\pi -\theta ,\varphi ,\phi _{1},\phi _{2})\ne F(\theta ,\varphi ,\phi _{1},\phi _{2})$. It shows that, rotating the QD array by $180^{\mathrm{o}}$, one should invert the propagation direction of the driving field to keep the physical state invariant, similar to inverting the sign of the angular velocity in a rotating liquid, or the sign of the magnetic field \cite{51}. This symmetry agrees with the non-reciprocity of the Rabi waves exhibited by Eq. (\ref{eqp1}).
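The double sum defining the stationary radiation pattern can be evaluated numerically in a few lines. The following is a minimal NumPy sketch of this computation; the sech-shaped amplitude profile used below is an illustrative stand-in for the actual soliton amplitudes $A_{p,q}$, $B_{p,q}$, not data from the paper.

```python
import numpy as np

def radiation_pattern(A, B, theta, phi, omega_a_over_c, phi1=0.0, phi2=0.0):
    """Stationary pattern F(theta, phi): a double sum over QD sites (p, q)
    weighted by A_{p,q} B*_{p,q} and the geometric/drive phase factors."""
    N1, N2 = A.shape
    p = np.arange(N1) - N1 // 2          # site indices, centered at 0
    q = np.arange(N2) - N2 // 2
    kx = omega_a_over_c * np.sin(theta) * np.cos(phi) + 2.0 * phi1
    ky = omega_a_over_c * np.sin(theta) * np.sin(phi) + 2.0 * phi2
    phase = np.exp(1j * (p[:, None] * kx + q[None, :] * ky))
    return np.sin(theta) * np.sum(A * np.conj(B) * phase)

# illustration: a localized (sech-shaped) mode with identical real components
# on a 21x21 array, observed in the plane theta = pi/2, phi = 0
n = np.arange(21) - 10
A = 1.0 / (np.cosh(0.5 * n)[:, None] * np.cosh(0.5 * n)[None, :])
F_forward = radiation_pattern(A, A, theta=np.pi / 2, phi=0.0,
                              omega_a_over_c=np.pi / 4)
```

For identical real components symmetric about the central site, the imaginary parts of the paired $(p,q)$ and $(-p,-q)$ terms cancel, so the pattern value is real, as expected from the symmetry relation discussed in the text.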
For the cylindrical drive, the radiation pattern in the far-field zone is obtained from Eq. (\ref{eq26}) in a similar way: \begin{equation} F(\theta ,\varphi )=\sin {\theta }\sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}A_{p,q}B_{p,q}^{\ast }|S_{p,q}|\exp \left\{ \frac{i\omega a}{c}\sin {\theta }\left[ p\cos {\varphi }+q\sin {\varphi }\right] \right\} . \label{antcyl1} \end{equation} The useful information in the antenna context is provided by the radiation patterns in the $E$- and $H$-planes, defined, respectively, as $F_{e}(\theta )\equiv F(\theta ,\varphi =\pi /2)$ and $F_{h}(\varphi )\equiv F(\theta =\pi /2,\varphi )$ \cite{32}. Plots of these quantities follow the corresponding dynamically stable soliton evolution in figures below. \section{Results and discussion} \subsection{Rabi solitons driven by the plane wave} After stationary solitons were found as outlined above [see Eq. (\ref{rabisol})], their stability against small perturbations was examined in the framework of the linearized equations for small perturbations added to the stationary solution, \begin{equation} \Psi _{p,q}(t)=\left[ A_{p,q}+\delta A_{p,q}(t)\right] e^{-i\Omega t},\,\Phi _{p,q}(t)=\left[ B_{p,q}+\delta B_{p,q}(t)\right] e^{-i\Omega t}. \label{perturbations} \end{equation} In this case, the linearization of Eq. (\ref{eq12}) with respect to the perturbations $\delta A_{p,q}$ and $\delta B_{p,q}$ leads to a linear eigenvalue (EV) problem for the instability growth rate of the small perturbations. The soliton is stable if all the eigenvalues have zero real parts, while the presence of EVs with a positive real part indicates exponential or oscillatory instability, in the cases of purely real or complex EVs, respectively. Finally, the dynamical stability of the discrete solitons was verified by dint of direct simulations of Eqs. (\ref{eq12}).
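The linearization procedure can be illustrated on a simpler, single-component 1D DNLS lattice, which serves here only as a surrogate for the two-component system of Eq. (\ref{eq12}); the parameter names ($N$, $C$, $\Lambda$) and values are illustrative assumptions, not the paper's. The sketch finds an onsite stationary soliton by a Newton iteration seeded from the anticontinuum limit, builds the linearization matrix, and checks the real parts of its eigenvalues.

```python
import numpy as np

# Surrogate model: i dpsi_n/dt + C (psi_{n+1} + psi_{n-1}) + |psi_n|^2 psi_n = 0,
# stationary soliton psi_n = u_n exp(i*Lam*t):  -Lam u + C (u_+ + u_-) + u^3 = 0.
N, C, Lam = 21, 0.1, 1.0
shift = np.eye(N, k=1) + np.eye(N, k=-1)       # nearest-neighbour coupling

# Newton iteration, seeded from the anticontinuum limit u_n = sqrt(Lam)*delta_{n,0}
u = np.zeros(N); u[N // 2] = np.sqrt(Lam)
for _ in range(50):
    F = -Lam * u + C * shift @ u + u**3
    J = -Lam * np.eye(N) + C * shift + np.diag(3 * u**2)
    u -= np.linalg.solve(J, F)

# Perturbation psi = (u + a e^{lam t} + conj(b) e^{conj(lam) t}) e^{i Lam t}
# gives the EV problem  lam (a, b)^T = -i M (a, b)^T:
L = Lam * np.eye(N) - C * shift - np.diag(2 * u**2)
M = np.block([[L, -np.diag(u**2)], [np.diag(u**2), -L]])
lam = np.linalg.eigvals(-1j * M)
growth_rate = lam.real.max()    # ~0 for the stable onsite soliton
```

The onsite mode of the DNLS lattice is stable at this coupling, so the maximal real part of the spectrum vanishes up to numerical error; an unstable mode would show up as an eigenvalue with a finite positive real part.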
It is well known that the single-component 2D DNLS equation gives rise to three types of fundamental discrete solitons: stable onsite ones, and unstable hybrid and inter-site ones \cite{49}, along with several types of vortices with vorticity $S=1$ (eight configurations) and $S=2$ (four configurations) \cite{49,50,vortex}. The vorticity is identified as the winding number of the phase of the lattice field, i.e., the total change of the phase along a closed curve surrounding the pivotal point of the vortex, divided by $2\pi $. We here consider the values of $S=1$ and $S=2$. Obviously, the soliton complexes must be located inside the semi-infinite gap of dispersion relation (\ref{disp}), see Fig. \ref{dispersion}. The most straightforward counterpart of the single-component 2D DNLS equation is the model of the QD array based on Eq. (\ref{eq12}) with zero detuning, $F=0$ and $\phi _{1}=\phi _{2}=0$. In this system, we find soliton complexes formed by two identical real components of the onsite, hybrid, and inter-site types (Fig. \ref{slika1}), and families of discrete vortex solitons with topological charges $S=1$ and $S=2$ (together with quadrupoles), which are supported by the attractive XPM interaction. The localized modes in each component are similar to the corresponding types of fundamental and vortex solitons formed in the single-component 2D DNLS equation. Different types of vortices in each component are schematically shown in Tables I and II. Configurations $a$, $c$, $d$, $e$, $f$, $g$, $i$, and $j$ represent possible structures of onsite vortices, with the pivotal point located on a lattice site, while $b$, $h$, $k$, and $l$ generate off-site vortices, whose pivot is located between lattice sites (the same nomenclature as in Ref. \cite{50}). An example of the vortex of type $a$ is plotted in Fig. \ref{vortex1}.
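The winding-number definition of the vorticity can be sketched numerically as follows; the vortex profile and the loop of sites below are illustrative constructions (an $S=1$ phase pattern with a sech envelope), not the paper's actual soliton data.

```python
import numpy as np

def vorticity(field, loop):
    """Winding number of the phase of a complex lattice field along a closed
    loop of (row, col) sites surrounding the pivot: the accumulated phase
    change, with each step wrapped to (-pi, pi], divided by 2*pi."""
    phases = np.angle([field[r, c] for r, c in loop])
    steps = np.diff(np.append(phases, phases[0]))      # close the loop
    steps = (steps + np.pi) % (2 * np.pi) - np.pi      # wrap each step
    return int(round(steps.sum() / (2 * np.pi)))

# illustration: an onsite S=1 vortex, phase arg(x + i y), pinned at the center
n = np.arange(-5, 6)
X, Y = np.meshgrid(n, n, indexing="ij")
field = np.exp(1j * np.arctan2(Y, X)) / np.cosh(0.8 * np.hypot(X, Y))
# counterclockwise ring of the 8 sites surrounding the central (pivotal) site
loop = [(6, 5), (6, 6), (5, 6), (4, 6), (4, 5), (4, 4), (5, 4), (6, 4)]
S = vorticity(field, loop)   # -> 1
```

Wrapping each phase step before summing is what makes the result insensitive to the branch cut of the phase, so the same routine distinguishes $S=1$ vortices from $S=2$ ones (or from non-topological quadrupole patterns) on larger loops.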
The linear-stability analysis shows that the complexes formed by identical fundamental-soliton onsite components are stable, and that there are stability regions in the respective parameter space for vortex configurations of types $a$, $b$ (ground-state vortices), $e$, $g$ with $S=1$ (excited-state vortices), as well as for types $j$, $k$ with $S=2$. The corresponding radiation-field distributions for the onsite fundamental solitons are plotted in Fig. \ref{slika1r}, while the same for the stable vortices of the $a$ and $b$ types is shown in Fig. \ref{vortex1}. The vortices of type $k$ actually feature an offsite quadrupole configuration (rather than a truly vortical one), based on the four-site frame and sharp phase changes in steps of $\pi$ between lattice sites. The stability regions of all the above-mentioned complexes almost overlap with the stability regions of their counterparts in the single-component 2D DNLS equation. Briefly speaking, the vortex-stability range is narrow, particularly for spatially broad vortices, and shrinks with the increase of $\xi $ (which is the counterpart of the inter-site coupling constant $C$ in the single-component DNLS equation \cite{50}). This is in accordance with the fact that large $\xi $ corresponds to the continuum limit, in which all the localized vortex modes are unstable. We have also checked the possibility to form complexes of two identical vortices with opposite signs (phase shift $\pi $), but they all turn out to be unstable. The predicted dynamical stability and instability has been confirmed by direct simulations for the onsite fundamental soliton complexes and certain vortex complexes for typical values of the system's parameters. The onsite fundamental complexes and those vortex complexes which were predicted to be stable indeed keep their amplitude and phase structure (i.e., the vorticity) in the course of the evolution.
On the other hand, hybrid and offsite complexes, whose instability is predicted by the linear-stability analysis, radiate away a significant part of their norm and rearrange into dynamically robust onsite-centered localized breathers. Direct simulations of vortex complexes, which were predicted to be unstable, demonstrate that their amplitude profiles are quite robust, and they keep the vorticity in the course of the evolution. However, the symmetry with respect to the pivot is broken, and the symmetry between their two components is broken too. The mode established in this way may be categorized as an irregularly oscillating vortex breather, see Fig. \ref{vortex3}. \begin{figure}[th] \center\includegraphics [width=12cm]{figure2.eps} \caption{Families of fundamental solitons, formed by two identical onsite (solid black line), hybrid (solid red line) and offsite (dotted green line) components, are represented by the corresponding $|\Delta \protect\omega (\Omega )|$ dependencies in (a). Examples of these solitons for $\Omega =-7$ [the vertical blue line in (a) which intersects all three curves] are shown in (b) -- onsite, (c) -- hybrid and (d) -- offsite, respectively. This figure pertains to the model with the plane-wave excitation, see Eq. (\protect\ref{eq12}). } \label{slika1} \end{figure} \begin{figure}[h] \center\includegraphics [width=12cm]{figure3.eps} \caption{Amplitude, phase profiles, and radiative patterns in the $E$- and $H$-planes of vortices with $S=1$ defined as types $a$ and $b$ in Ref. \protect\cite{vortex}. The parameters in Eq. (\protect\ref{eq12}) are $\protect\xi =0.01,\,F=0,\,\protect\phi _{1}=\protect\phi _{2}=0$, and $\Omega =-19.3$.
Both are stable according to the linear-stability analysis and dynamical simulations.} \label{vortex1} \end{figure} \begin{figure}[th] \center\includegraphics [width=12cm]{figure4.eps} \caption{The radiative patterns in the $E$- and $H$-directions, $F_{e}(\protect\theta )$ and $F_{h}(\protect\phi )$, for the complex consisting of two fundamental onsite solitons with (a) $\Omega =-7,\,F=0,\,\protect\phi _{1}=\protect\phi _{2}=0$ and (b) $\Omega =-7,\,F=0,\,\protect\phi _{1}=\protect\phi _{2}=\protect\pi /4$. The plane-wave excitation is considered, see Eq. (\protect\ref{eq12}). } \label{slika1r} \end{figure} \begin{figure}[h] \center\includegraphics [width=12cm]{figure5.eps} \caption{The evolution of one component of an unstable vortex of type $c$ with $S=1$. (a) The amplitude and phase profiles of the stationary vortex with $\Omega =-10$. Panels (b) and (c) show the profiles in the course of the perturbed evolution ($t=50$ and $t=100$ in arbitrary units, respectively). Other parameters are $\protect\xi =0.01,\ F=0,\,\protect\phi _{1}=\protect\phi _{2}=0$. } \label{vortex3} \end{figure} In the 2D array of QDs with nonzero detuning ($F=1$ is chosen below, as a characteristic example), all types of the above-mentioned fundamental and vortical complexes can be generated, but the symmetry between the two components ($A_{p,q}$ and $B_{p,q}$) is not preserved. Simultaneously, the similarity to the single-component DNLS equation is lost. Namely, the overall shape of the components in fields $A_{p,q}$ and $B_{p,q}$ is kept, but amplitudes and widths of the components change, and their precise symmetry about the central point is slightly broken. In this case, the linear-stability analysis does not predict the existence of genuinely stable complexes, except for the vortex complexes of type $a$, with $S=1$, and of type $k$, with $S=2$, see below.
However, dynamical simulations demonstrate that localized modes of the onsite type form robust breathing complexes, which keep the initial overall structure (in particular, the vorticity is kept). On the other hand, the computation of the eigenvalues for small perturbations shows stability of vortex complexes of type $a$, with $S=1$, as well as of type $k$, with $S=2$, in the presence of $F\neq 0$. Dynamical simulations confirm their stability in the corresponding parameter regimes. Other families of vortex complexes formed at $F\neq 0$ are unstable according to both the linear-stability analysis and dynamical simulations. In fact, the instability destroys their phase structure, while the amplitude patterns remain quite robust. Thus, the general conclusion concerning the system (\ref{eq12}) driven by the plane wave is that, in the absence of the phase shifts, $\phi _{1}=\phi _{2}=0$, the stability of the complexes is similar to what is known about their counterparts in the single-component 2D DNLS equation. Namely, irrespective of the presence of the detuning, $F$, in Eq. (\ref{eq12}), those modes which were stable in the single-component model remain completely or effectively stable in the two-component one, evolving in some cases from the stationary shapes into breathing ones, but keeping the overall structure. Considering the system with non-zero phase shifts, $\phi _{1}$ and $\phi _{2}$ in Eq. (\ref{eq12}), we mainly focus on two configurations, \textit{viz}., the diagonal and anisotropic ones, with $\phi _{1}=\phi _{2}=\pi /4$ or $\phi _{1}=0,\,\phi _{2}=\sqrt{2}\pi /4$, respectively. In both these cases, we have found complexes of all the above-mentioned types. Namely, fundamental solitons of the onsite, hybrid and inter-site types, with symmetric components, in the case of $F=0$ (see Fig. \ref{slika4}), and their asymmetric counterparts at $F\neq 0$.
The existence boundary for the fundamental soliton complexes is slightly altered by the nonvanishing phase shifts, as can be seen from small changes of the respective dispersion surfaces displayed in Fig. \ref{dispersion}. An example of the soliton radiation field for a stable configuration is shown in Fig. \ref{slika1r}(b). Counterparts of all the vortex field configurations, which are identified above in the system with $\phi _{1,2}=0$, are also found in the system with $\phi _{1,2}\neq 0$. Differences are observed in the boundary of the existence region, and in the parameter regions where robust vortex breathers appear. In Fig. \ref{vortex4}, the evolution of the symmetric vortex soliton of type $a$ is displayed for $\phi _{1}=\phi _{2}=\pi /4,\,F=0$. According to the computation of perturbation eigenvalues, stable complexes of two $a$-type vortices (with topological charge $S=1$) keep their stability at $\phi _{1}=\phi _{2}\neq 0$, while unequal phase shifts destabilize the complexes. The radiation pattern keeps its shape, as shown in Fig. \ref{vortex4}(c). Most robust are complexes formed by the quadrupoles of type $k$ with $S=2$, in the sense that they are stable in a certain parameter area for all relations between phases $\phi _{1}$ and $\phi _{2}$. Lastly, the phase differences always destabilize other types of vortex complexes. In general, the value of the corresponding instability rate, which is proportional to a purely real eigenvalue, increases with the increase of $\phi _{1},\,\phi _{2}$ at fixed $F$. Direct simulations confirm that only complexes formed by the onsite fundamental solitons, and certain types of vortices, are stable in particular parameter ranges, chiefly in the form of robust localized breathing patterns. Similar to what is mentioned above, the instability destroys the phase structure of vortices, without essentially affecting the amplitude pattern.
\begin{figure}[h] \center\includegraphics [width=10cm]{figure6.eps} \caption{ Examples of fundamental-soliton complexes [in the model with the plane-wave excitation, Eq. (\protect\ref{eq12})], formed by identical components, at $F=0$, $\Omega =-7$, and $\protect\phi _{1}=\protect\phi _{2}=\protect\pi /4$. Plotted are real and imaginary parts of the stationary lattice fields for the complexes of the onsite (a), hybrid (b), and intersite (c) types, respectively. } \label{slika4} \end{figure} \begin{figure}[h] \center\includegraphics [width=12cm]{figure7.eps} \caption{Initial (a) amplitude and phase profiles of the vortex of type $a$, in terms of Ref. \protect\cite{vortex}, the result of its evolution at $t=100$ (b), and the respective radiative patterns $F_{e}$ and $F_{h}$ (c). The parameters in Eq. (\protect\ref{eq12}) are $\protect\xi =0.01,\,F=0,\ \protect\phi _{1}=\protect\phi _{2}=\protect\pi /4$, and $\,\Omega =-18$.} \label{vortex4} \end{figure} \begin{table}[tbp] \caption{Schemes of the amplitude and phase profiles corresponding to the two identical components of the discrete vortex with $S=1$ in the system with $F=0,\protect\phi _{1}=\protect\phi _{2}=0$ [the model with the plane-wave excitation, Eq. (\protect\ref{eq12})]. Explicitly written fractions of $\protect\pi $ (or $0$) are the values of the phase at the main sites carrying the vortex complex. Symbol \textrm{x} designates sites with zero amplitude. Only complexes with the components of the $b$ and $l$ types for $S=1$, and of the $k$ and $h$ types for $S=2$, respectively, are found in the model with the cylindrical-wave excitation.
} \label{tabela1}\par \begin{tabular}{|l|l|l|l|} \hline Type & $S$ & Phase configuration & Stable (plane wave/ cylindrical wave) \\ \hline \ \ \ \ $a$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ & \ \ \ \ Yes/No \\ & & \ \ \ \ \ \ \ \ $\pi /2$\ \ x\ \ $3\pi /2$ & \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\pi $\ & \\ \hline \ \ \ \ $b$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ \ 0 \ \ $3\pi /2$ & \ \ \ \ Yes/No \\ & & \ \ \ \ \ \ \ \ \ \ \ \ $\pi /2$ \ \ $\pi $ & \\ \hline \ \ \ \ $c$ & \ \ 1 & \ \ \ \ \ \ \ \ $\pi /4$ \ \ \ 0 \ \ $7\pi /4$ & \ \ \ \ No/No \\ & & \ \ \ \ \ \ \ \ $\pi /2$ \ \ \ x \ \ $3\pi /2$ & \\ & & \ \ \ \ \ \ \ $3\pi /4$ \ \ $\pi $ \ \ $5\pi /4$ & \\ \hline \ \ \ \ $d$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 & \ \ \ \ No/No \\ & & \ \ \ \ \ \ \ \ $\pi /4$ \ \ x \ $7\pi /4$ & \\ & & \ $\pi /2$ \ \ x \ \ \ \ x \ \ \ \ x \ \ $3\pi /2$ & \\ & & \ \ \ \ \ \ \ $3\pi /4$ \ x \ \ $5\pi /4$ & \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\pi $ & \\ \hline \ \ \ \ $e$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 & \ \ \ \ Yes/No \\ & & \ \ \ \ \ \ \ \ $\pi /4$ \ \ x \ $7\pi /4$ & \\ & & \ $\pi /2$ \ \ x \ \ \ \ 0 \ \ \ \ x \ \ $3\pi /2$ & \\ & & \ \ \ \ \ \ $3\pi /4$ \ \ x \ $5\pi /4$ & \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\pi $ & \\ \hline \ \ \ \ $f$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 & \ \ \ \ No/No \\ & & \ \ \ \ \ \ \ \ $\pi /4$ \ \ 0 \ \ $7\pi /4$ & \\ & & $\pi /2$ \ \ $\pi /2$ \ \ x \ \ $3\pi /2$ \ $3\pi /2$ & \\ & & \ \ \ \ \ \ $3\pi /4$ \ \ $\pi $ \ \ $5\pi /4$ & \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\pi $ & \\ \hline \ \ \ \ $g$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 & \ \ \ \ Yes/No \\ & & \ \ \ \ \ \ \ \ $\pi /4$ \ $\pi $ \ $7\pi /4$ & \\ & & $\pi /2$ \ $3\pi /2$ \ x \ \ $\pi /2$ \ $3\pi /2$ & \\ & & \ \ \ \ \ \ \ $3\pi /4$ \ 0 \ $5\pi /4$ & \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\pi $ & \\ \hline \ \ \ \ $h$ & \ \ 1 & \ \ \ \ \ \ \ \ \ \ \ 0 \ \ \ $7\pi /4$ & \ \ \ \ No/No \\ & & \ \ $\pi /4$ \ \ \ 0 \ \ \ $3\pi /2$ \ $3\pi /2$ & \\ & & \ \ $\pi /2$
\ \ $\pi /2$ \ \ \ $\pi $ \ \ \ $5\pi /4$ & \\ & & \ \ \ \ \ \ \ \ $3\pi /4$ \ \ \ $\pi $ & \\ \hline \end{tabular} \end{table} \begin{table}[tbp] \caption{Schemes of the amplitude and phase profiles corresponding to vortex solitons with $S=2$.} \label{tabela2}\par \begin{tabular}{|l|l|l|l|} \hline Type & $S$ & \ \ Phase configuration & Stable (plane wave/cylindrical wave) \\ \hline \ \ \ \ $i$ & \ \ 2 & \ \ \ \ \ \ \ \ $\pi /2$ \ \ \ 0 \ \ $3\pi /2$ & \ \ \ \ No/No \\ & & \ \ \ \ \ \ \ \ \ $\pi $ \ \ \ \ \ x \ \ \ \ $\pi $ & \\ & & \ \ \ \ \ \ \ $3\pi /2$\ \ \ \ 0 \ \ \ $\pi /2$ & \\ \hline \ \ \ \ $j$ & \ \ 2 & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 & \ \ \ \ Yes/No \\ & & \ \ \ \ \ \ \ $\pi /2$\ \ \ \ x\ \ \ \ $3\pi /2$ & \\ & & \ $\pi $\ \ \ \ \ \ x\ \ \ \ \ \ x\ \ \ \ \ \ x \ \ \ \ \ \ $\pi $\ & \\ & & \ \ \ \ \ \ $3\pi /2$\ \ \ \ x\ \ \ \ \ $\pi /2$ & \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 & \\ \hline \ \ \ \ $k$ & \ \ 2 & \ \ \ \ \ \ \ \ \ \ \ \ \ 0 \ \ $\pi $ & \ \ \ \ Yes/Yes \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ $\pi $ \ \ 0 & \\ \hline \ \ \ \ $l$ & \ \ 2 & \ \ \ \ \ \ \ \ \ \ \ \ 0 \ \ $3\pi /2$ & \ \ \ \ No/No \\ & & \ \ \ \ \ $\pi /2$ \ 0 \ \ \ \ $\pi $ \ \ \ \ $\pi $ & \\ & & \ \ \ \ \ \ $\pi $ \ \ \ $\pi $ \ \ \ \ 0 \ \ $\pi /2$ & \\ & & \ \ \ \ \ \ \ \ \ \ $3\pi /2$ \ 0 & \\ \hline \end{tabular} \end{table} Thus, we conclude that the changes in the values of phase shifts $\phi _{1,2}$ in Eq. (\ref{eq12}), which are introduced by the obliquely incident plane waves, as well as the detuning, $F$, do not cause qualitative changes in the structure and effective stability of the 2D self-trapped fundamental modes and particular types of vortex complexes, of type $a$ with $S=1$, and type $k$ with $S=2$ (the quadrupole). Finally, systematic simulations demonstrate that moving discrete-soliton complexes cannot be found in the present 2D model. \subsection{Rabi solitons driven by the cylindrical wave} For the 2D QD array with the cylindrical-wave excitation, described by Eq.
(\ref{eq18}), we report here results obtained for $ka=\pi /4$. As shown by a detailed analysis, these results adequately represent the generic situation. Similar to the planar-wave model, soliton complexes are located in the semi-infinite gap in the linear spectrum. However, unlike Eq. (\ref{disp}), the spectrum with the cylindrical driving wave cannot be found analytically, therefore it was calculated numerically. Fundamental solitons are built, as above, of two components, each of them being of the onsite, inter-site, or hybrid type. Vortex complexes found in the present system are displayed in Fig. \ref{cyl3}. These are complexes of the $b$ and $l$ types with $S=1$, or of the $k$ and $h$ types with $S=2$, see Tables I and II. In the system with a nonzero detuning parameter $F\neq 0$, all types of the above-mentioned fundamental and vortical complexes are generated, but the symmetry between the two components is not preserved. \begin{figure}[h] \center\includegraphics [width=12cm]{figure8.eps} \caption{The amplitude (a) and phase (b) patterns of the vortex in each component of the complex of the $k$ type with $S=2$, excited by the cylindrical wave.} \label{cyl3} \end{figure} The linear-stability analysis shows that the complexes formed by identical fundamental-soliton onsite components are stable for all values of the detuning $F$, and there are stability regions in the respective parameter space for the vortex configuration of type $k$ with $S=2$ at $F=0$. The stability region of the latter mode almost overlaps with its counterpart for the corresponding vortex complex in the 2D system with the plane-wave excitation. Radiation patterns of stable solitary Rabi structures excited by the cylindrical wave are plotted in Fig. \ref{cyl2r}.
\begin{figure}[h] \center\includegraphics [width=12cm]{figure9.eps} \caption{The evolution of the amplitude pattern of the onsite soliton complex with $\protect\mu =-17,\,\Delta \protect\omega =-30,\,F=1$ in the model with the cylindrical-wave excitation, see Eq. (\protect\ref{eq18}). Plots correspond to $t=0,\, 50, \, 100$ presented in arbitrary units.} \label{cyl2} \end{figure} \begin{figure}[h] \center\includegraphics [width=12cm]{figure10.eps} \caption{Radiation patterns $F_{e}$ and $F_{h}$ for (a) the onsite complex with $\protect\mu =-17,\,\Delta \protect\omega =-30,\,F=1$, and (b) the vortical complex of the $k$ type with $F=0,\protect\mu =17.2,S=2$, in the model with the cylindrical-wave excitation, Eq. (\protect\ref{eq18}). } \label{cyl2r} \end{figure} \begin{figure}[h] \center\includegraphics [width=12cm]{figure11.eps} \caption{The evolution of the stable vortex complex of the $k$ type, excited by the cylindrical wave, is illustrated by amplitude and phase patterns, shown at two moments of time. Parameters are $F=0,\protect\mu =17.2,S=2$. Plots correspond to $t=0$ and $50$ in arbitrary units.} \label{cyl4} \end{figure} The predicted dynamical stability has been confirmed by direct simulations for the onsite fundamental soliton complexes (Fig. \ref{cyl2}) and vortex complexes for particular values of the system parameters. Similar to what was seen above, stable vortices preserve their amplitude and phase patterns, as shown in Fig. \ref{cyl4}. In general, effectively stable localized patterns (including those which are unstable in terms of the eigenvalues but exhibit persistence in direct simulations) evolve as breathing modes. In contrast, the unstable hybrid and inter-site complexes radiate into the background a significant part of their energy. The remaining energy can be reorganized as a new localized onsite breathing structure, see Fig. \ref{cyl5}.
On the other hand, unstable vortex complexes are more robust structures, with significantly reduced radiation of energy into the background. The newly formed breathing structures preserve the localization, but the phase coherence is destroyed. The symmetry of the profile with respect to the center of the vortex is broken, as well as the symmetry between the components, see Fig. \ref{cyl6}. \begin{figure}[h] \center\includegraphics [width=12cm]{figure12.eps} \caption{The perturbed evolution of the amplitude pattern of an offsite soliton, for $\protect\mu =-17,\,\Delta \protect\omega =-110,\,F=1$. The cylindrical-wave excitation is used, see Eq. (\protect\ref{eq18}). Plots correspond to $t=0,\, 50, \, 100$ in arbitrary units. } \label{cyl5} \end{figure} \begin{figure}[h] \center\includegraphics [width=12cm]{figure13.eps} \caption{The evolution of an unstable vortex of the $l$ type in the model with the cylindrical-wave excitation. Parameters are $F=0,\protect\mu =17.2,S=1$. Plots correspond to $t=0$ and $t=100$ in arbitrary units.} \label{cyl6} \end{figure} In the physical system underlying the present lattice model, parameters may be known only with a finite accuracy, which makes it necessary to test the stability of the localized modes with respect to random variations of the coefficients of Eqs. (\ref{eq12}) and (\ref{eq18}). We have performed this test by introducing random changes, with a relative amplitude of $10\%$, to the coefficients $\xi $ and $F$, which represent uncertainties in the QD energy levels and the tunneling matrix elements that determine the strength of the inter-site coupling. The result (not shown here in detail) is that all the localized modes which were found above to be dynamically stable are also structurally stable against the random variations of the system's parameters. In fact, the robustness of discrete solitons against random deformations of the underlying lattices is known from earlier studied models \cite{1}.
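This kind of structural-stability test can be sketched on the same single-component 1D DNLS surrogate used above: the couplings receive site-to-site random variations of relative amplitude $10\%$, the lattice is evolved in time, and the norm retained by the central sites is monitored. All parameter names and values below are illustrative assumptions standing in for the randomized $\xi$ and $F$ of the actual model.

```python
import numpy as np

# Surrogate: i dpsi_n/dt + C_n-weighted coupling + |psi_n|^2 psi_n = 0,
# with +-10% random disorder on the bond couplings C_n.
rng = np.random.default_rng(0)
N, C0, dt, steps = 21, 0.1, 0.01, 5000
C = C0 * (1 + 0.1 * (2 * rng.random(N - 1) - 1))   # disordered bonds

def rhs(psi):
    lap = np.zeros_like(psi)
    lap[:-1] += C * psi[1:]      # coupling to the right neighbour
    lap[1:] += C * psi[:-1]      # coupling to the left neighbour
    return 1j * (lap + np.abs(psi) ** 2 * psi)

psi = np.zeros(N, dtype=complex); psi[N // 2] = 1.0   # onsite seed
for _ in range(steps):                                # RK4 time stepping
    k1 = rhs(psi); k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2); k4 = rhs(psi + dt * k3)
    psi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

total_norm = np.sum(np.abs(psi) ** 2)                 # conserved quantity
central_share = np.sum(np.abs(psi[N // 2 - 1:N // 2 + 2]) ** 2) / total_norm
```

A mode that is structurally stable in this sense keeps almost all of its norm on the central sites despite the disorder, which is the behavior reported in the text for the dynamically stable complexes.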
The bottom line of this subsection is that, in the system with the cylindrical-wave excitation, fundamental onsite soliton complexes and vortical ones of the $k$ type with $S=2$ are stable in certain areas of their existence region. In addition, we point out that the soliton-produced radiation patterns may be used as digital signals for data coding in the framework of quantum information transmission. Because these signals feature different patterns in the models driven by the plane and cylindrical waves, they may be clearly distinguished. \subsection{The concept of solitonic nano-antennas} As said above, the transmitting antenna is defined as a device which converts the near field into the far field \cite{32}. On the other hand, the Rabi solitons feature the spatially distributed polarization induced by the RO, as seen in Eqs. (\ref{eq23}) and (\ref{eq25}). The 2D electromagnetic radiation patterns induced by these polarization profiles suggest considering the present system in the context of the realization of transmitting antennas. Thus, the Rabi soliton offers a previously unexplored physical mechanism to emulate antennas, while the driving field plays the role of the external energy source launching and driving this device. For the data transfer, the emitted field should be temporally modulated by an input signal through the driving electromagnetic field. Actually, this modulation is relatively slow, hence it may be considered as an adiabatic process. As mentioned above, a basic characteristic of the antenna is the 2D radiation pattern, or, in some cases, partial patterns in the $E$- and $H$-planes \cite{32}. The necessity to consider the radiation profiles in 2D makes the present analysis principally different from the study of RO solitons in 1D arrays, which was recently reported in Ref. \cite{31}.
Radiation pattern (\ref{antplanar11}) produced by the QD array can be rewritten in terms of the standard notation adopted in the antenna theory \cite{32} as $F(\theta ,\varphi )=\sin {(\theta )}\cdot (AF)_{N_{1}\times N_{2}}$. The first factor, $\sin {(\theta )}$, is the radiation pattern of a single emitter (in our case, the electric dipole), while the second one is the array factor, \begin{eqnarray} (AF)_{N_{1}\times N_{2}} &=&\sum_{p=-N_{1}/2}^{N_{1}/2}\sum_{q=-N_{2}/2}^{N_{2}/2}A_{p,q}B_{p,q}^{\ast }\exp \left\{ ip\left[ c^{-1}\omega a\sin {\theta }\cos {\varphi }+2\phi _{1}\right] \right\} \nonumber \\ &&\times \exp \left\{ iq\left[ c^{-1}\omega a\sin {\theta }\sin {\varphi }+2\phi _{2}\right] \right\} . \label{kll} \end{eqnarray} In the macroscopic phased antenna array, elements are excited by a signal having a constant amplitude and a phase which uniformly increases along the array's axis \cite{32}. The array factor for that case corresponds to Eq. (\ref{kll}) with $A_{p,q}B_{p,q}^{\ast }=1$. The phase variation allows one to perform scanning of the space by steering the main lobe of the radiation pattern. One of the most efficient means to implement the antenna control is the use of switched-beam systems which can impose different angular patterns, in order to enhance the emitted signal in a preferable direction. The beam-forming algorithm is implemented through a complex pattern of the excitation of individual elements [which corresponds to factor $S_{pq}=A_{p,q}B_{p,q}^{\ast }$ in notation (\ref{kll})], adjusted so as to place the maximum of the main beam in the necessary direction \cite{32}. The purport of the solitonic nano-antenna concept is to introduce the beam-forming algorithm, which is determined by the set of factors $S_{pq}$, using the soliton-emission profile.
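The steering role of the phase shift in the uniform case $A_{p,q}B_{p,q}^{\ast }=1$ is easy to check numerically. The sketch below reduces the array factor to a 1D row ($N_{2}=0$) and locates the main lobe with and without a phase gradient; the grid sizes and the value $\omega a/c=\pi /4$ are illustrative choices, not taken from the paper.

```python
import numpy as np

# Uniform 1D phased-array factor: Eq. (kll) with A B* = 1 and N2 = 0.
def array_factor_1d(theta, N, omega_a_over_c, phi1):
    p = np.arange(-N // 2, N // 2 + 1)
    return np.abs(np.sum(np.exp(1j * p * (omega_a_over_c * np.sin(theta)
                                          + 2 * phi1))))

N, koa = 20, np.pi / 4
thetas = np.linspace(-np.pi / 2, np.pi / 2, 2001)
af0 = [array_factor_1d(t, N, koa, phi1=0.0) for t in thetas]
af1 = [array_factor_1d(t, N, koa, phi1=np.pi / 16) for t in thetas]
# without the phase shift the main lobe points broadside (theta = 0); a nonzero
# phi1 moves it to sin(theta) = -2*phi1/(omega*a/c), here sin(theta) = -1/2
peak0 = thetas[np.argmax(af0)]
peak1 = thetas[np.argmax(af1)]
```

The maximum of the sum occurs where every term has the same phase, $\,c^{-1}\omega a\sin\theta +2\phi_{1}=0$, which reproduces the textbook beam-steering condition of phased-array antennas.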
As several different stable solitons may exist in the given array, the choice of the overall profile depends on initial conditions. This implies that a finite number of predefined array factors exist for a given set of antenna parameters, while the initial conditions provide a facility for choosing and switching the suitable one. To the best of our knowledge, this feature has no analogs in previously developed types of nano-antennas, and it seems quite promising from the practical point of view. For the stable solitons built of two identical components, $A$ and $B$ in terms of Eq. (\ref{eq21}), the array-factor pattern is independent of the phase profile of the soliton. Partial radiation patterns in the $E$- and $H$-planes for different types of the solitons and different types of the excitation are displayed in Figs. \ref{vortex1}, \ref{slika1r}, \ref{vortex4}, and \ref{cyl2r}. A basic problem that must be addressed to implement emitting nano-arrays in a real physical setting is maintaining the coherence of individual emitters in the array over a sufficiently long time. The destruction of the coherence is caused by physical mechanisms of dephasing and relaxation, as well as by material imperfections of the underlying array. Recent experimental results clearly demonstrate that these problems may be resolved, allowing quite large QD arrays to maintain the coherent behavior even at room temperatures \cite{extra1,extra2}. There are two types of QDs that are commonly employed in nanophotonics \cite{extra3}. The first one is built of semiconductor nanocrystals (typically II-VI compounds) embedded in a glass matrix. In Ref. \cite{extra1}, the QD ensemble of the CdSe/ZnS core/shell nanocrystals, capped with octadecylamine, has been produced. It was fabricated as a 2D inhomogeneously broadened close-packed network of $50\times 50$ sites, with the average diameter and interdot distance of $5.2$ and $7.9$ nm, respectively.
There also appeared dark QDs associated with defects in the ensemble. The qualitative picture of the exciton dynamics in the QD array observed in Ref. \cite{extra1}, based on direct measurements, shows that when an exciton is photoexcited in a high-energy QD, energy transfer occurs preferentially to a low-energy QD. At low temperatures, the exciton is trapped at a local low-energy site to which the energy was transferred. In contrast to that, at room temperature the exciton hops repeatedly, until it is transferred to a dark QD, where it undergoes nonradiative recombination. When an exciton is photoexcited in a low-energy QD, it tends to be trapped in it. However, at room temperature, there is a non-negligible probability for the exciton to be transferred from a lower-energy QD to a higher-energy one. In Ref. \cite{extra1}, time- and spectrally resolved fluorescence intensities were measured by means of site-selective spectroscopy at both room and low ($80$ K) temperatures. The respective inverse radiative decay rate is found to be $15$ ns, which makes this type of arrays promising for the implementation of nano-antennas. The second type of QDs is the self-organized structures produced by the epitaxial crystal growth in the Stranski-Krastanov regime \cite{extra3}. The self-assembled QD array was grown in Ref. \cite{extra2} with the help of the molecular-beam epitaxy on a semiconducting ($100$)-oriented GaAs substrate, with a $500$ nm N$^{+}$-GaAs buffer layer. The respective lens-shaped In$_{0.65}$Al$_{0.35}$As quantum dot is a part of a sphere with a fixed height of $3.4$ nm and a base diameter of $38$ nm. The photoluminescence spectrum of the QD array is time-resolved at the excitation density of $1065$ W/cm$^{2}$, at $77$ K. The corresponding exciton lifetime, found from the measurements, is $800-1200$ ps, which complies with the respective value for the arrays of the first type \cite{extra1}.
Note that, for the second type, the growth of the top layer allows cavities to be formed and electrical contacts to be applied, which makes it especially suitable for the implementation of nano-antennas. Lastly, it is relevant to mention that the soliton mechanism of the self-trapping of the antenna helps to mitigate detrimental effects of imperfections of the underlying QD lattice, as discrete solitons are well known to be robust against deformations of the lattice \cite{1}. Moreover, the fact that the phase structure of topologically organized solitons, such as discrete vortices \cite{vortex}, is stable in imperfect lattices also indicates that nano-antennas constructed in this way tend to stabilize themselves against dephasing. It is relevant to note that the self-organized lattice built of semiconductor QDs is not the only structure allowing the implementation of soliton-based nano-antennas, and the RO is not the only enabling mechanism for this. Another promising way of achieving this purpose is suggested by the theoretical analysis of discrete dissipative plasmon solitons in an array of graphene QDs \cite{2new}. A single QD in such an array is a doped graphene nanodisk placed on top of the plane background. The single-dot excitation is a confined surface plasmon with a resonant frequency in the THz or infrared range. The QDs in the array are coupled by long-range dipole-dipole interactions. As was demonstrated in Ref. \cite{2new}, the soliton formation and its stability are controlled by an incident driving electromagnetic wave via the Kerr optical nonlinearity. In spite of the different physical origins, both the surface-plasmon solitons predicted in Ref. \cite{2new} and the Rabi solitons considered above exhibit strongly confined one- or two-peak areas of electric polarization. Such peaks may be symmetric or asymmetric, while the spatial structure of the confinement area is tunable via the incidence angle of the oblique driving field.
Thus, the above-mentioned graphene QD array makes it possible to design an electrically controlled nano-antenna for the THz and infrared frequency ranges. The radiation pattern (or array factor) for this antenna is given by Eq. (\ref{kll}) with the necessary modifications (the slowly varying amplitudes of the orthogonally directed dipole moments can be found as solutions of the nonlinear coupled equations derived in Ref. \cite{2new}). The qualitative shape of such a radiation pattern is similar to that of the Rabi-soliton-based nano-antenna proposed above. \section{Conclusions} The objective of this work is to introduce the concept of a tunable nano-antenna array based on the discrete-soliton patterns formed in 2D nonlinear lattices of semiconductor QDs (quantum dots). These lattices can be realized as square-shaped arrays of identical two-level quantum oscillators (the self-organized semiconductor QDs), coupled to nearest neighbors by electron-hole tunneling and interacting with the external electromagnetic field. The local-field corrections, which account for the difference between the field inside the QD and the external field, induce the nonlinearity of the electron-hole motion inside each QD. The main conclusions of our study are summarized as follows. (i) The model of RO (Rabi oscillations) in the 2D QD lattice has been derived, taking the inter-dot tunneling and local-field correction into account. The model is based on a set of linearly and nonlinearly coupled DNLS equations for the probability amplitudes of the ground and first excited states of the two-level oscillators (QDs). Two different driving electromagnetic fields were considered, \textit{viz}., plane and cylindrical waves. In the former case, the coupling coefficients are complex, with absolute values independent of the QD position, and phases increasing linearly across the QD array in both directions.
For the cylindrical-wave drive, the absolute values of the coupling coefficients depend on the distance between the given QD and the source of the driving field, while the corresponding phases depend nonlinearly on that distance. (ii) Stable discrete-soliton complexes are found. They are built of onsite fundamental single-peaked solitons, taken in both components, or of discrete vortex solitons of certain types. (iii) The emission properties of the stable solitary modes have been characterized by angular radiation patterns. These patterns depend strongly on the type of the discrete localized mode. (iv) The concept of self-assembling nano-antennas, based on the stable discrete-soliton complexes in the nonlinear lattices, is introduced. The required type of localized mode may be selected by the initial conditions, which, in turn, can be controlled by the external optical field (supplied in the form of a strong laser pulse). As a consequence, a finite number of predefined radiation patterns can be provided by a given antenna. This mode of operational control has no analog in previously developed antenna schemes. The stability of the self-trapped nano-antennas against structural imperfections and intrinsic dephasing has been considered. Thus, the system proposed here can be related to the switched-beam systems used in macroscopic antennas \cite{32}, which is a promising setting for applications in nanoelectronics and nanooptics. (v) The concept of soliton-excited nano-antennas may be carried over to the surface-plasmon mechanism of soliton formation in an array of graphene QDs with Kerr nonlinearity, predicted in Ref. \cite{2new}. This type of nano-antenna is promising for applications in the THz and infrared frequency ranges. \section*{Acknowledgements} G.G., A.M., and Lj.H. acknowledge support from the Ministry of Education and Science of Serbia (Project III45010). G.S.
acknowledges support from the EU FP7 projects FP7 People 2009 IRSES 247007 CACOMEL and FP7 People 2013 IRSES 612285 CANTOR. \section*{References}
\section{Introduction} \label{Intro} Solid-state refrigeration based on caloric effects is currently a very active research topic because of the possibility of developing new, environmentally friendly alternative refrigeration devices \cite{Sandeman2012}. Caloric effects originate from the thermal response of any thermodynamic system to changes induced by the variation (either application or removal) of an external field \cite{Manosa2013}. Depending on the external field, the corresponding caloric effect is called magnetocaloric (magnetic field) \cite{Krenke,sandeman1,Oliveira2010,Planes2009,Planes2014}, barocaloric (hydrostatic pressure) \cite{Manosa2010,Manosa2011,Oliveira2011,Stern2014}, electrocaloric (electric field) \cite{Neese2008,Lu2011,Moya2013,Lisenkov2013}, elastocaloric (mechanical stress) \cite{Bonnot2008,Xiao2013,Nikitin1992}, or toroidocaloric (toroidic field) \cite{Castan2012}. The two limiting situations correspond to varying the external field either isothermally or adiabatically. In the first case a change in the entropy is induced, while in the second the system responds with a temperature shift. This isothermal entropy change and this adiabatic temperature change are commonly used to quantify the caloric response of a given system. The goal is to induce a large caloric effect in response to small or moderate variations of the external field. Indeed, this is most likely to occur in the vicinity of a phase transition \cite{Manosa2010}. Moreover, systems with coupled degrees of freedom may respond to several kinds of external fields. This gives rise to the so-called field-tuned caloric effect and the multicaloric effect \cite{Fahler2011,Vopson2012,Meng2013,Moya2014}. In the first situation, the secondary field is kept constant during the variation of the primary field. While the primary field effectively drives the caloric response, the secondary field makes it possible to adjust the best operating conditions.
In the second situation, the multicaloric effect refers to the variation of two or more fields, either simultaneously or sequentially. For instance, in systems with magnetoelastic coupling, the interplay between magnetism and elastic properties makes it possible to induce a caloric response by applying a magnetic field and/or a mechanical field (hydrostatic pressure or stress). In the present investigation we shall focus on magnetovolumic systems. Magnetovolumic effects arise as a special case of magnetoelastic coupling in which variations in the magnetization are accompanied by an isotropic change in volume. Such variations may be spontaneous, through a phase transition, or forced by the application of an external field. The interaction between volume and magnetism results in the interrelation between magnetocaloric and barocaloric effects observed experimentally in different materials \cite{Fujita12003,Fujita22003,Lyubina2008,Annaorazov1996,Annaorazov2002}. The present theoretical study is based on a mean-field Ising model \cite{Ising} for phase transitions, extended to include coupling between volume and magnetism. Interestingly, the model describes two different situations. In the first, the magnetovolumic coupling induces a first-order para-ferromagnetic phase transition that can be modified by the application of a hydrostatic pressure and/or a magnetic field. In the second, the interplay between volume and magnetism gives rise to a strong first-order antiferro-ferromagnetic transition that responds to the application of both hydrostatic pressure and magnetic field. Effective and mean-field approaches \cite{trigero1,Bean1962,Yamada2001,Ranke2004,Ranke2006,Ranke2009,Valiev2009,menyuk1} have been used previously to investigate magnetovolumic effects.
Compared with these prior investigations, the present work incorporates the occurrence of a metamagnetic transition and the study of caloric and cross-caloric effects. The paper is organized as follows. In section \ref{Model} we briefly summarize the main aspects of the model and the thermodynamics of caloric effects. In sections \ref{Ferro} and \ref{Inversion} we solve the model numerically, with special attention to the metamagnetic transition (section \ref{Inversion}). We first obtain the phase diagram and study how the different transition temperatures change with the applied fields (hydrostatic pressure and/or magnetic field), and then we present the results for both the baro- and magnetocaloric effects. In section \ref{Discussion} we compare our results with experimental data available for magnetic and metamagnetic materials. We finally outline our main conclusions in section \ref{conclusions}. \section{Modeling and thermodynamics of caloric effects} \label{Model} The model under consideration is based on the statistical-mechanical mean-field Ising model extended to include magnetovolumic effects. The starting point is a free energy consisting of the sum of two contributions, $f=f_M+f_C$. The first contribution, $f_M$, which accounts for the magnetic degrees of freedom, can be expressed in terms of both the ferromagnetic ($m$) and the antiferromagnetic ($x$) order parameters simultaneously \cite{Ising}: \begin{equation} \begin{split} & f_{M} (T,m,x) = -\frac{Jz}{2}(m^2-x^2) - k_B T \ln2+ \frac{k_B T}{4} [ (1+m+x) \ln (1+m+x) + \\ & + (1+m-x) \ln (1+m-x) +(1-m+x) \ln (1-m+x) + \\ & + (1-m-x) \ln (1-m-x) ]. \label{EQ1} \end{split} \end{equation} Hereafter the exchange interaction is fixed to be positive ($J>0$). In that case, the free energy (\ref{EQ1}) produces a continuous para-ferromagnetic phase transition at $T_c=zJ/k_B$, where $z$ is the number of nearest neighbours and $k_{B}$ the Boltzmann constant.
The second contribution, $f_C$, incorporates the magnetovolumic coupling and includes the magnetostriction coupling of both order parameters, $m$ and $x$, to the relative volume change $w=\frac{\delta \Omega}{\Omega}$, where $\Omega$ is a reference volume. Restricting the coupling terms to the minimum order allowed by symmetry, one may write: \begin{equation} f_C(m,x,w)= \frac{\alpha_0}{2}w^2 - (\alpha_1 m^2 + \alpha_2 x^2)\frac{w}{2}, \label{EQ2} \end{equation} where we have also included a purely elastic contribution, with $\alpha_0$ proportional to the inverse of the compressibility. Furthermore, in order to account for pressure effects as well as for the interplay with an external magnetic field, we introduce the following Legendre transform of the total free energy: \begin{equation} g= f(T,m,x,w)-Hm+ \Omega P w, \label{EQ3} \end{equation} where $\mathnormal {g}$ stands for the Gibbs free energy, $P$ is the hydrostatic pressure and $H$ is the external magnetic field. In expression (\ref{EQ2}), $\alpha_1$ is the magnetostriction coefficient that gives rise to a first-order phase transition from a paramagnetic ($\cal{P}$) phase to a ferromagnetic ($\cal{F}$) phase upon lowering the temperature. The coefficient $\alpha_2$ causes an exchange inversion of the effective interaction, so that antiferromagnetic ($\cal{AF}$) order may exist for some range of model parameters and applied external fields. We remark that the Landau-type phenomenological expansion in Eq. (\ref{EQ2}) is based on symmetry considerations and is intended to describe the effects of the interplay between volume and magnetism rather than to address its physical origin. The physical mechanism behind this interplay, and the way it operates, can differ from one system to another. Nevertheless, the symmetry-based coupling in (\ref{EQ2}) is present in all magnetic materials, although in some cases it may be negligible.
Moreover, the coupling coefficients are material dependent and can be functions of, among others, the chemical composition and the valence-electron concentration. It is worth mentioning that the linear-quadratic coupling between volume change and magnetization has been used previously through a prescribed linear dependence of the Curie temperature on the volume change \cite{Bean1962}. It is convenient to eliminate the (secondary) order parameter $w$ by minimizing expression (\ref{EQ3}) with respect to $w$. One gets: \begin{equation} w=\frac{1}{2\alpha_0} \left [(\alpha_1 m^2 + \alpha_2 x^2)- 2P \Omega \right ]. \label{EQ4} \end{equation} This constitutive equation satisfies the following Maxwell relation \cite{Planes20142,Vopson2012}, \begin{equation} \left (\frac{\partial m}{\partial P} \right )_{T,H} = - \Omega \left (\frac{\partial w }{\partial H} \right )_{T,P}, \label{EQ7bis} \end{equation} which underlines the origin of the multicaloric response. Therefore, the Gibbs free energy per magnetic particle, in reduced units, along the optimum path in $m$, $x$, and $w$ given by (\ref{EQ4}), is: \begin{equation} \begin{split} & g^* = \frac{g}{zJ}= -\frac{1}{2}(m^2-x^2) - T^* \ln2 + \frac{T^*}{4} [ (1+m+x) \ln (1+m+x) + \\ & + (1+m-x) \ln (1+m-x) +(1-m+x) \ln (1-m+x) + \\ & + (1-m-x) \ln (1-m-x) ]- \frac{1}{8 \alpha_0^*}[(\alpha_1^* m^2+\alpha_2^* x^2)- 2P \Omega^*]^2- H^* m, \label{EQ5} \end{split} \end{equation} where the superscript ($*$) indicates that the corresponding quantity is normalized by $zJ$. We take $\alpha_0^*=1$ and $\Omega^*=1$ without loss of generality. When a given external field ($Y$) is modified (applied/removed) isothermally, the corresponding caloric effect is related to the entropy change of the system, which can be obtained from basic thermodynamics \cite{Planes2009,Planes20142}.
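To make this concrete, the equilibrium order parameters can be obtained by direct numerical minimization of the reduced Gibbs free energy (\ref{EQ5}). The following is a minimal, self-contained sketch (not the code used in this work; grid sizes are illustrative) in the reduced units of the text, $\alpha_0^*=\Omega^*=1$, using a brute-force search over the physical domain $|m|+|x|<1$:

```python
import numpy as np

def g_star(m, x, T, H=0.0, P=0.0, a1=1.0, a2=0.0, a0=1.0):
    """Reduced Gibbs free energy per particle, Eq. (EQ5), with alpha0* = Omega* = 1."""
    ent = sum((1 + sm * m + sx * x) * np.log(1 + sm * m + sx * x)
              for sm in (1, -1) for sx in (1, -1))
    return (-0.5 * (m**2 - x**2) - T * np.log(2) + 0.25 * T * ent
            - (a1 * m**2 + a2 * x**2 - 2.0 * P)**2 / (8.0 * a0) - H * m)

def equilibrium(T, H=0.0, P=0.0, a1=1.0, a2=0.0, n=401):
    """Brute-force global minimum of g* over the physical domain |m| + |x| < 1."""
    v = np.linspace(-0.995, 0.995, n)
    m, x = np.meshgrid(v, v, indexing="ij")
    with np.errstate(invalid="ignore", divide="ignore"):
        g = np.where(np.abs(m) + np.abs(x) < 1.0,
                     g_star(m, x, T, H, P, a1, a2), np.inf)
    i, j = np.unravel_index(np.argmin(g), g.shape)
    return v[i], v[j]
```

Above the Curie point ($T^*>1$ at $P=H^*=0$) this returns the paramagnetic solution $m=x=0$, while well below it the minimizer is ferromagnetic with $|m|$ close to 1 (for $\alpha_2^*=0$).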
Indeed, for a finite change of the field ($Y=0 \,\rightarrow \,Y \neq 0$), the corresponding field-induced isothermal entropy change is given by: \begin{equation} \Delta S(T,0 \rightarrow Y) = S(T,Y)-S(T,0) =\int_0^Y \left( \frac{\partial S}{\partial Y} \right )_T dY = \int_0^Y \left( \frac{\partial X}{\partial T} \right )_Y dY, \label{EQ6} \end{equation} where we have used the appropriate Maxwell relation and $X$ is the variable thermodynamically conjugate to the field $Y$. The present model can be applied to study both the magnetocaloric (MCE) and barocaloric (BCE) effects, corresponding to ($Y=H$, $X=m$) and ($Y=-P$, $X=w$) respectively. Indeed, the entropy can be obtained directly from (\ref{EQ5}) by taking into account that \begin{equation} \begin{split} & S(m,x) = - \left [\frac{\partial g^{*}}{\partial T^{*}} \right ]_{H,P} = \\ & \ln 2 - \frac{1}{4} [ (1+m+x) \ln (1+m+x) + (1-m+x) \ln (1-m+x) + \\ & + (1+m-x) \ln (1+m-x)+(1-m-x) \ln (1-m-x) ], \end{split} \label{EQ7} \end{equation} where $m=m(T^{*},H^{*},P)$ and $x=x(T^{*},H^{*},P)$ are the equilibrium order parameters obtained after minimization of the free energy (\ref{EQ5}). In addition, for a given caloric effect, the entropy change depends on the (secondary) tuning field. For instance, the pressure-tuned MCE at a given constant value of $P$ is characterized by the entropy difference $\Delta S(T,0 \rightarrow H,P) = S(T,H,P)-S(T,0,P)$. Alternatively, the magnetic-field-tuned BCE depends on the value of the (secondary) applied magnetic field $H$ and is given by $\Delta S(T,H,0 \rightarrow P) = S(T,H,P)-S(T,H,0)$. Notice that by tuning the secondary field it is possible to adjust the optimum temperature range for the caloric effect.
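Equation (\ref{EQ7}) is straightforward to evaluate once the equilibrium order parameters are known. The short sketch below (illustrative only; entropy in units of $k_B$) reproduces its limiting values, $S=\ln 2$ in the fully disordered state and $S=0$ for complete ferro- or antiferromagnetic order, so a field-induced entropy change follows as a difference of two such evaluations:

```python
import numpy as np

def S_mag(m, x):
    """Magnetic entropy per particle, Eq. (EQ7), in units of k_B."""
    s = np.log(2)
    for sm in (1, -1):
        for sx in (1, -1):
            t = 1 + sm * m + sx * x
            if t > 0:                 # t*ln(t) -> 0 as t -> 0+
                s -= 0.25 * t * np.log(t)
    return s

# A field-induced entropy change is then a difference, e.g.
# dS = S_mag(m_TH, x_TH) - S_mag(m_T0, x_T0) for equilibrium order parameters.
```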
Moreover, in the case of the multicaloric effect the corresponding entropy change is given by $\Delta S(T,0\rightarrow H,0\rightarrow P)= S(T,H,P)-S(T,0,0)$, with pressure and magnetic field applied/removed simultaneously (or sequentially). Given that the entropy is a state function, depending only on the current state of the system, it is easy to show that \cite{Planes2014} \begin{equation} \begin{split} & \Delta S(T,0 \rightarrow H,0 \rightarrow P) = \\ & \Delta S_{MCE} (T,0 \rightarrow H, 0)+ \Delta S_{H-BCE} (T,H, 0 \rightarrow P) = \\ &\Delta S_{BCE} (T,0,0 \rightarrow P)+ \Delta S_{P-MCE} (T,0 \rightarrow H, P), \end{split} \label{EQ8} \end{equation} where $\Delta S_{MCE}$ stands for the MCE, $\Delta S_{P-MCE}$ for the P-tuned MCE, $\Delta S_{BCE}$ for the BCE and $\Delta S_{H-BCE}$ for the H-tuned BCE. For the sake of clarity we shall keep this notation throughout the present work. When the external field is changed adiabatically, the resulting temperature change can be expressed as \begin{equation} \Delta T(0\rightarrow\,Y) = -\int_{0}^{Y}{\frac{T}{C}\left( \frac{\partial S}{\partial Y}\right)_{T} dY }= -\int_{0}^{Y}{\frac{T}{C}\left( \frac{\partial X}{\partial T}\right)_{Y} dY }, \label{EQT9} \end{equation} where again we have used the appropriate Maxwell relation, $C$ is the heat capacity, and the external field is varied from $Y=0$ to $Y\neq 0$. Note that the thermodynamic expression (\ref{EQT9}) involves the total entropy of the system, whereas the model entropy in Eq. (\ref{EQ7}) accounts only for the magnetic contribution. Consequently, the adiabatic temperature variations (\ref{EQT9}) computed with this entropy alone are definitely unphysical.
To remedy this, we consider the lattice contribution per particle in the Debye approximation, given by \cite{Oliveira2010} \begin{equation} S_{v} = k_B \Biggl[-3\ln\left(1-e^{-\frac{\theta_{D}}{T}}\right) +12\left(\frac{T}{\theta_{D}}\right)^{3}\int_{0}^{\theta_{D}/T}\frac{x^{3}}{e^{x}-1}dx\Biggr], \label{EQTS} \end{equation} $\theta_{D}$ being the Debye temperature. We then simply append expression (\ref{EQTS}) to the magnetic entropy (\ref{EQ7}). Physically, this additional term plays the role of a thermal bath (or reservoir) replacing the effects of the remaining degrees of freedom not considered explicitly in the model; this is a common approach in statistical mechanics. We note that possible contributions from a volume dependence of the lattice entropy or from electronic effects are not considered explicitly here. Nevertheless, when examining the behaviour of a specific material, such effects can be relevant and should then be taken into account. \section{Field-induced Ferromagnetic Transition} \label{Ferro} In this section we briefly summarize the main results obtained by numerically solving the minimal model that allows for a discontinuous $\cal{P}$-to-$\cal{F}$ phase transition, involving a volume variation, under the application of an external field ($P$ and/or $H$). This corresponds to setting $\alpha_2^*=0$ in Eq. (\ref{EQ5}), which yields the free-energy function \begin{equation} \begin{split} & g^* = -\frac{m^2}{2} - T^* \ln 2 + \frac{T^*}{2} [ (1+m) \ln (1+m) + \\ & + (1-m) \ln (1-m) ] - \frac{1}{8 \alpha_0^*}(\alpha_1^* m^2 - 2 \Omega^* P)^2 - H^* m, \label{EQ9} \end{split} \end{equation} where the ${\cal AF}$ order parameter is $x=0$ for the whole range of $T^*$ and $\alpha_1^*$. For given values of the external fields, direct numerical minimization of (\ref{EQ9}) with respect to $m$ yields the thermodynamic solutions $m(T^*,H^*,P)$.
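Both numerical ingredients used in this section, the minimization of the one-variable free energy (\ref{EQ9}) and the Debye lattice entropy, can be sketched as follows (a minimal illustration in reduced units, $k_B=\alpha_0^*=\Omega^*=1$; the grid sizes are arbitrary choices, and the coupling term is written in the form obtained by substituting (\ref{EQ4}) into (\ref{EQ3})):

```python
import numpy as np

def m_equilibrium(T, H=0.0, P=0.0, a1=1.10, a0=1.0):
    """Global minimizer m(T*, H*, P) of the reduced free energy, Eq. (EQ9)."""
    m = np.linspace(-0.9999, 0.9999, 20001)
    g = (-0.5 * m**2 - T * np.log(2)
         + 0.5 * T * ((1 + m) * np.log(1 + m) + (1 - m) * np.log(1 - m))
         - (a1 * m**2 - 2.0 * P)**2 / (8.0 * a0) - H * m)
    return m[np.argmin(g)]

def debye_entropy(T, theta_D):
    """Lattice entropy per particle (units of k_B), standard Debye form."""
    y = theta_D / T
    x = np.linspace(1e-8, y, 20001)
    f = x**3 / np.expm1(x)                                  # Debye integrand
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid rule
    return -3.0 * np.log(-np.expm1(-y)) + 12.0 * integral / y**3
```

For $\alpha_1^*=1.10$ and $H^*=P=0$, the minimizer jumps from $|m|$ close to 1 at low $T^*$ to $m=0$ above the discontinuous transition, and the lattice term recovers the Debye $T^3$ law at low temperature.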
Afterwards, it is possible to compute all the thermodynamic quantities of interest. In the present study we restrict ourselves to some representative results. First, Fig. \ref{FIG1} shows the phase diagram as a function of the coupling parameter $\alpha_1^*$, for $H^*=0$ and for three different values of the pressure, $P$=0, 0.05, 0.1, as indicated. Each curve exhibits two tricritical points ($\left(\alpha_{1t}^*\right)_{\pm}$, $T^{*}_{t}$) that change with the external pressure $P$. For $\left(\alpha_{1t}^*\right)_{-} < \alpha_1^*< \left(\alpha_{1t}^*\right)_{+}$ the transition is continuous, whereas for $\alpha_1^*>\left(\alpha_{1t}^*\right)_{+}$ and $\alpha_1^*<\left(\alpha_{1t}^*\right)_{-}$ it is discontinuous. The Curie temperature $T_c^*$ for the continuous transition \footnote{It can be obtained from a Landau expansion of the free energy (\ref{EQ9}) by requiring that the harmonic coefficient be equal to zero.} is given by $T_c^*(P) = 1 -\frac{\alpha_1^*}{\alpha_0^*} \Omega P$. An inspection of Fig. \ref{FIG1} reveals that for $P=0$ the sign of $\alpha_1^*$ is irrelevant, whereas under an applied external pressure $T_c^*$ (continuous line) may decrease or increase with $P$, depending on whether the coupling parameter $\alpha_1^*$ is positive or negative, respectively. Beyond the tricritical points, the transition temperature for the first-order transition (dashed line) increases with $\alpha_1^*$, regardless of its sign. Below, we summarize the main results obtained for the MCE and BCE for a representative value $\alpha_1^*= \pm 1.10$, for which the transition is discontinuous. Figure \ref{FIG2} displays the temperature dependence of the MCE for different external fields. In the upper panels we have plotted (a) the isothermal entropy change $\Delta S_{MCE}(T,0\rightarrow H^*, P=0)$ and (b) the adiabatic temperature shift $\Delta T_{a}^*(0\rightarrow H^*, P=0)$, for increasing values of the applied magnetic field (denoted by an arrow).
Both behaviours are consistent with a conventional MCE. To illustrate the effect of the external pressure on the MCE, we have plotted $\Delta S_{P-MCE}$ at $P=0.05$ for $\alpha_1^*=1.1$ (c) and $\alpha_1^*=-1.1$ (d). The effect of $P$ is to shift the MCE peak either to lower (c) or higher (d) temperatures, depending on the sign of $\alpha_1^*$, in accordance with the tendency to promote the phase with the lower volume. \begin{figure}[ht] \centering \includegraphics[clip,scale=0.5]{Figure1.eps} \caption{(Color online) Transition temperature versus the coupling parameter $\alpha_1^*$ for $H^{*}=0$. The three curves correspond to selected values of the pressure $P$ as indicated. Second-order transitions are denoted by solid lines and first-order transitions by dashed lines. Both curves intersect at the two tricritical points $\left(\alpha_{1t}^*\right)_{+}$ and $\left(\alpha_{1t}^*\right)_{-}$.} \label{FIG1} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[clip,scale=0.5]{Figure2.eps} \end{center} \caption{(Color online) MCE under the application of increasing values of the magnetic field $H^*$ (denoted by an arrow). (a) Isothermal entropy change at $P=0$, (b) adiabatic temperature change at $P=0$, (c) and (d) isothermal entropy change at $P=0.05$ for $\alpha_1^*=1.10$ and $\alpha_1^*=-1.10$, respectively.} \label{FIG2} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip,scale=0.5]{Figure3.eps} \caption{(Color online) BCE for two representative values $\alpha_1^*=1.10$ (left column) and $\alpha_1^*=-1.10$ (right column), for increasing values of the applied pressure $P$. Panels (a) and (b) display the isothermal entropy change at $H^*=0$, (c) and (d) the corresponding adiabatic temperature shift, and (e) and (f) the isothermal entropy change at $H^*=0.02$.} \label{FIG3} \end{figure} The results for the BCE are shown in Fig. \ref{FIG3}. As expected, one obtains different behaviors depending on the sign of $\alpha_1^*$.
Essentially, for $\alpha_1^*$=1.10 the BCE is inverse, whereas for $\alpha_1^*$=-1.10 it is conventional. Consequently, the entropy increases (a) or decreases (b) when the pressure is applied isothermally. Likewise, the system cools down (c) or warms up (d) when the pressure is applied adiabatically. The effect of $H^*$ on the BCE is shown in the lower panels of the same figure for $\alpha_1^*=1.10$ (e) and $\alpha_1^*=-1.10$ (f). As can be observed, the application of the secondary field $H^*$ shifts the caloric response towards higher temperatures and reduces the peak, regardless of the sign of $\alpha_1^*$. This reflects the natural tendency of the external field $H^*$ to promote the (ordered) $\cal{F}$-phase, which has the lower entropy. In summary, the effect of increasing $H^*$ on the BCE is to reach higher operating temperatures at the expense of reducing the caloric response. \section{The Metamagnetic Transition} \label{Inversion} The model for the metamagnetic transition corresponds to switching on the parameter $\alpha_2^*$ in the free-energy model defined in Eq. (\ref{EQ5}). Notice that this parameter gives rise to an inversion of the effective exchange constant that renders the $\cal{AF}$ phase stable at low temperatures. It is worth mentioning that the importance of magnetostriction in the occurrence of the $\cal{F} \rightarrow \cal{AF}$ metamagnetic transition was first pointed out by Kittel \cite{Kittel1960}. For the following calculations we also take $|\alpha_1^*|=1$ in order to favor discontinuous transitions. In that case, both order parameters, $x$ and $m$, may be different from zero, and the variation of the volume $w$ will depend on the signs of both $\alpha_1^*$ and $\alpha_2^*$. Standard numerical minimization of the reduced Gibbs free energy (\ref{EQ5}) predicts the occurrence of an antiferromagnetic $\cal{AF}$-phase at low temperatures, as can be seen in the phase diagram shown in Fig. \ref{FIG4}(a).
In this figure we have plotted the behavior (in the absence of external fields) of the different transition temperatures as a function of the coupling parameter $\alpha_2^*$, restricted to positive values for the sake of clarity \footnote{The phase diagram in the region $\alpha_2^*<0$ is the mirror image with respect to $\alpha_2^*=0$.}: the Curie temperature ${T^*}_{C}$ ($\cal{P}$-$\cal{F}$), the N\'eel temperature ${T^*}_{N}$ ($\cal{P}$-$\cal{AF}$) and the metamagnetic transition temperature ${T^*}_{M}$ ($\cal{AF}$-$\cal{F}$). The $\cal{AF}$-phase exists only for values of the coupling parameter $\alpha_{2}^{*}>\alpha_{2c}^{*}$, where $\alpha_{2c}^{*}$ satisfies\footnote{It can be easily derived by imposing that the energies of the ${\cal F}$ and ${\cal AF}$ phases be equal at $T=0$ K.}: \begin{equation} (\alpha_{2c}^*)^2-(\alpha_{1}^*)^2- 4 \Omega^* P (\alpha_{2c}^*-\alpha_{1}^*)-8 \alpha_0^* (1+H^*)=0. \label{EQ13} \end{equation} The temperature range over which the $\cal{AF}$-phase exists increases with the coupling strength $\alpha_{2}^*$. There is a particular value, $\alpha_{2t}^*$, at which the three phases $\cal{P}$, $\cal{F}$ and $\cal{AF}$ coexist. Thus, for $\alpha_{2c}^*<\alpha_2^*<\alpha_{2t}^*$ the model predicts two consecutive phase transitions, whereas for $\alpha_2^*>\alpha_{2t}^*$ the $\cal{F}$-phase disappears and the model exhibits a unique $\cal{P}$-to-$\cal{AF}$ phase transition at ${T^*}_{N}$. Let us focus on the region of the phase diagram where the metamagnetic transition exists and take $|\alpha_2^*|=3.05$ (green line in Fig. \ref{FIG4}(a)). In the lower panels of Fig. \ref{FIG4} we show the corresponding behavior of both transition temperatures, ${T^*}_{C}$ and ${T^*}_{M}$, with applied external pressure $P$, for (b) $\alpha_2^*=3.05$ and (c) $\alpha_2^*=-3.05$. In both cases we explicitly distinguish between $\alpha_1^*=1$ (blue) and $\alpha_1^*=-1$ (red).
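Equation (\ref{EQ13}) is quadratic in $\alpha_{2c}^*$ and can be solved directly; as a minimal sketch (in the reduced units of the text, $\alpha_0^*=\Omega^*=1$), note that at $P=H^*=0$ and $\alpha_1^*=1$ it yields $\alpha_{2c}^*=3$, so the working value $|\alpha_2^*|=3.05$ sits just above the threshold for the existence of the $\cal{AF}$-phase:

```python
import numpy as np

def alpha2_critical(a1=1.0, P=0.0, H=0.0, a0=1.0, Omega=1.0):
    """Largest root of Eq. (EQ13):
    a2^2 - 4*Omega*P*a2 + (4*Omega*P*a1 - a1^2 - 8*a0*(1 + H)) = 0."""
    coeffs = [1.0, -4.0 * Omega * P,
              4.0 * Omega * P * a1 - a1**2 - 8.0 * a0 * (1.0 + H)]
    return max(np.roots(coeffs).real)
```

Increasing the secondary field $H^*$ raises this threshold, consistent with the stabilization of the ${\cal F}$-phase by the magnetic field.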
One observes that whereas for $\alpha_2^*= 3.05$ the application of $P$ tends to suppress the ${\cal AF}$-phase rapidly, for $\alpha_2^*=- 3.05$ the application of $P$ definitely favors the ${\cal AF}$-phase. Interestingly, the behavior displayed in Fig. \ref{FIG4}(b) and (c) determines whether the BCE is conventional (transition temperature increasing with increasing $P$) or inverse (transition temperature decreasing with increasing $P$). \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure4.eps} \caption{(Color online) (a) Phase diagram of the exchange-inversion model in the region of positive $\alpha_{2}^*$ at $H^{*}=0$, $P=0$ and $\alpha_{1}^*=1$. The green dashed line denotes the value $\alpha_{2}^*=3.05$ used in the present calculations. The lower panels show the pressure dependence of the corresponding transition temperatures $T_{C}^*$ and $T_{M}^*$ for $\alpha_{2}^*=3.05$ (b) and $\alpha_{2}^*=-3.05$ (c). Results are shown separately for $\alpha_{1}^*=1$ (blue) and $\alpha_{1}^*=-1$ (red).} \label{FIG4} \end{figure} In Fig. \ref{FIG5} we show the MCE at different values of $H^*$ ranging from $H^*=0$ to $H^*=0.04$. The increasing stability of the ${\cal F}$-phase is reflected in the decrease of $T_{M}^*$ and the simultaneous increase of $T_{C}^*$ with increasing $H^*$. In connection with this, near the ${\cal P}$-to-${\cal F}$ transition ($T_{C}^*$) the MCE is conventional, while at lower temperatures, around the ${\cal F}$-to-${\cal AF}$ transition ($T_{M}^*$), it is inverse. Moreover, the conventional MCE peak increases with $H^*$, whereas the inverse MCE peak decreases. This apparent contradiction in the behavior of the inverse MCE around the $\cal{F}$-to-$\cal{AF}$ transition stems from the opposite effect that the application of $H^*$ has on the entropies of the $\cal{F}$- and $\cal{AF}$-phases.
In this sense, the model predicts a sharp suppression of the $\cal {AF}$-phase that hinders a further increase of the entropy with increasing $H^*$. To complete the discussion of the MCE, it is worth mentioning that, under adiabatic conditions, the system will first warm up (at high temperatures) and then cool down (at low temperatures) upon application of the external field $H^*$. The effect of an external pressure $P$ on the MCE is displayed in Fig. \ref{FIG6}, where we have plotted the corresponding isothermal entropy change at $P=0.015$ for selected values of the applied magnetic field, ranging from $H^*=0$ to $H^*=0.05$. Results have been calculated for the values $\alpha_{2}^*= \pm 3.05$ and $\alpha_{1}^* = \pm 1$ considered previously. In general, the effect of the secondary field is a temperature shift of the corresponding caloric peak. As already mentioned, such a displacement along the temperature axis should be consistent with the behavior of the transition temperatures displayed in Figs. \ref{FIG4}(b) and \ref{FIG4}(c). Indeed, an inspection of Fig. \ref{FIG6} reveals that for $\alpha_2^*=3.05$ the effect of $P$ on the inverse MCE peak (around $T_{M}^*$) is a shift to lower temperatures and a decay of the response, whereas for $\alpha_2^*=-3.05$ the response is enhanced and shifted to higher temperatures. Notice that for $\alpha_{1}^*=-1$ the effect is dramatic, since the application of $P$ induces a further promotion of the $\cal{F}$-phase. The behavior of the conventional MCE around $T_{C}^*$ has already been discussed above. Similarly, the temperature shift of the peaks follows the trends described in Fig. \ref{FIG4}(b) and (c). \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure5.eps} \caption{(Color online) MCE for different values of the external magnetic field ranging from $H^*=0$ to $H^*=0.05$.
$T_{C}^*$ and $T_{M}^*$ denote the ${\cal P}$-to-${\cal F}$ and ${\cal F}$-to-${\cal AF}$ transition temperatures, respectively. As usual, the arrow denotes the direction of increasing $H^*$.} \label{FIG5} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure6.eps} \caption{(Color online) $P$-tuned MCE for values of the applied external field ranging from $H^*=0$ to $H^*=0.05$ and $P=0.015$. Results are displayed distinctly for $\alpha_{1}^*=\pm 1$ and $\alpha_{2}^*=\pm 3.05$.} \label{FIG6} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure7.eps} \caption{(Color online) $\Delta S_{BCE}$ for selected values of the applied pressure ranging from $P=0$ to $P=0.035$. Results are displayed distinctly for $\alpha_{1}^*=\pm 1$ and $\alpha_{2}^*= \pm 3.05$.} \label{FIG7} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure8.eps} \caption{(Color online) $\Delta S_{H-BCE}$ for increasing values of the applied pressure from $P=0$ to $P=0.035$ and $H^*=0.02$. Results are displayed distinctly for $\alpha_{1}^*=\pm 1$ and $\alpha_{2}^*= \pm 3.05$.} \label{FIG8} \end{figure} The results for the BCE are shown in Fig. \ref{FIG7} for the same values of the coupling parameters. The isothermal entropy change $\Delta S_{BCE}$ is displayed for selected values of the applied pressure ranging from $P=0$ to $P=0.035$. Whereas the character (inverse or conventional) of the high-temperature peak around the $\cal{P}$-$\cal{F}$ transition depends on the sign of $\alpha_{1}^*$, $\alpha_{2}^*$ determines the character of the low-temperature peak around the metamagnetic transition. Thus, for $\alpha_{2}^*=3.05$ (panels (a) and (b)) the BCE around $T_{M}^*$ is inverse, due to the suppression of the ${\cal AF}$-phase with increasing applied pressure $P$ (Fig. \ref{FIG4}(b)).
Similarly, in the case of $\alpha_{2}^*=-3.05$ the low-temperature BCE is conventional. Furthermore, with increasing $P$, the BCE peak gets larger for $\alpha_{1}^*=1$ and smaller for $\alpha_{1}^*=-1$. This is due to the fact that for positive $\alpha_{1}^*$ the application of $P$ favors the disordered $\cal{P}$-phase (to the detriment of the $\cal{F}$-phase, with larger volume), while for negative values of $\alpha_{1}^*$ the ordered $\cal{F}$-phase is promoted. Concerning the second BCE peak, around $T_{C}^*$, it is inverse for $\alpha_{1}^*=1$ and conventional for $\alpha_{1}^*=-1$, consistent with the behavior of $T_{C}^*$ vs. $P$ shown in figures \ref{FIG4}(b) and \ref{FIG4}(c). To complete this section, in the different panels of figure \ref{FIG8} we have included the effect of the secondary field ($H^*=0.02$) on the previous BCE. A simple comparison with Fig. \ref{FIG7} reveals that the effect of applying a magnetic field is to move the two peaks apart and simultaneously to decrease the caloric response, regardless of the model parameters. In summary, the application of $H^*$ systematically reduces the BCE response and increases the stability of the $\cal{F}$-phase. \section{Relation to Experiments} \label{Discussion} In this section we analyze the previous theoretical results in relation to the different caloric behaviors observed in magnetic and metamagnetic materials for which experimental data are available. We stress that a discussion of the physical origin or mechanism behind the magnetoelastic coupling is beyond the scope of this work. Rather, we shall only require that the observed magnetic phase transition be accompanied by some volume anomaly. Below, we appraise our model predictions, namely the phase diagram and the caloric responses, by comparing them with experiments on two potential magnetic refrigerant materials, $La_{(1-x)}Ca_{x} MnO_{3}$ and $FeRh$.
Qualitative information regarding general aspects, such as whether the caloric effect is conventional or inverse and the behavior (i.e., the temperature shift) of the caloric peak under the application of a secondary field, can be inferred directly from the phase diagram. Even so, the maximum value of the caloric response, either in $\Delta S_{T}$ or in $\Delta T_{a}$, might depend on other aspects or contributions not properly described (or not described at all) by the model. \subsection{The $La_{(1-x)}Ca_{x} MnO_{3}$ CMR system} Some years ago, considerable attention was devoted to the study of the $La_{(1-x)}Ca_{x} MnO_{3}$ perovskite because of the unexpectedly large magnetoresistance observed at low temperatures \cite{Jim1994}. As a function of temperature and doping ($x$), this material shows different magnetic transitions \cite{Schiffer1995}. Upon lowering the temperature, it exhibits a $\cal{P} \rightarrow \cal{F}$ transition for $x \leq 0.50$ and a $\cal{P} \rightarrow \cal{AF}$ transition for $x \geq 0.5$. From the point of view of the present model, such different magnetic behavior can be taken into account by recalling that both coupling coefficients, $\alpha_{1}^*$ and $\alpha_{2}^*$, are composition dependent. In figure \ref{FIG9} we present the results obtained for the transition temperature, assuming a quadratic dependence on doping for both coefficients. The present numerical results (denoted by a continuous line) have been obtained by taking $\alpha_{1}^*= 24.22 (x-0.38)^2-2.5$ and $\alpha_{2}^*=-22.45 (x-0.64)^2 +3.8$. The corresponding estimate of the exchange constant is $zJ=16.6$ meV, close to the values (6.6--10.7) meV reported in the literature \cite{Nicastro2002}. The model results are compared with available experimental data taken from different authors, as indicated in the inset. The agreement is remarkable.
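The fitted doping dependence quoted above is easy to tabulate; a small helper (hypothetical name, with the coefficients taken verbatim from the fit above):

```python
def coupling_coefficients(x):
    """Doping dependence of the reduced magnetoelastic couplings used in
    the fit of the La_{1-x}Ca_x MnO_3 phase diagram (quadratic in the
    Ca content x)."""
    alpha1 = 24.22 * (x - 0.38) ** 2 - 2.5
    alpha2 = -22.45 * (x - 0.64) ** 2 + 3.8
    return alpha1, alpha2
```

By construction the extrema reproduce the fitted extreme values, $\alpha_{1}^*(0.38)=-2.5$ and $\alpha_{2}^*(0.64)=3.8$.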
The unusual deviation around $x \sim 30\%$ is attributed \cite{DeTeresa1996,sun1,sun2,Zhang1996} to differences in the sample-preparation method. Interestingly, close to $x \sim 0.50$ the ground state changes from $\cal{F}$ to $\cal {AF}$, although the direct $\cal{F} \rightarrow \cal{AF}$ metamagnetic transition (if possible at all) would be restricted to a very narrow interval of values of $x$. In fact, metamagnetic transitions in $La_{(1-x)}Ca_{x} MnO_{3}$ have only been reported under the application of (low) external magnetic fields \cite{Ulyanov2008}. \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure9.eps} \caption{(Color online) Phase diagram for $La_{1-x}Ca_{x}MnO_{3}$ as a function of the Ca content ($x$). The continuous line denotes the present numerical results, obtained by assuming a quadratic dependence of the coupling parameters $\alpha_{1}^*$ and $\alpha_{2}^*$ on the doping $x$. Points correspond to experimental data from the different authors indicated in the inset.} \label{FIG9} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure10.eps} \caption{(Color online) Effect of pressure on $\Delta S_{MCE}$ in $La_{0.69}Ca_{0.31}MnO_{3}$. (a) corresponds to experimental data taken from Ref. \cite{sun1} and (b) displays the present numerical results for the same values of applied fields and pressures.} \label{FIG10} \end{figure} Moreover, perovskite manganites show a strong spin-lattice coupling \cite{Guo1997}, which makes the study of pressure effects on their magnetic behavior of particular interest. In fig. \ref{FIG10} we show the effect of pressure on the MCE in $La_{0.69}Ca_{0.31} MnO_{3}$. Panel (a) displays the experimental data \cite{sun1} for different values of the applied magnetic field and for two values of the applied pressure, $P\eqsim 0$ (ambient pressure) and $P= 1.1$ GPa.
In panel (b) we have plotted the present numerical results, obtained for $\alpha_1^*= -2.38$ and $\alpha_2^*=1.36$ and for the same values of the fields as in panel (a). We obtain an estimate for the volume of the unit cell of $\Omega \eqsim 29(\AA)^3$, one half of the experimental value \cite{Radaelli1995,Nicastro2002} ($\sim 60(\AA)^3$) but of the right order of magnitude. Before continuing with the discussion of fig. \ref{FIG10}, let us point out that for these values of the coupling coefficients, $|\alpha_{2}^*|<|\alpha_{2c}^*|$ (defined in eq. (\ref{EQ13})). The model predicts a unique $\cal{P} \rightarrow \cal{F}$ transition at a temperature that increases with both applied magnetic field and pressure. In this situation the parameter $\alpha_2^*$ is irrelevant, and consequently the description can be carried out by means of the simplified model defined in section \ref{Ferro}, with $\alpha_1^*<0$. Indeed, our results predict that both the MCE (fig. \ref{FIG2} (a) and (b)) and the BCE (fig. \ref{FIG3} (b) and (d)) are conventional, with the behavior under external fields given in figures \ref{FIG2}(d) and \ref{FIG3}(f). We now return to figure \ref{FIG10}. A simple inspection reveals that in this material the main effect of pressure on the MCE is a shift of the whole response (almost unaltered) to higher temperatures, consistent with the increasing stability of the $\cal{F}$-phase (with lower volume) under $P$. In conclusion, the model is able to reproduce the general experimental trends. Nevertheless, although $\Delta S_{MCE}$ has the right order of magnitude, it is underestimated by roughly a factor of two. We attribute this to other entropy contributions, mainly electronic, not considered in the present model. Very briefly, we would like to mention that similar behavior is observed in $La(Fe_{x}Si_{1-x})_{13}$-type compounds.
The field-induced first-order $\cal P$-to-$\cal F$ phase transition upon lowering the temperature is accompanied by a significant isotropic volume expansion, and the application of an external $P$ reduces the Curie temperature \cite{Lyubina2008}. Again, the description of the general trends can be carried out with the simplified model (\ref{EQ9}), but now with $\alpha_1^* >0$. It has been reported that the MCE is conventional \cite{Fujita12003,Hu1,Manosa2011} whereas the BCE is inverse \cite{Manosa2011}. Indeed, the results shown in figures \ref{FIG2}(a) and \ref{FIG3}(a) are consistent with such experimental behavior. Additionally, the tuning of the MCE by an external pressure shifts the whole caloric effect towards lower temperatures \cite{Lyubina2008}, and the BCE exhibits a negative adiabatic temperature change \cite{Manosa2011}. These trends are reproduced in figures \ref{FIG2}(c) and \ref{FIG3}(c). \subsection{The FeRh metamagnetic alloy} The $B2$-ordered near-equiatomic $Fe_{1-x}Rh_{x}$ alloy displays a metamagnetic transition from an $\cal{AF}$ ground state to a $\cal{F}$-phase with increasing temperature. It occurs around $T \sim 320$ K and is accompanied by a $1\%$ volume increase of the unit cell that preserves the cubic symmetry \cite{Kouvel1966,Nikitin1992,Gruner2003,Barua2013}. This singular transition is strongly concentration dependent \cite{Staunton2014} and is only present in a very narrow composition range ($0.48 \leq x \leq 0.52$) \cite{Barua2013}. Additionally, it also depends on heat treatment \cite{Annaorazov1996}, configurational ordering \cite{Staunton2014,Sandratskii2011} and external fields. Of special interest is the study of pressure effects on the magnetic behavior \cite{Ponyatovskii1968,Heeger1970,Dubovka1974,Vinokurova1976,Gruner2005}.
The $\cal{F}$-phase, lying between the $\cal{P}$-phase (at high temperatures) and the $\cal{AF}$-phase (at low temperatures), exists only for values of the applied pressure below $\sim 6$ GPa (the tricritical pressure). For higher pressures the metamagnetic transition disappears and the $\cal{AF}$-phase transforms directly into the $\cal{P}$-phase. In Fig. \ref{FIG11} we show the $P$-$T$ phase diagram for the nominally equiatomic $Fe_{(1-x)}Rh_x$ ($x \eqsim 0.5$). Continuous (blue) lines denote the present results for $\alpha_1^*= 1$ and $\alpha_2^*=-3.0$ (see fig. \ref{FIG4}), whereas points correspond to experimental data taken from the different authors indicated in the inset. The fit to the experimental data yields the estimates $zJ=5.66$ meV and $\Omega=36(\AA)^3$, comparable to recently reported values for the exchange constant \cite{Kudrnovsky2015} and the lattice parameter \cite{Barua2013}, respectively. The misfit between theory and experiment ($\sim 10 \%$ around the tricritical point) is partially due to the greater weight given in the fitting procedure to the behavior close to $P=0$. In spite of this, we conclude that the agreement is satisfactory. \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure11.eps} \caption{(Color online) $P$-$T$ phase diagram for the equiatomic $FeRh$ alloy. The present numerical results (blue line) are compared with available experimental data taken from the different authors indicated in the inset.} \label{FIG11} \end{figure} \begin{figure}[ht] \centering \includegraphics[clip, scale=0.5]{Figure12.eps} \caption{(Color online) Adiabatic temperature change in the MCE at $H=2.1$ T in the FeRh alloy. Symbols correspond to experimental data from Ref.
(27) whereas the continuous line indicates the present results.} \label{FIG12} \end{figure} Concerning the caloric behavior of the $FeRh$ metamagnetic alloy near the ${\cal F}$-${\cal AF}$ transition, experiments show that the MCE is inverse \cite{Annaorazov1996,Stern2014} while the BCE is conventional \cite{Stern2014}. Under the application of an external pressure this transition temperature increases while the Curie temperature decreases \cite{Heeger1970}. This scenario is reproduced by the present theoretical predictions shown in figures \ref{FIG6}(c) and \ref{FIG7}(c). In Figure \ref{FIG12} we show the cooling by adiabatic magnetization (as expected for an inverse MCE) observed near the metamagnetic transition in $FeRh$. Experiments \cite{Annaorazov1996} are denoted by symbols, whereas the continuous line corresponds to our results. The latter have been obtained from the entropy curves by requiring that $S(T_f, H=2.1\,{\rm T},P=0)=S(T,0,0)$ and using $\Theta_D=400$ K. Experimentally, the maximum cooling at $H=2.1$ T is $\Delta T_{exp}=-8$ K, whereas we obtain $\Delta T= -4.9$ K. Unfortunately, the present model predicts a value for the entropy change (either in the MCE or the BCE) in $FeRh$ one order of magnitude below the experimental value \cite{Stern2014,Annaorazov1996} ($|\Delta S_{exp}|= 12$ J K$^{-1}$kg$^{-1}$). This is principally due to the subtle balance between the different entropy contributions \cite{Cooke2012}, $\Delta S_{mag}$ (magnetic), $\Delta S_{v}$ (lattice) and $\Delta S_{elec}$ (electronic), and the crucial role played by the latter in ensuring a large enough value for the total entropy change. Although still under debate, it is accepted that in $FeRh$ the $\cal{AF} \rightarrow \cal{F}$ metamagnetic transition is driven by an excess of electronic and magnetic entropy, while the lattice opposes the transition. Roughly speaking, $\Delta S_{v} \eqsim -70 \%\,\Delta S_{mag}$ and $\Delta S_{elec}$ represents $40\%$ of the total $\Delta S$.
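Returning briefly to the construction of the theoretical curve in Fig. \ref{FIG12}: the adiabatic temperature change follows from inverting tabulated entropy curves, $S(T_f,H)=S(T_i,0)$. A sketch of this inversion (assuming entropy curves tabulated on a common temperature grid and monotonically increasing in $T$; the names are illustrative):

```python
import numpy as np

def adiabatic_delta_t(T, S_zero, S_field):
    """For each initial temperature T[i], solve S_field(T_f) = S_zero(T[i])
    by interpolating the inverse of the tabulated entropy curve S_field(T)
    (assumed monotonically increasing), and return Delta T_ad = T_f - T."""
    T_f = np.interp(S_zero, S_field, T)   # inverse interpolation: S -> T
    return T_f - T
```

Note that `np.interp` clamps at the ends of the tabulated range, so the result is only meaningful for initial temperatures whose entropy lies inside the range of the in-field curve.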
This balance between the different contributions makes our model, which does not consider the electronic contribution, unable to yield a reasonable value of the entropy change in this material. Nevertheless, it predicts quite an acceptable value for the adiabatic temperature change, owing to the satisfactory description of the $P$-$T$ phase diagram. In this regard, it should be mentioned that completely adiabatic conditions are very difficult to achieve experimentally. Finally, let us note a recent study \cite{Barua2013} aimed at uncovering magnetostructural trends in FeRh-based alloys. In particular, the behavior of the transition temperature as a function of the number of valence electrons per atom seems to confirm the importance of electronic effects on the transition. These results also seem to indicate that magnetovolumic effects are not essential for the transition, although they are crucial in stabilizing the low-temperature $\cal{AF}$-phase. \section{Conclusions} \label{conclusions} We have presented a mean-field Landau-based model for phase transitions that captures the main ingredients necessary to reproduce the phase diagram and the general trends of the experimental caloric behavior observed in magnetoelastic materials in response to external fields, either a magnetic field, hydrostatic pressure, or both. In particular, we have applied the results to the $LaCaMnO_3$ perovskite and to the $FeRh$ metamagnetic alloy. These materials are very different but share the common feature of undergoing a magnetic phase transition accompanied by magnetoelastic effects. This is enough for the model to reproduce both phase diagrams at a very good level of agreement with experiment. The main limitation of the model is its failure to predict the correct order of magnitude of the entropy change at the metamagnetic transition.
Apparently, this is due to the fact that it includes the magnetic degrees of freedom only, disregarding the role of the electronic contribution, which in this material turns out to be very important. Concerning the lattice contribution, it plays the role of a thermal bath in the adiabatic caloric process. In this sense, Gruner {\it et al.}, by performing Monte Carlo simulations of a spin-based model extended to include magnetovolumic effects, were able to obtain a value for the entropy change within the range of the experimental results \cite{Gruner2003}. This could be indicative of the importance of fluctuations in the occurrence of metamagnetic transitions. Additionally, the coupling coefficients can be evaluated from first-principles calculations, thus providing estimates independent of the model. \begin{acknowledgments} This work has received financial support from CICyT (Spain), Project No. MAT2013-40590-P. One of us (E.M.) thanks the Spanish Ministry of Education, Culture and Sports for the fellowship for collaboration with the Dept. d'Estructura i Constituents de la Materia (UB) during his final year as an undergraduate student in Physics. \end{acknowledgments}
\section{Introduction} Graph states \cite{BriegelRaussendorf2001-FirstGrSt, RaussendorfBriegel2001-05,RaussendorfBriegel2002-10,RaussendorfBrowneBriegel2003-08,RaussendorfPHDThesis2003,HeinEisertBriegel2004-06, GraphStateReviews2006} represent specific multipartite entangled quantum systems. They are an important resource for measurement-based quantum computation: there, the multipartite entanglement of cluster states (a special class of graph states) is consumed by local measurements on subsystems. Depending on the measurement outcomes, local unitary transformations of the remaining systems are performed. In this way, certain quantum operations can be implemented. \par Graph states can be represented in the stabilizer formalism as eigenstates of certain tensor products of Pauli $\sigma_X$- and $\sigma_Z$-operators (the graph state stabilizers). The explicit structure of the stabilizer operators depends on the structure of the underlying graph. The stabilizers form a group (under multiplication), which is generated from $n$ generators, where $n$ is the number of vertices of the graph. \par In this paper, we will discuss what we call X-chains. X-chains are subsets of vertices of a given graph which correspond to graph state stabilizers that consist \emph{only} of Pauli $\sigma_X$-operators. We will show that these X-chains form a group. Not every graph contains an X-chain. However, it will be shown that if a graph does contain X-chains, this fact can be used as an efficient tool to determine essential properties of the corresponding graph state, such as its overlap with other graph states, its entanglement characteristics, and the existence of error-correcting codewords in subsystems of graph states. Note that the overlap of two graph states cannot be determined efficiently to date. The X-chains provide an efficient method to solve this problem.
\par While graph states are usually given in the Z-basis, the concepts and methods developed in this paper show that it is often favorable to represent graph states in the X-basis, in particular when one wants to study overlaps of graph states or determine their entanglement properties. The reason for this fact is that for all graph states originating from the same number of vertices, the probability distribution of outcomes of local Z-measurements is uniform, while it is nonuniform for outcomes of local X-measurements. Differing X-measurement outcome distributions of two graph states reflect the difference in their X-chain groups, as the existence of an X-chain in a graph state implies vanishing probability of certain X-measurement outcomes. Conversely, the X-chain groups of graph states determine their representation in the X-basis. \par In the present paper we focus on introducing the concept of X-chains, illustrating it with examples, and presenting some applications. The X-chain group of a given graph state can be efficiently determined; the search for X-chains in a given graph state will be studied in detail elsewhere \cite{WuKampermannBruss2015-XChainAlgo}, and a MATHEMATICA package is available in the Supplemental Material \cite{XchainMpackage_Wu}. \par This paper is organized as follows. In section \ref{sec::basic_concepts}, we review the essential concepts of graph theory and graph states. In section \ref{sec::representation_of_graph_states}, we review the representation of graph states in the Z-basis and point out its disadvantage in distinguishing graph states. Then we introduce X-chains and study their properties in section \ref{sec::X-chains}. The representation of graph states in the X-basis is derived via the so-called X-chain factorization in section \ref{sec::X-chain_factorization_of_graph_states}, where we show how the X-chain groups determine the X-measurement outcomes on graph states.
In section \ref{sec::application_of_X-chains}, we discuss several applications of X-chains, namely the calculation of the overlap of two graph states (section \ref{sec::graph_state_overlap}), the Schmidt decomposition of graph states in the X-basis (section \ref{sec::schmidt_decomposition}) and the entanglement localization \cite{PoppVMCirac2005-LocalizableEnt} of graph states against errors (section \ref{sec::unilateral_projection_against_errors}). The proofs are presented in Appendix \ref{apdx::proofs_for_X-chains}. A list of notations and symbols is given in Appendix \ref{apdx::list_of_notations}. \subsection{Basic concepts} \label{sec::basic_concepts} Here we review the concepts of graphs \cite{Diestel_GraphTheory} and graph states \cite{HeinEisertBriegel2004-06,GraphStateReviews2006}, and introduce the notation used in the main text.\par \bigskip \paragraph{Graph theory \cite{Diestel_GraphTheory}:} \hiddengls{graph} \hiddengls{vertexOfG} \hiddengls{edgeOfG} \hiddengls{neighborhoodV} \hiddengls{inducedG} A \emph{graph} $G=(V,E)$ consists of $n$ vertices $V$ and $l$ edges $E$. The \emph{vertices}, denoted by $V_{G}=\left\{ v_{1},...,v_{n}\right\} $, are depicted as dots and represent locations, particles etc. The \emph{edges}, denoted by $E_{G}=\left\{ e_{1},...,e_{l}\right\} $, describe a relation network between the vertices. A symmetric relation between two vertices $v_{1}$ and $v_{2}$, e.g. a two-way bridge between two islands, can be represented by the vertex set $e=\{v_{1},v_{2}\}$, which is called an \emph{undirected edge}. Let $\xi_{a},\xi_{b}\subseteq V_{G}$ be two subsets of $V_{G}$; then the edges between $\xi_{a}$ and $\xi_{b}$ are the edges $e=\{v_{a},v_{b}\}$ which have one vertex $v_{a}\in\xi_{a}$ and the other vertex $v_{b}\in\xi_{b}$. The set of these edges is denoted by $E_{G}(\xi _{a}:\xi_{b})$. A vertex $v_{1}$ is a \emph{neighbor} of $v_{2}$ if they are connected by an edge.
The set of all neighbors of $v$, called the neighborhood of $v$, is denoted as $N_{v}$. In Table \ref{table::different_types_of_graphs} we list two of the relevant types of graphs, which will be considered in the main text. \begin{table}[t] \centering \input{tabular_different_types_of_graphs} \caption{The graphs considered in this paper.} \label{table::different_types_of_graphs} \end{table} \par A graph $F$ is a \emph{subgraph} of $G$, if its vertices and edges are subsets of the vertex set and the edge set of $G$, respectively, i.e., $V_{F}\subseteq V_{G}$ and $E_{F}\subseteq E_{G}$. A subgraph induced by a vertex set $\xi\subseteq V_{G}$ is defined as the graph \begin{equation} G[\xi]:=(\xi,E_{G}(\xi:\xi)), \end{equation} which has the edge set $E_{G}(\xi:\xi)$ consisting of edges between vertices inside the set $\xi$. \par \paragraph{Binary notation:} \hiddengls{iState} In this paper, we use binary numbers to denote a subset of vertices of graphs. Let $G$ be a graph with vertices $V_{G}=\left\{ 1,...,n\right\} $ and $\xi\subseteq V_{G}$ be a vertex subset. We denote the binary number of $\xi$ as% \begin{equation} i^{(\xi)}:=i_{1}\cdots i_{n}, \end{equation} with \[ i_{j}=\left\{ \begin{array} [c]{cc}% 0 & ,j\not \in \xi\\ 1 & ,j\in\xi \end{array} \right. . \] For example, in a 4-vertex graph, $0110=i^{\left\{ 2,3\right\} }$. The tensor product of Pauli-operators $\sigma_{\alpha}$ with $\alpha\in\{x,y,z\}$ is denoted as% \begin{equation} \sigma_{\alpha}^{(\xi)}:=\sigma_{\alpha}^{i_{1}^{(\xi)}}\otimes\cdots \otimes\sigma_{\alpha}^{i_{n}^{(\xi)}}, \end{equation} with $\sigma_{\alpha}^{0}=\id,\sigma_{\alpha}^{1}=\sigma_{\alpha}$. For example, for $n=4$, $\sigma_{\alpha}^{\left\{ 2,3\right\} }:=\id\otimes\sigma_{\alpha}\otimes\sigma_{\alpha}\otimes\id$.
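The binary notation translates directly into code; a minimal sketch (hypothetical helper, using 1-based vertex labels as in the text):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli_on(op, xi, n):
    """sigma_alpha^(xi): tensor product placing `op` on the qubits in the
    vertex subset xi (1-based labels, matching the binary notation i^(xi))
    and the identity on all other qubits."""
    out = np.array([[1.]])
    for j in range(1, n + 1):
        out = np.kron(out, op if j in xi else I)
    return out
```

For instance, `pauli_on(X, {2, 3}, 4)` reproduces the example $\sigma_{X}^{\{2,3\}}=\id\otimes\sigma_{X}\otimes\sigma_{X}\otimes\id$.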
\section{Representation of graph states} \label{sec::representation_of_graph_states} \hiddengls{gStateGenerator} \hiddengls{inducedS} \hiddengls{stabGroup} \hiddengls{powerset} We review the representation of graph states \cite{HeinEisertBriegel2004-06,GraphStateReviews2006}. A given graph with $n$ vertices corresponds to a quantum state, obtained by associating with each vertex $v_i$ a graph state \emph{stabilizer generator} $g_{i}$, \begin{equation} g_{i}=\sigma_{X}^{(i)}\sigma_{Z}^{(N_{i})}. \end{equation} Here, $N_{i}$ is the neighborhood of the vertex $v_i$. A graph state $|G\rangle$ is the $n$-qubit state stabilized by all $g_{i}$, i.e., \begin{equation} g_{i}|G\rangle=|G\rangle,\text{ for all }i=1,...,n. \end{equation} The $n$ graph state stabilizer generators, $g_{i}$, generate the whole stabilizer group $\left( \mathcal{S}_{G},\cdot\right) $ of $|G\rangle$ with multiplication as its group operation. The group $\mathcal{S}_{G}$ is Abelian and contains $2^{n}$ elements. These $2^{n}$ stabilizers uniquely represent a graph state on $n$ vertices. Let us define the ``induced stabilizer'', which is uniquely associated with a given vertex subset. \begin{definition}[Induced stabilizer]\label{def::induced_stabilizer} \hiddengls{inducedS} Let $G$ be a graph on vertices $V_{G}=\left\{ v_{1},v_{2},\cdots,v_{n}\right\} $. Let $\xi$ be a subset of $V_{G}$. We call the product of all $g_{i}$ with $i\in\xi$, i.e. \begin{equation} s_{G}^{(\xi)}:=\prod_{i\in\xi}g_{i}, \label{eq::induced_stabilizer}% \end{equation} the \emph{$\xi$-induced stabilizer} of the graph state $|G\rangle$. Here, $g_{i}$ is the graph state stabilizer generator of $|G\rangle$ associated with the $i$-th vertex.
\end{definition} This $\xi$-induction map is bijective, and it maps the group $\left( \mathcal{P}\left( V_{G}\right) ,\Delta\right) $ onto the stabilizer group $\left( \mathcal{S}_{G},\cdot\right) $, where $\mathcal{P}\left( V_{G}\right) :=\left\{ \xi\subseteq V_{G}\right\} $ is the power set (the set of all subsets) of $V_{G}$ and $\Delta$ is the symmetric difference operation acting on two sets as $\xi_1\Delta\xi_2=(\xi_1\setminus\xi_2)\cup(\xi_2\setminus\xi_1)$. \begin{proposition} [Isomorphism of $\xi$-induction]% \label{prop::isomorphism_of_vertex-induction_operation} \Needspace*{7\baselineskip} Let $\left( \mathcal{S}_{G},\cdot\right) $ be the stabilizer group of a graph state $|G\rangle$ and $\mathcal{P}(V_{G})$ be the power set of the vertex set of $G$. The vertex-induction operation $s_{G}^{(\xi)}$ is a group isomorphism between $(\mathcal{P}(V_{G}),\Delta)$ and $(\mathcal{S}_{G},\cdot)$, i.e.% \begin{equation} (\mathcal{P}(V_{G}),\Delta)\overset{s_{G}^{(\xi)}}{\sim}\left( \mathcal{S}% _{G},\cdot\right) , \end{equation} where $\Delta$ is the symmetric difference operation. \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{proposition} The summation operation maps the stabilizer group $\mathcal{S}_{G}$ to its stabilized space, i.e. the density matrix of the graph state $|G\rangle$ \cite{GraphStateReviews2006}, \begin{equation} \mathcal{S}_{G}\overset{\Sigma}{\longrightarrow}|G\rangle\langle G|=\frac {1}{2^{n}}\sum_{s\in \mathcal{S}_{G}}% s.\label{eq::graph_state_density_matrix_n_stabilizers}% \end{equation} Hence there also exists an operation mapping the group $\mathcal{P}(V_{G})$ to graph states% \begin{equation} \mathcal{P}(V_{G})\overset{\Sigma\circ s_{G}^{(\xi)}}{\longrightarrow }|G\rangle\langle G|=\frac{1}{2^{n}}\sum_{\xi\subseteq V_{G}}s_{G}^{(\xi)} =\prod_{i=1}^{n}\frac{1+g_{i}}{2}. \label{eq::stabilizer_projected_graph_state}% \end{equation} \par This is a well-known representation of graph states \cite{GraphStateReviews2006}.
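This product representation is straightforward to check numerically for small graphs. The following sketch (illustrative names, 0-based indexing, adjacency-matrix input) builds the generators $g_i=\sigma_X^{(i)}\sigma_Z^{(N_i)}$ and the projector $|G\rangle\langle G|=\prod_i(1+g_i)/2$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def stabilizer_generator(adj, i):
    """g_i: sigma_X on vertex i, sigma_Z on its neighbors (0-based),
    read off row i of the adjacency matrix `adj`."""
    n = len(adj)
    g = np.array([[1.]])
    for j in range(n):
        factor = X if j == i else (Z if adj[i][j] else I2)
        g = np.kron(g, factor)
    return g

def graph_state_projector(adj):
    """|G><G| = prod_i (1 + g_i)/2: the product of the commuting
    projectors onto the +1 eigenspaces of the generators."""
    n = len(adj)
    rho = np.eye(2 ** n)
    for i in range(n):
        rho = rho @ (np.eye(2 ** n) + stabilizer_generator(adj, i)) / 2
    return rho
```

Since the $g_i$ commute, the product is a rank-one projector with unit trace, and each generator leaves it invariant.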
The representation of a graph state in the computational Z-basis $|i_{Z}\rangle$ \cite{VandenNest2005-PhDthesis} is given by \begin{equation} \mathcal{P}(V_{G})\overset{\Sigma\circ s_{G}^{(\xi)}\text{ in }|i_{Z}\rangle }{\longrightarrow}|G\rangle=\frac{1}{2^{n/2}}\sum_{i\in\{0,1\}^{\otimes n}% }\left( -1\right) ^{\left\langle i,i\right\rangle _{A_{G}}}|i_{Z}% \rangle.\label{eq::graph_states_in_Z-basis}% \end{equation} Here $\sigma_{z}^{\otimes n}|i_{Z}\rangle=\left( -1\right) ^{\left\vert i\right\vert }|i_{Z}\rangle$, where $\left\vert i\right\vert $ is the Hamming weight of $i$. \hiddengls{adjMatG} $A_{G}$ is the adjacency matrix of the graph $G$, and $\braket{i,i}_{A_{G}}=(i_{1},...,i_{n})A_{G}(i_{1},...,i_{n})^{\mathrm{T}}% $. For all graph states with $n$ vertices, the probability amplitudes of Z-basis states $\langle i_{Z}|G\rangle$ are homogeneously distributed over all $|i_{Z}\rangle$ up to a sign, i.e. $|\braket{i_Z|G}|=1/2^{n/2}$. Therefore all graph states with the same vertex set have the same probability distribution of local $\sigma_Z$-measurement outcomes. This means that the Z-basis representation conceals the inner structure of graph states. \bigskip \par In contrast to the Z-basis, the representation of graph states in the computational X-basis $|i_{X}\rangle$ (i.e. $\sigma_{X}^{\otimes n}|i_{X}\rangle=\left( -1\right) ^{\left\vert i\right\vert }|i_{X}\rangle$) reveals the structure of graph states to a certain degree. One aim of this paper is to find an efficient algorithm, i.e. a mapping from $\mathcal{P}(V_{G})$ to $|G\rangle$, to represent graph states in the computational X-basis:% \begin{equation} \mathcal{P}(V_{G})\overset{?}{\longrightarrow}|G\rangle \text{ in }|i_{X}\rangle. \label{eq::aim_of_paper}% \end{equation} In the rest of the paper, we denote the X-basis $|i_{X}\rangle$ as $|i\rangle$; that is, $|0\rangle=|+_{Z}\rangle=(\ket{0_Z}+\ket{1_Z})/\sqrt{2}$ and $|1\rangle=|-_{Z}\rangle =(\ket{0_Z}-\ket{1_Z})/\sqrt{2}$.
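The Z-basis expansion above can be generated directly from the adjacency matrix. In the sketch below (illustrative names, 0-based indexing) the sign of each amplitude is taken as the parity of the number of edges inside the support of $i$, i.e. the quadratic form $\left\langle i,i\right\rangle _{A_{G}}$ with each edge counted once:

```python
import numpy as np
from itertools import product

def graph_state_vector(adj):
    """Z-basis amplitudes <i_Z|G> = (-1)^{q(i)} / 2^{n/2}, where q(i)
    is the number of edges of the subgraph induced by the support of i
    (each edge counted once, via the upper triangle of A_G)."""
    n = len(adj)
    upper = np.triu(np.array(adj), k=1)
    psi = np.empty(2 ** n)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        b = np.array(bits)
        q = b @ upper @ b                  # edges inside the support of i
        psi[idx] = (-1.0) ** q / 2 ** (n / 2)
    return psi
```

The bit $i_1$ is the most significant one, matching the tensor-product ordering in which qubit 1 is the leftmost factor.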
\section{X-chains and their properties} \label{sec::X-chains} The commutativity of the measurement setting with the graph state stabilizers determines whether one can obtain information about a graph state in the laboratory. The graph state stabilizers that commute with $\sigma_{X}$-measurements are the stabilizers consisting solely of $\sigma_{X}$-operators. They are the key ingredient in the representation of graph states in the X-basis. We will call the vertex sets $\xi$ inducing such configurations \emph{X-chains} of graph states. In this section, the concept of X-chains will be introduced and their properties will be investigated. \bigskip \par The number of $\sigma_Z$-operators in the graph state stabilizer $s_{G}^{(\xi)}$ depends on the neighborhoods within the vertex set $\xi$. If a vertex $v$ has an even number of neighbors within $\xi$, then the Pauli operator $\sigma_{Z}^{(v)}$ appears an even number of times in $s_{G}^{(\xi)}$, so that its product is the identity. Therefore, to find the X-chain configurations of graph states, one needs to study the symmetric difference of neighborhoods within the vertex set $\xi$, which we define as the \emph{correlation index} of $\xi$ as follows. \begin{definition}[Correlation index] \label{def::correlation_index} \Needspace*{6\baselineskip}% \hiddengls{corrIndex} Let $\xi$ be a vertex subset of a graph $G$. Its correlation index is defined as the symmetric difference of the neighborhoods within $\xi$,% \begin{equation} c_{\xi}:=N_{v_{1}}\Delta N_{v_{2}}\cdots\Delta N_{v_{k}},% \end{equation} where $N_{v_i}$ is the neighborhood of $v_i$ and $\xi=\{v_1,...,v_k\}$. \end{definition}% \noindent The name ``correlation index'' will become clearer in Theorem \ref{theorem::graph_states_as_X-factorized_states} and refers to the fact that for vanishing correlation index the corresponding stabilized state is factorized. (These states are called X-chain states in Def. \ref{def::X-chain_states_K-correlation_states}.)
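Computing the correlation index amounts to an iterated symmetric difference of neighborhood sets; a minimal sketch (hypothetical helper, tested below on the 3-vertex star graph $S_3$ with 1-based labels and vertex 1 as the center):

```python
def correlation_index(neighbors, xi):
    """c_xi = N_{v_1} Delta N_{v_2} Delta ... for the vertices in xi.
    `neighbors` maps each vertex label to its neighborhood set.
    An empty result means the induced stabilizer s_G^(xi) carries no
    sigma_Z factors, i.e. xi is an X-chain."""
    c = set()
    for v in xi:
        c ^= neighbors[v]          # symmetric difference of neighborhoods
    return c
```

For $S_3$ this reproduces, e.g., $c_{\{2,3\}}=\emptyset$ (an X-chain) and $c_{\{1\}}=\{2,3\}$.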
Note that the set $c_\xi$ occurs as an ``index'' for the $\sigma_Z$ operator of the induced stabilizer $s_G^{(\xi)}$ (see Proposition \ref{prop::induced_stabilizer_math_formula}). \par Besides the correlation index, due to the anticommutativity of $\sigma_{X}$ and $\sigma_{Z}$, the graph state stabilizers also depend on the so-called \emph{stabilizer parity} of $\xi$. \begin{definition}[Stabilizer parity] \label{def::stabilizer_parity} \Needspace*{6\baselineskip} \hiddengls{stabParity} Let $\xi$ be a vertex subset of a graph $G$. Its stabilizer parity in $\ket{G}$ is defined as the parity of the edge number $\left\vert E_{G[\xi]}\right\vert $ of the $\xi$-induced subgraph $G[\xi]$, \begin{equation} \pi_{G}\left( \xi\right) :=(-1)^{\left\vert E_{G[\xi]}\right\vert}. \label{eq::G-parity_formula} \end{equation} \end{definition} \noindent The stabilizer parity $\pi_{G}\left( \xi\right)$ is positive if the edge number $\left\vert E_{G[\xi]}\right\vert$ is even, and negative otherwise. The explicit form of the induced stabilizers is given in the following proposition. \begin{proposition} [Form of the induced stabilizer] \label{prop::induced_stabilizer_math_formula} \Needspace*{6\baselineskip} Let $\xi$ be a vertex subset of a graph $G$. The $\xi$-induced stabilizer (see Def. \ref{def::induced_stabilizer}) of a graph state $|G\rangle$ is given by \begin{equation} s_{G}^{(\xi)}=\pi_{G}\left( \xi\right) \sigma_{X}^{(\xi)}\sigma_{Z}^{(c_{\xi})},\label{eq::induced_stabilizer_math_formula} \end{equation} where $c_{\xi}$ is the \emph{correlation index} of $\xi$ and $\pi_{G}\left(\xi\right)$ is the stabilizer parity of $\xi$. \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{proposition} \begin{figure*}[t!]
\subfloat[]{ \includegraphics[width=0.15\textwidth]{S3_labeled} \label{fig::S3_labeled} } \subfloat[]{ \includegraphics[width=0.8\textwidth]{S3_correlation_index_and_X-resources_factorization} \label{fig::correlation_index_and_x-resource_InStr} \label{fig::S3_X-chain_illustation} } \newline \subfloat[]{ \begin{tabular}[c]{|c|c|c|c|c|} \toprule $\xi$ & $\emptyset$ and $\{2,3\}$ & $\{2\}$ and $\{3\}$ & $\{1\}$ and $\{1,2,3\}$ & $\{1,2\}$ and $\{1,3\}$ \\ \colrule $c_{\xi}\in\mathcal{C}_G$ & $\emptyset$ & $\{1\}$ & $\{2,3\}$ & $\{1,2,3\}$\\ \colrule $|E_{G[\xi]}|$ & $0$ & $0$ & $0$ and $2$ & $1$ \\ \colrule $\pi_{G}\left( \xi\right) $ & $1$ & $1$ & $1$ & $-1$ \\ \colrule $s_{G}^{(\xi)}$ & $\id$, $\sigma_{X}^{\{2,3\}}$ & $\sigma_{X}^{\{2\}}\sigma_{Z}^{\{1\}}$, $\sigma_{X}^{\{3\}}\sigma_{Z}^{\{1\}}$ & $\sigma_{X}^{\{1\}}\sigma_{Z}^{\{2,3\}}$, $\sigma_{X}^{\{1,2,3\}}\sigma_{Z}^{\{2,3\}}$ & $-\sigma_{X}^{\{1,2\}}\sigma_{Z}^{\{1,2,3\}}$, $-\sigma_{X}^{\{1,3\}}\sigma_{Z}^{\{1,2,3\}}$ \\ \colrule $\xi\in\langle\mathcal{K}_G\rangle$ & $\emptyset$ & $\{2\}$ & $\{1\}$ & $\{1,2\}$ \\ \colrule & \multicolumn{4}{c|}{ $\Gamma_G=\{\{2,3\}\}$, $\mathcal{K}_G=\{\{1\}, \{2\}\}$ }\\ \botrule \end{tabular} \label{fig::correlation_index_and_x-resource_table} } \caption{\colorfig Correlation indices and X-resources: (a) The $3$-vertex star graph. (b) The mapping from X-resources to correlation indices, illustrated in the incidence structure \cite{Rosen1999combinatorial} of the graph $S_3$. The upper row shows the correlation indices, while the lower row shows the vertex subsets (including the empty set). The arrows go from the vertex subsets $\xi$ in the lower row to the upper vertices corresponding to the nonzero entries of their correlation index $c_{\xi}$. For example, the vertex set $\{1,2,3\}$ points to the vertices $\{2,3\}$, indicating that the correlation index of $\{1,2,3\}$ is $c_{\{1,2,3\}}=\{2,3\}$. In particular, the vertex sets $\emptyset$ and $\{2,3\}$ are X-chains (see Def.
\ref{def::X-chain}), since their correlation index is $\emptyset$. The resources in the sets $\mathcal{X}_G^{(\emptyset)}=\{\emptyset,\{2,3\}\}$, $\mathcal{X}_G^{(\{1\})}=\{\{2\},\{3\}\}$, $\mathcal{X}_G^{(\{2,3\})}=\{\{1\},\{1,2,3\}\}$ and $\mathcal{X}_G^{(\{1,2,3\})}=\{\{1,2\},\{1,3\}\}$ are all ``connected'' by $\{2,3\}$ via the symmetric difference operation $\Delta$. (c) Grouping of vertex subsets according to the correlation index. $\Gamma_G$ and $\mathcal{K}_G$ are the X-chain group generators and correlation group generators, respectively. } \label{fig::correlation_index_and_x-resource} \end{figure*} Let us illustrate these concepts with an example. The star graph state $|S_{3}\rangle$ is shown in Fig. \ref{fig::S3_labeled}. Its stabilizers can be represented in the following binary matrix: \[ \left( \begin{array} [c]{ccc|ccc} 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0\\ 1 & 1 & 0 & 1 & 1 & 1\\ 1 & 0 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 1 & 1 \end{array} \right) , \] in which each row represents a stabilizer. The bit strings on the left-hand side of the divider are the possible vertex sets $\xi$, occurring as superscripts of the Pauli $\sigma_{X}$ operators in Eq. \eqref{eq::induced_stabilizer_math_formula}, while the bit strings on the right-hand side are the corresponding correlation indices $c_{\xi}$, occurring as superscripts of the Pauli $\sigma_{Z}$ operators. This is the so-called binary representation of graph states \cite{Gottesman1997-phdQEC, Gottesman1996-QECSQHB, CalderbankRSSloane1997-QECOrthGeo}. We interpret this binary representation as an incidence structure \cite{Rosen1999combinatorial} in Fig. \ref{fig::correlation_index_and_x-resource_InStr}, in which the vertex sets $\xi$ are depicted as the nodes in the lower row, while the upper row shows the correlation indices $c_{\xi}$.
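The correlation index and the stabilizer parity are straightforward to compute from the adjacency matrix. The following minimal Python sketch (helper names are ours; vertices are 0-based, so the paper's vertex $1$ is index $0$) reproduces entries of the table in Fig. \ref{fig::correlation_index_and_x-resource_table}:

```python
def neighbourhood(adj, v):
    """N_v: the set of neighbours of vertex v."""
    return {u for u in range(len(adj)) if adj[v][u]}

def correlation_index(adj, xi):
    """c_xi: symmetric difference of the neighbourhoods N_v for v in xi."""
    c = set()
    for v in xi:
        c ^= neighbourhood(adj, v)
    return c

def stabilizer_parity(adj, xi):
    """pi_G(xi) = (-1)^{|E_{G[xi]}|}: parity of the induced edge count."""
    edges = sum(adj[u][v] for u in xi for v in xi if u < v)
    return (-1) ** edges

# star graph S_3; index 0 is the centre (the paper's vertex 1)
S3 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
print(correlation_index(S3, {1, 2}))     # set(): {2,3} is an X-chain
print(correlation_index(S3, {0, 1, 2}))  # {1, 2}, i.e. the paper's {2,3}
print(stabilizer_parity(S3, {0, 1}))     # -1, matching the table
```

Together with Eq. \eqref{eq::induced_stabilizer_math_formula}, these two functions determine the full induced stabilizer $s_G^{(\xi)}$ of any vertex subset.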
In the example of $|S_{3}\rangle$, one observes that the correlation indices $c_{\xi}$ do not cover all possible $3$-bit binary numbers. The vertex subsets are regrouped according to their correlation indices in Fig. \ref{fig::correlation_index_and_x-resource_table}. The concept of regrouping is introduced via the definition of the so-called \emph{X-resources} as follows. \begin{definition} [X-resources of correlation indices] \label{def::X-resouces_of_correlation_indices} \Needspace*{8\baselineskip} We denote the set of \emph{correlation indices} of a graph $G$ as \[ \mathcal{C}_{G}:=\left\{ c_{G}(\xi):\xi\subseteq V_{G}\right\} , \] where $c_{G}(\xi)$ denotes the correlation index $c_{\xi}$ of Def. \ref{def::correlation_index}. If a vertex set $\xi$ has correlation index $c$, i.e. $c_{G}(\xi)=c$, then we call $\xi$ an \emph{(X-)resource of $c$-correlation} in $G$. The \emph{(X-)resource set of $c$-correlation} is written as \begin{equation} \mathcal{X}_{G}^{\left( c\right) }:=\left\{ \xi\subseteq V_{G}:c_{G}(\xi)=c\right\} . \end{equation} \hiddengls{corrIndexSet}\hiddengls{XResourceSet} \end{definition} \noindent Since in the example of $\ket{S_3}$ the correlation index of $\{2,3\}$ is $\emptyset$, each correlation index $c\in\mathcal{C}_{S_3}$ has two X-resources $\xi_1^{(c)}$ and $\xi_2^{(c)}$ with $\xi_1^{(c)}=\xi_2^{(c)}\Delta\{2,3\}$. The total number of X-resources of $\ket{S_3}$ is $2^3=8$. Therefore, the graph state $|S_{3}\rangle$ generates only $4$ distinct correlation indices, corresponding to $4$ of the $8$ possible binary numbers. The other $4$ binary numbers are excluded as correlation indices due to the existence of the non-trivial $\emptyset$-correlation resource $\{2,3\}$. This non-trivial $\emptyset$-correlation resource decreases the correlations of the graph state in the X-basis. Explicitly, a non-trivial $\emptyset$-correlation resource induces a stabilizer consisting solely of $\sigma_{X}$ operators, \[ s_{G}^{(\xi)}=\pi_{G}\left( \xi\right) \sigma_{X}^{(\xi)}\text{, for all }\xi\in\mathcal{X}_{G}^{\left( \emptyset\right) }. \] We will call such vertex sets \emph{X-chains}.
\begin{definition} [X-chains]\label{def::X-chain} \Needspace*{3\baselineskip} Let $|G\rangle$ be a graph state. An \emph{X-resource of $\emptyset$-correlation} in $G$ is called an \emph{X-chain} of $G$. The set of all X-chains is denoted as $\mathcal{X}_{G}^{\left( \emptyset\right) }$. \hiddengls{Xchain} \end{definition} The X-chains of a graph state $\ket{G}$ can be determined efficiently with standard linear algebra tools \cite{WuKampermannBruss2015-XChainAlgo}. As examples, the X-chains of the graph states $|S_{3}\rangle$, $|S_{4}\rangle$ and $|C_{3}\rangle$ are given in Table \ref{table::x-chain_group_of_graph_state_eg}. In addition, the X-chains of certain families of graph states (namely linear graph states $\ket{L_n}$, cycle graph states $\ket{C_n}$, complete graph states $\ket{K_n}$ and star graph states $\ket{S_n}$) are studied in \cite{WuKampermannBruss2015-XChainAlgo}. A MATHEMATICA package is provided for finding X-chains in general graph states; see Supplemental Material \cite{XchainMpackage_Wu}. \begin{table}[t] \input{tabular_examples_X-chains_n_scalar_product} \caption{X-chain groups of simple graphs: The directed graphs shown under the X-chains illustrate the criterion for X-chains. For each vertex selected in a vertex subset, one draws arrows from it to its neighbours. A vertex subset $\xi$ is an X-chain if and only if every vertex of the graph receives an even number of arrows. The X-chain groups $\braket{\Gamma_G}$ are generated by their generating sets $\Gamma_G$.} \label{table::x-chain_group_of_graph_state_eg} \end{table} \par We point out that the X-chains form a group under the symmetric difference operation. \begin{lemma} [X-chain groups and correlation groups] \label{lemma::X-chain_factorized_group} \Needspace*{15\baselineskip} Let $|G\rangle$ be a graph state.
The set of X-chains together with the symmetric difference, $(\mathcal{X}_{G}^{\left( \emptyset\right) },\Delta)$, is a normal subgroup of $\left( \mathcal{P}\left( V_{G}\right) ,\Delta\right)$. The quotient group $(\mathcal{P} \left( V_{G}\right) /\mathcal{X}_{G}^{\left(\emptyset\right)},\Delta)$ is identical to the set of all resource sets \begin{equation} \mathcal{P}\left( V_{G}\right) /\mathcal{X}_{G}^{\left( \emptyset\right) }=\left\{ \mathcal{X}_{G}^{\left( c\right) }:c\in\mathcal{C}_{G}\right\}, \end{equation} which we call the \emph{correlation group} of $\ket{G}$. Let $\Gamma_{G}$ and $\mathcal{K}_{G}$ denote the generating sets of $(\mathcal{X}_{G}^{\left( \emptyset\right) },\Delta)$ and $(\mathcal{P}(V_{G})/\mathcal{X}_{G}^{(\emptyset)}, \Delta )$, respectively. The stabilizer group $\left( \mathcal{S}_{G},\cdot\right) $ is isomorphic to the direct product of the X-chain group and the correlation group, \begin{equation} \left( \mathcal{S}_{G},\cdot\right) \sim\left( \left\langle \Gamma _{G}\right\rangle ,\Delta\right) \times\left( \left\langle \mathcal{K}_{G}\right\rangle ,\Delta\right) .\label{eq::prop_X-chain_group_correlation_group} \end{equation} As a result, the projector onto the graph state $|G\rangle$ is the product of the projectors associated with the stabilizers induced by the X-chain group and the correlation group, i.e. \begin{equation} |G\rangle\langle G|=\prod_{\kappa\in\mathcal{K}_{G}}\frac{1+s_{G}^{(\kappa)} }{2}\prod_{\gamma\in\Gamma_{G}}\frac{1+s_{G}^{(\gamma)}}{2} .\label{eq::X-chain_factorization_of_graph_state_projector} \end{equation} \hiddengls{XchainGroup}\hiddengls{corrGroupG}\hiddengls{corrGroup} \begin{proof} See Appendix \ref{sec::graph_states_proofs}.
\end{proof} \end{lemma} Note that the brackets $\braket{\Gamma_G}$ and $\braket{\mathcal{K}_G}$ denote the groups generated by $\Gamma_G$ and $\mathcal{K}_G$, respectively. The correlation group represents the partition of the power set $\mathcal{P}( V_{G})$ of the vertex set with respect to the correlation indices of the vertex subsets $\xi\in\mathcal{P}( V_{G})$. The members $\xi\in\braket{\mathcal{K}_G}$ of the correlation group possess distinct correlation indices. All of the members $\xi\in\mathcal{X}^{(c)}$ of the $c$-correlation resource set are connected by X-chains. Let $\xi_{1}^{(c)} \in\mathcal{X}^{(c)}$ and $\xi_{2}^{(c)}\in\mathcal{X}^{(c)}$ be two X-resources for the same correlation index $c$. Then there must exist an X-chain $\gamma\in\mathcal{X}_{G}^{(\emptyset)}$, such that \begin{equation} \xi_{2}^{(c)}=\xi_{1}^{(c)}\Delta\gamma. \end{equation} For instance, in the example of $|S_{3}\rangle$ (Fig. \ref{fig::S3_X-chain_illustation}), the resources of correlation $i^{(c)}=111$ (i.e. $c=\left\{ 1,2,3\right\} $) are connected by the X-chain $\{2,3\}$, i.e. $\{1,3\}=\{1,2\}\Delta\{2,3\}$. Therefore, one can choose one member of $\mathcal{X}^{(c)}$ to represent the whole resource set $\mathcal{X}^{(c)}$. Hence, after the X-chain factorization the group $\left( \mathcal{P}\left( V_{G}\right) ,\Delta\right) $ for $S_{3}$ becomes $\left\langle \mathcal{K}_{G}\right\rangle =\left\{ \emptyset,\left\{ 1\right\} ,\left\{ 2\right\} ,\left\{ 1,2\right\} \right\} $ with $\mathcal{K}_{G}=\left\{ \left\{ 1\right\} ,\left\{ 2\right\} \right\} $. \par In Eq. (\ref{eq::X-chain_factorization_of_graph_state_projector}), the Hilbert space $\mathbb{H}_G$ of the graph state $\ket{G}$ is first projected onto the subspace stabilized by the stabilizers $s_{G}^{(\gamma)}$ with $\gamma\in\Gamma_{G}$.
It is the subspace, $\mathrm{span}\left( \Psi_{\emptyset}\right) $, spanned by the stabilized states $|\psi_{\emptyset}\rangle$ with \begin{equation} \Psi_{\emptyset}:=\left\{ |\psi_{\emptyset}\rangle:s_{G}^{(\gamma)} |\psi_{\emptyset}\rangle=|\psi_{\emptyset}\rangle\text{, for all }\gamma \in\Gamma_{G}\right\} . \end{equation} In this projection, the states $\ket{\psi_{\emptyset}}$ can all be chosen as product states in the X-basis, since every X-chain stabilizer $s_{G}^{(\gamma)}$ consists solely of $\sigma_{X}$ operators. After the first projection, the graph state is then obtained by projecting the subspace $\mathrm{span}\left( \Psi_{\emptyset}\right) $ onto the state that is stabilized by the stabilizers $s_{G}^{(\kappa)}$ induced by the correlation group; that is, \begin{equation} \mathbb{H}_{G}\overset{\Gamma_{G}}{\longrightarrow}\Psi_{\emptyset} \overset{\mathcal{K}_{G}}{\longrightarrow}|G\rangle. \end{equation} This approach will be employed in the next section to derive the representation of graph states in the X-basis. \section{X-chain factorization of graph states} \label{sec::X-chain_factorization_of_graph_states} We express $|G\rangle$ in the X-basis as $|G\rangle=\sum_{i}\alpha_{i} |i_X\rangle$, with $\sum_i|\alpha_{i}|^2=1$, where the sum runs over all $2^n$ X-basis states. Since the X-chain stabilizers $s_{G}^{(\gamma)}$ stabilize $|G\rangle$, it holds that \begin{equation} \sum_i\alpha_{i}|i_{X}\rangle=\sum_i \alpha_{i} s_{G}^{(\gamma)}|i_{X}\rangle. \label{eq::no_name_1} \end{equation} Since $s_{G}^{(\gamma)}$ solely contains $\sigma_{X}$-operators, $s_{G}^{(\gamma)}|i_{X}\rangle=\pm|i_{X}\rangle$. In order to fulfill Eq. \eqref{eq::no_name_1}, only the plus sign is possible for basis states with nonvanishing amplitude, i.e. \begin{equation} s_{G}^{(\gamma)}|i_{X}\rangle=|i_{X}\rangle\text{ for all }\alpha_{i}\not =0. \end{equation} That means that the possible X-measurement outcomes are solely those X-basis states $|i_{X}\rangle$ which are stabilized by all X-chain stabilizers $s_{G}^{(\gamma)}$.
A graph state $|G\rangle$ is hence a superposition of such particular X-basis states. \par For example, the star graph state $|S_{3}\rangle$ in Fig. \ref{fig::correlation_index_and_x-resource} is stabilized by the X-chain stabilizer $s_{S_{3}}^{\{2,3\}}=\sigma_{X}^{\{2,3\}}$. Therefore $\ket{S_3}$ belongs to the space spanned by the states stabilized by $s_{S_{3}}^{\{2,3\}}$. From the table in Fig. \ref{fig::correlation_index_and_x-resource}, one observes that the X-basis states $|i^{(c)}\rangle$ with $c\in\mathcal{C}_{S_{3}}$ (see Def. \ref{def::X-resouces_of_correlation_indices}), corresponding to the correlation indices of $|S_{3}\rangle$, are stabilized by $s_{S_{3}}^{\{2,3\}}$, i.e. $\sigma_{X}^{\{2,3\}}\ket{i^{(c)}}=\ket{i^{(c)}}$ for all $c\in\mathcal{C}_{S_{3}}$. That means $|S_{3}\rangle$ belongs to the subspace, $\mathrm{span}(\Psi)$, spanned by $\Psi=\left\{ |i^{(c)}\rangle:c\in\mathcal{C}_{S_{3}}\right\}=\left\{ |000\rangle,|100\rangle,|011\rangle,|111\rangle\right\} $. Thus $|S_{3}\rangle$ can be represented by only $4$ X-basis states instead of $8$ Z-basis states. \bigskip \par In this section, we will derive a general mapping from the X-chain group and the correlation group to graph states in the X-basis; this is the question raised in Section \ref{sec::representation_of_graph_states}. We first introduce X-chain states and $\mathcal{K}$-correlation states (Definition \ref{def::X-chain_states_K-correlation_states}), which span the subspaces stabilized by the X-chain stabilizers and the $\mathcal{K}$-correlation stabilizers, respectively. Given the explicit form of the X-chain states and correlation states in the X-basis (Propositions \ref{prop::X-chain_states_in_X-basis} and \ref{prop::correlation_states_explicit_form}), one arrives at the X-chain factorization representation of graph states in Theorem \ref{theorem::graph_states_as_X-factorized_states}.
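The $|S_3\rangle$ example can be checked numerically: constructing the Z-basis amplitudes of Eq. \eqref{eq::graph_states_in_Z-basis} and applying the Hadamard transform on every qubit yields the X-basis amplitudes. A short Python sketch (our own naming, 0-based vertex labels; the surviving bit strings below are exactly the four states $|000\rangle,|100\rangle,|011\rangle,|111\rangle$):

```python
import itertools
import numpy as np

def graph_state_z(adj):
    # Z-basis amplitudes: (-1)^{#edges of the induced subgraph} / 2^{n/2}
    n = len(adj)
    triu = np.triu(adj)
    bits = list(itertools.product([0, 1], repeat=n))
    return np.array([(-1) ** int(np.array(i) @ triu @ np.array(i))
                     for i in bits]) / 2 ** (n / 2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def to_x_basis(amps):
    # change of basis |i_Z> -> |i_X> via the n-qubit Hadamard transform
    n = int(np.log2(len(amps)))
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    return Hn @ amps

S3 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # centre = index 0
x_amps = to_x_basis(graph_state_z(S3))
support = sorted(format(k, '03b') for k in range(8) if abs(x_amps[k]) > 1e-12)
print(support)  # ['000', '011', '100', '111']
```

Only half of the X-basis states survive, in agreement with the single X-chain generator $\{2,3\}$ of $|S_3\rangle$.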
\begin{definition} [X-chain states and correlation states] \label{def::X-chain_states_K-correlation_states} \Needspace*{8\baselineskip} Let $|G\rangle$ be a graph state with the X-chain group $\left\langle \Gamma_{G}\right\rangle $ and the correlation group $\left\langle \mathcal{K}_{G}\right\rangle $. We define the X-basis state $|i^{(x_{\Gamma_G})}\rangle$ (in short form $|i^{(x_{\Gamma})}\rangle$) as the state stabilized by the Pauli $\sigma_{X}$ operators in the following way: \begin{enumerate} \item \label{item::def_X-chain_state_condition_1} $\pi_{G}\left( \gamma\right) \sigma_{X}^{(\gamma)}|i^{(x_{\Gamma})}\rangle=|i^{(x_{\Gamma})}\rangle$, for all $\gamma\in\Gamma_{G}$, \item \label{item::def_X-chain_state_condition_2} $\sigma_{X}^{(\kappa )}|i^{(x_{\Gamma})}\rangle=|i^{(x_{\Gamma})}\rangle$, for all $\kappa \in\mathcal{K}_{G}$. \end{enumerate} The local unitarily transformed states \begin{equation} |\psi_{\emptyset}(\xi)\rangle=s_{G}^{(\xi)}\left\vert i^{(x_{\Gamma})}\right\rangle ,\quad\xi\in\braket{\mathcal{K}_G},\label{eq::def_X-chain_state} \end{equation} are called \emph{X-chain states}. Let $\braket{\mathcal{K}}\subseteq\braket{\mathcal{K}_G}$ be a correlation subgroup. Then a $\mathcal{K}$-\emph{correlation state} of the graph state $|G\rangle$, $|\psi_{\mathcal{K}}\left( \xi\right) \rangle$, is defined as \begin{equation} |\psi_{\mathcal{K}}\left( \xi\right) \rangle=s_{G}^{(\xi)}\prod_{\kappa \in\mathcal{K}}\frac{1+s_{G}^{(\kappa)}}{\sqrt{2}}|i^{(x_{\Gamma})} \rangle\label{eq::def_correlation_states} \end{equation} with $\xi\in\braket{\mathcal{K}_G}/\braket{\mathcal{K}}$.
For $\left\langle \mathcal{K}\right\rangle \subseteq\left\langle \mathcal{K}^{\prime}\right\rangle \subseteq\left\langle \mathcal{K}_{G}\right\rangle $, the corresponding set of $\mathcal{K}$-correlation states is denoted as \begin{equation} \Psi_{\mathcal{K}^{\prime}}^{(\mathcal{K)}}=\left\{ |\psi_{\mathcal{K} }\left( \xi\right) \rangle:\xi\in \braket{\mathcal{K'}}/\braket{\mathcal{K}}\right\} . \label{eq::def_K_set_of_K-correlation_states} \end{equation} \hiddengls{XchainState} \hiddengls{XchainStateBasic} \hiddengls{corrState} \hiddengls{corrStateSet} \end{definition} In this notation, the set of X-chain states is written as $\Psi _{\mathcal{K}_{G}}^{(\emptyset)}$, or in short form as $\Psi^{(\emptyset)}$, while the set of all $\mathcal{K}$-correlation states is denoted by $\Psi_{\mathcal{K}_{G}}^{(\mathcal{K)}}$, or in short form as $\Psi^{(\mathcal{K)}}$. Note that $|\psi_{\emptyset}\left( \emptyset\right) \rangle =|i^{(x_{\Gamma})}\rangle$, and that the X-chain states $|\psi_{\emptyset}\left( \xi\right) \rangle$ are $\emptyset$-correlation states. The X-basis state $|i^{(x_{\Gamma})}\rangle$ is the fundamental state from which the non-vanishing X-basis components of a graph state can be derived. According to its definition, $|i^{(x_{\Gamma})}\rangle$ depends on the generating sets of the X-chain group and the correlation group of a given graph state. One can employ the following approach to obtain the fundamental X-chain state $|i^{(x_{\Gamma})}\rangle$. \begin{proposition} [X-chain states in X-basis]\label{prop::X-chain_states_in_X-basis} \Needspace*{8\baselineskip} Let $|G\rangle$ be a graph state with the X-chain group $\left\langle \Gamma _{G}\right\rangle $ and the correlation group $\left\langle \mathcal{K}_{G}\right\rangle $. Let $\Gamma_{G}=\left\{ \gamma_{1},\gamma_{2} ,...\right\} $, and $\gamma_{i}=\left\{ v_{i_{1}},v_{i_{2}},\cdots\right\} $.
The generating sets $\Gamma_{G}$ and $\mathcal{K}_{G}$ can be chosen as \begin{enumerate} \item $\Gamma_G=\{\gamma_1,...,\gamma_k\}$ such that $\gamma_i\not\subseteq\gamma_j$ for all $\gamma_i, \gamma_j\in\Gamma_G$, \item $\mathcal{K}_{G}=\left\{ \left\{ v\right\} :v\in V_{G}\backslash \bigcup_{i=1}^{k}\left\{ v_{i_{1}}\right\} \right\} $. \end{enumerate} Here, the first element of $\gamma_i=\{v_{i_1},v_{i_2},...\}$ is selected in such a way that $v_{i_{1}}\neq v_{j_{1}}$ for all $i\not =j$. Then the X-chain state $\ket{\psi_{\emptyset}(\emptyset)}$ of $|G\rangle$ is an X-basis state, $|i^{(x_{\Gamma})}\rangle$, with \begin{equation} x_{\Gamma}=\left\{ v_{i_{1}}:\pi_{G}\left( \gamma_{i}\right) =-1\right\} . \end{equation} \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{proposition} \noindent The vertices $v_{i_{1}}$ are the key to the determination of $|i^{(x_{\Gamma})}\rangle$. First of all, we choose the X-chain generators $\Gamma_{G}$ such that $\gamma_{i}\not \subseteq \gamma_{j}$ for all $\gamma_{i},\gamma_{j}\in \Gamma_{G}$. This way, each X-chain generator possesses at least one vertex $v_{i_{1}}$ as its own exclusive vertex, i.e. $v_{i_{1}}\in\gamma _{i}\backslash\left( \cup_{j\not =i}\gamma_{j}\right) $. In other words, the vertex $v_{i_{1}}$ uniquely represents the X-chain generator $\gamma_{i}$. The correlation group generators are then chosen as the single-vertex sets $\{v\}$ with $v\in V_G\setminus \bigcup_{i}\left\{ v_{i_{1}}\right\}$. Finally, the vertex set $x_{\Gamma}$ of the fundamental X-chain state $|i^{(x_{\Gamma})}\rangle$ is the set of those $v_{i_{1}}$ whose X-chain generator $\gamma_{i}$ possesses a negative stabilizer parity. Note that in general the choice of the X-chain generators $\Gamma_{G}$ is not unique, and therefore neither are the fundamental X-chain states $|i^{(x_{\Gamma})}\rangle$.
However, the above-mentioned approach still arrives at the same set $\Psi^{(\emptyset)}$ of X-chain states, since the X-chain group is unique. \begin{figure*}[t!] \centering \subfloat[]{ \includegraphics[width=0.2\textwidth]{K4N24} \label{fig::determination_of_X-chain_state_graph} } \subfloat[]{ \Xchaindiagram[ GState=|K_{4}^{\neg 1}\rangle, GammaG={\{\{1,2,3\},\{2,4\}\}}, GammaGNum=2, xG=1000, KappaG={\{\{2\},\{3\}\}}, KappaGNum=2, heightScale=2, widthScale=2, scale=0.5 ] \label{fig::determination_of_X-chain_state_XchainDiagram} } \\ \subfloat[]{ \begin{tabular}{C{0.4\textwidth}C{0.55\textwidth}} \includegraphics[width=0.95\linewidth]{K4N24_X-factorization_InStr_1} & \begin{tabular} [c]{|c|c|c|c|c|c|c|c|} \toprule $\gamma\in\Gamma_{G}$ & $\left\{ 1,2,3\right\} $ & $\left\{ 2,4\right\} $ & $\mathcal{K}_{G}$ & \multicolumn{4}{c|}{$\left\{ \left\{ 2\right\} ,\left\{ 3\right\} \right\} $}\\ \colrule $v_{i_{1}}$ & $1$ & $4$ & $\xi\in\left\langle \mathcal{K}_{G}\right\rangle $ & $\emptyset$ & $\left\{ 2\right\} $ & $\left\{ 3\right\} $ & $\left\{ 2,3\right\} $\\ \colrule $\pi_{G}\left( \gamma\right) $ & $-1$ & $1$ & $\pi_{G}\left( \xi\right) $ & $1$ & $1$ & $1$ & $-1$\\ \colrule $x_{\Gamma}$ & \multicolumn{2}{c|}{$\left\{ 1\right\} $} & $i^{(c_{\xi})}$ & $0000$ & $1010$ & $1101$ & $0111$\\ \colrule $|i^{(x_{\Gamma})}\rangle$ & \multicolumn{2}{c|}{$|1000\rangle$} & $\ket{\psi_{\emptyset}\left( \xi\right)} $ & $|1000\rangle$ & $|0010\rangle$ & $|0101\rangle$ & $-|1111\rangle$\\ \botrule \end{tabular} \\ \end{tabular} \label{fig::determination_of_X-chain_state_GammaG_choice_1} } \\ \subfloat[]{ \begin{tabular}{C{0.4\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\linewidth]{K4N24_X-factorization_InStr_2} & \begin{tabular} [c]{|c|c|c|c|c|c|c|c|} \toprule $\gamma\in\Gamma_{G}$ & $\left\{ 2,1,3\right\} $ & $\left\{ 4,1,3\right\} $ & $\mathcal{K}_{G}$ & \multicolumn{4}{c|}{$\left\{ \left\{ 1\right\} ,\left\{ 3\right\} \right\} $}\\ \colrule $v_{i_{1}}$ & $2$ & $4$ &
$\xi\in\left\langle \mathcal{K}_{G}\right\rangle $ & $\emptyset$ & $\left\{ 1\right\} $ & $\left\{ 3\right\} $ & $\left\{ 1,3\right\} $\\ \colrule $\pi_{G}\left( \gamma\right) $ & $-1$ & $-1$ & $\pi_{G}\left( \xi\right) $ & $1$ & $1$ & $1$ & $-1$\\ \colrule $x_{\Gamma}$ & \multicolumn{2}{c|}{$\left\{ 2,4\right\} $} & $i^{(c_{\xi})}$ & $0000$ & $0111$ & $1101$ & $1010$\\ \colrule $|i^{(x_{\Gamma})}\rangle$ & \multicolumn{2}{c|}{$|0101\rangle$} & $\ket{\psi_{\emptyset}\left( \xi\right)} $ & $|0101\rangle$ & $|0010\rangle$ & $|1000\rangle$ & $-|1111\rangle$\\ \botrule \end{tabular} \\ \end{tabular} \label{fig::determination_of_X-chain_state_GammaG_choice_2} } \\ \caption{\colorfig Example for the determination of X-chain states (see Section \ref{sec::X-chain_factorization_of_graph_states} for details). (a) The graph state $|K_{4}^{\neg 1}\rangle$. (b) The factorization diagram of $|K_{4}^{\neg 1}\rangle$ (for an explanation, see Algorithm \ref{algo::factorization_diagram_graph_states}). (c),(d) The incidence structures of the X-chain generators and correlation group generators of $|K_{4}^{\neg 1}\rangle$. The choices of these generators are not unique and lead to different fundamental X-chain states $\ket{i^{(x_{\Gamma})}}$. However, they arrive at the identical set of X-chain states $\{\ket{\psi_{\emptyset}(\xi)}: \xi\in\braket{\mathcal{K}_G}\}$. } \label{fig::determination_of_X-chain_states} \end{figure*} \par Let us illustrate these concepts by an example, the graph state $|K_{4}^{\neg 1}\rangle$ (Fig. \ref{fig::determination_of_X-chain_state_graph}), which corresponds to the graph with one edge missing from the complete graph $K_{4}$. Its X-chain generators can be chosen as $\Gamma_{G}=\left\{ \gamma_{1},\gamma_{2}\right\} =\left\{ \left\{ 1,2,3\right\} ,\left\{ 2,4\right\}\right\} $ (see Fig. \ref{fig::determination_of_X-chain_state_GammaG_choice_1}). The exclusive vertex $v_{1_{1}}$ for $\gamma_{1}$ can be chosen as $1$, while $v_{2_{1}}$ for $\gamma_{2}$ is $4$.
Only $\gamma_{1}$ has negative parity, and therefore $x_{\Gamma}=\left\{ 1\right\} $ and the fundamental X-chain state is $\ket{i^{(x_{\Gamma})}}=|1000\rangle$. \par From the fundamental X-chain state $\ket{i^{(x_{\Gamma})}}$ one can derive all the X-chain states and correlation states with the following proposition. \begin{proposition} [Form of X-chain states and $\mathcal{K}$-correlation states] \label{prop::correlation_states_explicit_form} \Needspace*{8\baselineskip} Let $\xi\in\left\langle \mathcal{K}_{G}\right\rangle $ be an X-resource and $\left\langle \mathcal{K}\right\rangle \subseteq\left\langle \mathcal{K}_{G}\right\rangle $. An X-chain state is given as \begin{equation} |\psi_{\emptyset}\left( \xi\right) \rangle=\pi_{G}\left( \xi\right) \left\vert i^{(x_{\Gamma})}\oplus i^{(c_{\xi})}\right\rangle, \end{equation} where $\pi_{G}\left( \xi\right) $ is the stabilizer parity of $\xi$ (see Eq. \eqref{eq::G-parity_formula}), and $c_{\xi}$ is the correlation index of $\xi$. A $\mathcal{K}$-correlation state is a superposition of X-chain states, \begin{equation} |\psi_{\mathcal{K}}\left( \xi\right) \rangle=\frac{1}{2^{\left\vert \mathcal{K}\right\vert /2}}\sum_{\xi^{\prime}\in\left\langle \mathcal{K} \right\rangle }|\psi_{\emptyset}\left( \xi\Delta\xi^{\prime}\right) \rangle. \label{eq::correlation_state_explicit_form} \end{equation} \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{proposition} \noindent According to this proposition, the X-chain states of $\ket{K_4^{\lnot1}}$ derived from $\ket{i^{(x_\Gamma)}}=\ket{1000}$ are given in the table in Fig. \ref{fig::determination_of_X-chain_state_GammaG_choice_1}. Alternatively, one can also choose the X-chain generators $\Gamma_{G}=\left\{ \gamma_{1} ,\gamma_{2}\right\} =\left\{ \left\{ 2,1,3\right\} ,\left\{ 4,1,3\right\} \right\} $ (see Fig. \ref{fig::determination_of_X-chain_state_GammaG_choice_2}). In this case $v_{1_{1}}=2$ and $v_{2_{1}}=4$.
The parities of $\gamma_{1}$ and $\gamma_{2}$ are both negative, hence $|i^{(x_{\Gamma})}\rangle=|0101\rangle$. However, the sets of obtained X-chain states $\Psi^{(\emptyset)}$ are identical in both cases. \par The correlation states $|\psi_{\mathcal{K}}\left( \xi\right) \rangle$ are then superpositions of their corresponding X-chain states. For example, in Fig. \ref{fig::determination_of_X-chain_state_GammaG_choice_1} the correlation state $|\psi_{\{\{2,3\}\}}\left( \emptyset\right) \rangle=(\ket{1000}-\ket{1111})/\sqrt{2}$. The correlation states have the following properties. \begin{corollary} [Properties of $\mathcal{K}$-correlation states] \label{coro::properties_of_correlation_states} \Needspace*{8\baselineskip} Let $\braket{\mathcal{K}}\subseteq \braket{\mathcal{K}_G}$ be a correlation subgroup, then \begin{enumerate} \item \label{item::correlation_states_property_1} $|\psi_{\mathcal{K}}\left( \xi\right) \rangle$ is stabilized by all stabilizers $s_{G}^{(\kappa)}$ with $\kappa\in\left\langle \Gamma_{G}\right\rangle \times\left\langle \mathcal{K}\right\rangle $, \begin{equation} s_{G}^{(\kappa)}|\psi_{\mathcal{K}}\left( \xi\right) \rangle=|\psi _{\mathcal{K}}\left( \xi\right) \rangle. \end{equation} Therefore the space $\mathrm{span}(\Psi^{(\mathcal{K})})$, see Eq. \eqref{eq::def_K_set_of_K-correlation_states}, is also stabilized by $s_{G}^{(\kappa)}$ with $\kappa\in\left\langle \Gamma_{G}\right\rangle \times\left\langle \mathcal{K}\right\rangle $. \item \label{item::correlation_states_property_2} For $\xi_{1}\in\left\langle \mathcal{K}_{G}\right\rangle $ and $\xi_{1}\not \in \left\langle \mathcal{K}\right\rangle $, it holds that \begin{equation} s_{G}^{(\xi_{1})}|\psi_{\mathcal{K}}\left( \xi_{2}\right) \rangle =|\psi_{\mathcal{K}}\left( \xi_{1}\Delta\xi_{2}\right) \rangle.
\end{equation} \item \label{item::correlation_states_property_3} For $\kappa\in \mathcal{K}_{G}$ and $\kappa\not \in \mathcal{K}$, the $\mathcal{K}\cup\left\{ \kappa\right\} $-correlation state can be obtained by \begin{equation} |\psi_{\mathcal{K}\cup\{\kappa\}}\left( \xi\right) \rangle=\frac {1+s_{G}^{(\kappa)}}{\sqrt{2}}|\psi_{\mathcal{K}}\left( \xi\right) \rangle. \end{equation} \end{enumerate} \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{corollary} With these properties one can derive the representation of graph states in the X-basis. \begin{theorem} [X-chain state representation of graph states] \label{theorem::graph_states_as_X-factorized_states} \Needspace*{8\baselineskip} Let $|G\rangle$ be a graph state. Then $|G\rangle$ is a $\mathcal{K}_{G}$-correlation state, which is a superposition of X-chain states $|\psi_{\emptyset}\left( \xi\right) \rangle $, i.e. \begin{equation} |G\rangle=|\psi_{\mathcal{K}_{G}}\rangle=\frac{1}{2^{\left\vert \mathcal{K}_{G}\right\vert /2}}\sum_{\xi\in\left\langle \mathcal{K}_{G}\right\rangle }|\psi_{\emptyset}\left( \xi\right) \rangle. \label{eq::graph_state_in_X_basis} \end{equation} \begin{proof} According to property \ref{item::correlation_states_property_1} in Corollary \ref{coro::properties_of_correlation_states}, one can infer that $\ket{\psi_{\mathcal{K}_{G}}}$ is stabilized by all graph state stabilizers $s_{G}^{(\xi)}$ with $\xi\in\left\langle \Gamma_{G}\right\rangle \times\left\langle \mathcal{K}_{G}\right\rangle $. As a result of Lemma \ref{lemma::X-chain_factorized_group}, $\ket{\psi_{\mathcal{K}_{G}}}$ is stabilized by the whole graph state stabilizer group $\mathcal{S}_{G}$. According to the definition of graph states in the stabilizer formalism, one can infer that $|G\rangle =\ket{\psi_{\mathcal{K}_{G}}}$. The explicit form of $\ket{\psi_{\mathcal{K}_{G}}}$ in Eq.
\eqref{eq::graph_state_in_X_basis} is obtained by Proposition \ref{prop::correlation_states_explicit_form}. \end{proof} \end{theorem} \par Note that the graph state obtained by this theorem may differ from the real one by a global phase $-1$, i.e. $\ket{G}=-\ket{\psi_{\mathcal{K}_G}}$ \footnote{This global phase can be corrected by the sign of the sum of the parities of all X-resources in the correlation group $\braket{\mathcal{K}_G}$, $\alpha=\mathrm{sign}(\sum_{\xi\in\braket{\mathcal{K}_G}}\pi_G(\xi))$, i.e. $\ket{G}=\alpha\ket{\psi_{\mathcal{K}_G}}$.}. We summarize the X-chain factorization approach to the representation of graph states in a so-called factorization diagram. \begin{algorithm} [Factorization diagram]\label{algo::factorization_diagram_graph_states} \Needspace*{8\baselineskip} The X-chain factorization of graph states can be described by the \emph{factorization diagram} shown in Fig. \ref{fig::factorization_diagram_graph_states}. \begin{enumerate} \item One decomposes the group $\mathcal{P}(V_G)$ into the direct product of the X-chain group $\braket{\Gamma_G}$ and the correlation group $\braket{\mathcal{K}_G}$ (Lemma \ref{lemma::X-chain_factorized_group}). \item From the X-chain group $\braket{\Gamma_G}$, one obtains the set of X-chain states $\Psi^{(\emptyset)}_{\mathcal{K}_G}$ (Proposition \ref{prop::X-chain_states_in_X-basis}). \item From the correlation group $\braket{\mathcal{K}_G}$, one obtains the graph state via the superposition of the X-chain states in $\Psi^{(\emptyset)}_{\mathcal{K}_G}$ (Theorem \ref{theorem::graph_states_as_X-factorized_states}). \end{enumerate} \end{algorithm} \begin{figure}[ht!] \centering \Xchaindiagram \caption{\colorfig X-chain factorization diagram of graph states: A graphical summary of Propositions \ref{prop::X-chain_states_in_X-basis} and \ref{prop::correlation_states_explicit_form} and Theorem \ref{theorem::graph_states_as_X-factorized_states}.
This diagram illustrates the algorithm for representing a graph state in the X-basis.} \label{fig::factorization_diagram_graph_states} \end{figure} \noindent The arrows in the factorization diagram can be interpreted as mappings from the sets of X-resources to their corresponding stabilized Hilbert subspaces. As already discussed at the end of Section \ref{sec::representation_of_graph_states}, a graph state is mapped from the powerset of vertices by stabilizer induction, which is depicted on the left-hand side of the equality in the diagram. The equation in the first row is the X-chain factorization of the group $(\mathcal{P}(V_{G}),\Delta)$ (Lemma \ref{lemma::X-chain_factorized_group}). The arrow from the X-chain group $\Gamma_{G}$ to the X-chain states $\Psi^{(\emptyset)}$ represents the mapping from the X-chain group to the stabilized subspace spanned by $\Psi^{(\emptyset)}$ (Definition \ref{def::X-chain_states_K-correlation_states} and Proposition \ref{prop::X-chain_states_in_X-basis}). The arrow from the correlation group $\langle\mathcal{K}_{G}\rangle$ through the X-chain states $\Psi^{(\emptyset)}$ to the $\mathcal{K}_{G}$-correlation state is a mapping from the subspace $\mathrm{span}(\Psi^{(\emptyset)})$ to the $\mathcal{K}_{G}$-correlation state $|\psi_{\mathcal{K}_{G}}\rangle$, which is stabilized by the $\mathcal{K}_{G}$-stabilizers. The mapping represented by this arrow is the summation (superposition) of the X-chain states over the correlation group $\braket{\mathcal{K}_G}$ (Proposition \ref{prop::correlation_states_explicit_form}). Since the graph state $\ket{G}$ is the unique state stabilized by the stabilizers induced by the group $\braket{\Gamma_G}\times\braket{\mathcal{K}_G}$, it is identical to the $\mathcal{K}_{G}$-correlation state $|\psi_{\mathcal{K}_{G}}\rangle$ (Theorem \ref{theorem::graph_states_as_X-factorized_states}), which is represented by the equality in the last line of the factorization diagram.
With the help of the factorization diagram in Fig. \ref{fig::determination_of_X-chain_state_XchainDiagram}, the graph state $|K_{4}^{\lnot1}\rangle$ is given by% \begin{equation} |K_{4}^{\lnot1}\rangle=\frac{1}{2}\left( \left\vert 1000\right\rangle +\left\vert 0010\right\rangle +\left\vert 0101\right\rangle -\left\vert 1111\right\rangle \right) . \end{equation} \bigskip \par Since the edge number $|E_{G[\xi]}|$ is identical to the product $\langle i_{Z}^{(\xi)},i_{Z}^{(\xi)}\rangle_{A_{G}}$ in Eq. \eqref{eq::graph_states_in_Z-basis}, according to the definition of the stabilizer-parity (Def. \ref{def::stabilizer_parity}), \begin{equation} \pi_{G}\left( \xi\right) = (-1)^{\langle i_{Z}^{(\xi)},i_{Z}^{(\xi)}\rangle_{A_{G}}}. \end{equation} Hence the representation of graph states in the Z-basis in Eq. \eqref{eq::graph_states_in_Z-basis} can be reformulated as% \begin{equation} |G\rangle=\frac{1}{2^{n/2}}\sum_{\xi\subseteq V_{G}}\pi_{G}\left( \xi\right) |i_{Z}^{(\xi)}\rangle. \label{eq::graph_states_in_Z-basis_with_pi} \end{equation} Comparing this Z-representation with the representation of a graph state in the X-basis given in Eq. \eqref{eq::graph_state_in_X_basis}, the number of terms is reduced from $2^{|V_{G}|}$ to $2^{|\mathcal{K}_{G}|}$. The correlation group $\braket{\mathcal{K}_{G}}$ can be obtained directly once the X-chain group is known. The X-chain group can be found via the criterion that a subset $\xi\subseteq V_G$ is an X-chain if and only if the cardinality of the intersection of each vertex neighborhood with $\xi$, i.e. $|N_v\cap \xi|$, is even for all $v\in V_G$ \cite{WuKampermannBruss2015-XChainAlgo}. The search for the X-chains of a graph state $\ket{G}$ is therefore equivalent to finding the kernel modulo $2$ (the binary null space) of the adjacency matrix of the graph $G$. Since this kernel can be computed efficiently, e.g. by Gaussian elimination over $\mathbb{F}_2$, the representation of graph states in the X-basis is feasible. The larger the X-chain group of a graph state, the smaller its correlation group, and hence the more efficient its X-chain factorization.
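As an illustration of this kernel criterion, the following minimal sketch (an illustrative brute-force aid, not the algorithm implementation of Ref. \cite{WuKampermannBruss2015-XChainAlgo}; vertex $v$ corresponds to row and column $v$ of the adjacency matrix) computes a basis of the binary null space by Gaussian elimination over $\mathbb{F}_2$:

```python
def gf2_kernel(A):
    """Basis of the null space of the binary matrix A over GF(2).

    A @ x = 0 (mod 2) states that |N_v intersect x| is even for every
    vertex v, so each basis vector is the characteristic vector of an
    X-chain generator of the graph with adjacency matrix A.
    """
    n = len(A)
    M = [row[:] for row in A]  # work on a copy
    pivots = {}                # column -> pivot row
    rank = 0
    for col in range(n):
        piv = next((r for r in range(rank, n) if M[r][col]), None)
        if piv is None:
            continue           # free column
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(n):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        pivots[col] = rank
        rank += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[free] = 1
        for col, prow in pivots.items():
            v[col] = M[prow][free]  # back-substitute the free variable
        basis.append(v)
    return basis

# The triangle has the single X-chain {1,2,3}; the path 1-2-3 has {1,3}.
print(gf2_kernel([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))  # [[1, 1, 1]]
print(gf2_kernel([[0, 1, 0], [1, 0, 1], [0, 1, 0]]))  # [[1, 0, 1]]
```

For an edgeless graph the kernel is the full standard basis, reflecting that every single vertex is then an X-chain.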
\par Note that not every graph state has non-trivial X-chains (non-trivial meaning different from the empty set). For graph states without non-trivial X-chains, the X-chain factorization contains all X-basis states and thus has the same complexity as the Z-representation. \par Moreover, the X-chain factorization of graph states in Theorem \ref{theorem::graph_states_as_X-factorized_states} implies that the possible outcomes of X-measurements are only the X-chain states $|\psi_{\emptyset}\left( \xi\right) \rangle$. Consequently, two graph states with different X-chain groups can have different X-chain states, and hence are distinguishable via the X-measurement outcomes. In Table \ref{table::X-chain_states_of_3-vertex_graph_states}, we list the X-chain generators and X-chain states of graph states with $3$ vertices. Since the X-chain states of these graph states differ from each other, one can distinguish these $8$ graph states via local X-measurements with non-zero probability of success. \def0.2\columnwidth{0.2\columnwidth} \begin{table}[ht!] 
\begin{tabular} [c]{|C{0.2\columnwidth}|C{0.25\columnwidth}|C{0.5\columnwidth}|}% \toprule $\ket{G}$ & $\Gamma_{G}$ & $\Psi^{(\emptyset)}_{\mathcal{K}_{G}}=\{\ket{\psi_{\emptyset}(\xi)}:\xi\in\braket{\mathcal{K}_G}\}$\\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_1} & $\left\{ \{1\},\{2\},\{3\}\right\} $ & $\left\{ \left\vert 000\right\rangle \right\} $\\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_2} & $\left\{ \{3\} \right\} $ & $\left\{\ket{000},\ket{010},\ket{100},-\ket{110}\right\}$ \\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_3} & $\left\{ \{2\} \right\} $ & $\left\{\ket{000},\ket{001},\ket{100},-\ket{101}\right\}$ \\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_4} & $\left\{ \{1\} \right\} $ & $\left\{\ket{000},\ket{001},\ket{010},-\ket{011}\right\}$ \\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_5} & $\left\{ \{2,3\} \right\} $ & $\left\{\ket{000},\ket{100},\ket{011},-\ket{111}\right\}$ \\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_6} & $\left\{ \{1,3\} \right\} $ & $\left\{\ket{000},\ket{100},\ket{101},-\ket{111}\right\}$ \\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_7} & $\left\{ \{1,2\} \right\} $ & $\left\{\ket{000},\ket{001},\ket{110},-\ket{111}\right\}$ \\ \colrule \includegraphics[width=0.2\columnwidth]{3-vertex_graphs_8} & $\left\{ \{1,2,3\} \right\} $ & $\left\{ \left\vert 100\right\rangle ,\left\vert 010\right\rangle ,\left\vert 001\right\rangle ,-\left\vert 111\right\rangle \right\} $ \\ \botrule \end{tabular} \caption{X-chain states of 3-vertex graph states} \label{table::X-chain_states_of_3-vertex_graph_states} \end{table} \section{Application of the X-chain factorization} \label{sec::application_of_X-chains} The representation of graph states in the X-chain factorization reveals certain substructures of graph states. 
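The table entries can be cross-checked by brute force. The sketch below (an illustrative check rather than the method of this paper: it builds the full $2^n$-dimensional statevector, with vertex $v$ taken as the $v$-th bit of the basis index) constructs a graph state in the Z-basis and Hadamard-transforms it to read off its X-basis support:

```python
from itertools import product

def graph_state_z(n, edges):
    """Z-basis amplitudes of |G> = prod_e CZ_e |+>^n."""
    amps = []
    for bits in product((0, 1), repeat=n):
        e = sum(bits[u] & bits[v] for u, v in edges)  # edges in induced subgraph
        amps.append((-1) ** e / 2 ** (n / 2))
    return amps

def to_x_basis(amps):
    """Hadamard on every qubit: <j_X|G> = 2^{-n/2} sum_i (-1)^{i.j} <i_Z|G>."""
    n = len(amps).bit_length() - 1
    return [sum((-1) ** bin(i & j).count("1") * a for i, a in enumerate(amps))
            / 2 ** (n / 2) for j in range(2 ** n)]

# Triangle graph: X-basis support {|100>, |010>, |001>, -|111>},
# in agreement with the last row of the table.
x = to_x_basis(graph_state_z(3, [(0, 1), (0, 2), (1, 2)]))
support = {format(j, "03b"): round(a, 6) for j, a in enumerate(x) if abs(a) > 1e-9}
print(support)  # {'001': 0.5, '010': 0.5, '100': 0.5, '111': -0.5}
```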
In this section, we discuss its usefulness for the calculation of graph state overlaps, the Schmidt decomposition and unilateral projections in bipartite systems. \subsection{Graph state overlaps} \label{sec::graph_state_overlap} In \cite{WuRKSKMBruss2014-05}, the overlaps of graph states are the basis for genuine multipartite entanglement detection of randomized graph states with projector-based witnesses $W_{G}=\id/2-\projector{G}$, see \cite{AcinBLSanpera2001-07,GuhneHBELMSanpera2002-12}, where $G$ is a connected graph. An expectation value $\mathrm{tr}(|H\rangle\langle H|G\rangle\langle G|)>1/2$ indicates the presence of genuine multipartite entanglement of the graph state $\ket{H}$. \par In general, a graph state $|G\rangle=\prod_{e\in E_{G}}U_{Z}^{(e)}|0_{X}^{\otimes n}\rangle$ is created by controlled-Z operators $U_{Z}^{(e)}$, where \begin{equation} U_{Z}^{\{v_a,v_b\}}:=\projector{0}^{(a)}\otimes\id^{(b)}+\projector{1}^{(a)}\otimes\sigma_Z^{(b)}. \end{equation} Since the operators $U_{Z}^{(e)}$ commute for different edges $e$ and are unitary and Hermitian, the overlap $\braket{G|H}$ can be calculated as% \begin{equation} \label{eq::graph_state_overlap_1} \left\langle G|H\right\rangle =\langle0^{\otimes n}_{X}|\prod_{e\in E_{G}\Delta E_{H}% }U_{Z}^{(e)}|0^{\otimes n}_{X}\rangle=\langle0^{\otimes n}_{X}|G\Delta H\rangle. \end{equation} According to Eq. \eqref{eq::graph_states_in_Z-basis}, \begin{equation} \label{eq::graph_state_overlap_2} \langle G|H\rangle=\frac{1}{2^{n}}\sum_{i=0}^{2^{n}-1} \left( -1\right) ^{\left\langle i_{Z},i_{Z}\right\rangle _{A_{G\Delta H}}},% \end{equation} where $G\Delta H$ is the symmetric difference of the graphs $G$ and $H$, i.e. the graph $(V_{G\Delta H},E_{G\Delta H})$ with vertices $V_{G\Delta H}=V_{G}=V_{H}$ and edges $E_{G\Delta H}=\left(E_{G}\cup E_{H}\right)\setminus\left(E_{G}\cap E_{H}\right)$. However, the complexity of this calculation increases exponentially with the size of the system. \par The quantity obtained from Eq. 
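The reduction to the symmetric-difference graph can be verified numerically on small graphs. In the sketch below (a brute-force illustration; the overall normalization is fixed by requiring $\braket{G|G}=1$), the direct Z-basis overlap of two graph states is compared with the overlap of their symmetric-difference graph state with $|0_X^{\otimes n}\rangle$:

```python
def overlap(n, edges_G, edges_H):
    """<G|H> summed directly over all 2^n Z-basis amplitudes."""
    total = 0
    for i in range(2 ** n):
        bits = [(i >> v) & 1 for v in range(n)]
        eG = sum(bits[u] & bits[v] for u, v in edges_G)  # induced edges in G
        eH = sum(bits[u] & bits[v] for u, v in edges_H)  # induced edges in H
        total += (-1) ** (eG + eH)
    return total / 2 ** n

def sym_diff(edges_G, edges_H):
    """Edge set of the symmetric-difference graph G Delta H."""
    return [tuple(e) for e in
            set(map(frozenset, edges_G)) ^ set(map(frozenset, edges_H))]

path, triangle = [(0, 1), (1, 2)], [(0, 1), (1, 2), (0, 2)]
# <G|H> equals the overlap of |G Delta H> with the empty-graph state |0_X^n>
assert overlap(3, path, triangle) == overlap(3, sym_diff(path, triangle), [])
assert overlap(3, triangle, triangle) == 1   # normalization check
assert overlap(3, triangle, []) == 0         # the triangle state is Z-balanced
```

The equality holds because the exponents add: the edges shared by $G$ and $H$ contribute an even number of times and cancel modulo $2$.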
\eqref{eq::graph_states_in_Z-basis}, \begin{equation} \left\langle 0^{\otimes n}_{X}|G\right\rangle =\frac{1}{2^{n}}\sum_{i=0}^{2^{n}-1}\left(-1\right) ^{\left\langle i_{Z},i_{Z}\right\rangle _{A}},% \end{equation} corresponds to the normalized difference between the numbers of positive and negative amplitudes of $\ket{G}$ in the Z-basis. We can define for each graph state $\ket{G}$ a Boolean function $f_{G}(i):=\braket{i_Z,i_Z}_{A} \pmod 2$ with $A$ being the adjacency matrix. The function $f_{G}$ is balanced if and only if $\braket{0^{\otimes n}_X|G}=0$; otherwise it is biased. We introduce the bias degree of a graph state and define its Z-balance as follows. \begin{definition} [Bias degree and Z-balanced graph states]% \label{def::bias_degree_balanced_graph_states} \Needspace*{8\baselineskip} The \emph{(Z-)bias degree} $\beta$ of a graph state $|G\rangle$ with $n$ vertices is defined as the overlap \begin{equation} \beta(|G\rangle):=\langle0_{X}^{\otimes n}|G\rangle, \label{eq::def_bias_degree_of_graphs}% \end{equation} where $|0_{X}\rangle=\left( |0_{Z}\rangle+|1_{Z}\rangle\right) /\sqrt{2}$. A graph state with zero bias degree is called \emph{Z-balanced}. \hiddengls{biasDegree} \end{definition} \par The bias degree is related to the weight of a graph state, $\omega^{-}\left( G\right) :=\left\vert \left\{ i_{Z}:\langle i_{Z}|G\rangle/\left\vert \langle i_{Z}|G\rangle\right\vert =-1\right\} \right\vert $, which is equal to the number of negative amplitudes of $|G\rangle$ in the Z-basis \cite{CosentinoSimone2009-Weight}. The probability of finding a negative amplitude in the Z-basis is $1/2-\beta(|G\rangle)/2$, which is equal to $\omega^{-}\left( G\right) /2^{n}$. Note that as a result of Eq. \eqref{eq::graph_states_in_Z-basis_with_pi}, the bias degree of a graph state is equal to the normalized sum of its stabilizer-parities, \begin{equation} \beta(\ket{G}) = \frac{1}{2^{n}}\sum_{\xi\subseteq V_G}\pi_G(\xi). 
\end{equation} \bigskip \par As a result of Theorem \ref{theorem::graph_states_as_X-factorized_states}, the bias degree $\braket{0_X^{\otimes n}|G}$ depends only on the number of X-chain generators and the parities of their corresponding X-resources. \begin{corollary} [Graph state overlaps and bias degrees]\label{coro::balanced_and_orthogonal_graph_states} \Needspace*{10\baselineskip} The overlap of two graph states $\ket{G}$ and $\ket{H}$ is equal to the bias degree of the graph state $\ket{G \Delta H}$, i.e. \begin{equation}\label{eq::graph_state_overlap_coro} \braket{G|H} = \beta(\ket{G\Delta H}). \end{equation} The bias degree of a graph state $|G\rangle$ satisfies% \begin{equation} |\beta(|G\rangle)|=\frac{1}{2^{\left( n-\left\vert \Gamma_{G}\right\vert \right) /2}}\prod_{\gamma\in\Gamma_{G}}\delta_{\pi_{G}(\gamma)}^{1}, \end{equation} where $\Gamma_{G}$ is the X-chain generating set of $|G\rangle$, $\delta$ is the Kronecker delta and $\pi _{G}(\gamma)$ is the stabilizer-parity of the X-chain generator $\gamma$. \begin{proof} First we prove that there does not exist $\xi$ such that $c_{\xi}=x_{\Gamma}$. Assume $c_{\xi}=x_{\Gamma}$; then $|c_{\xi}\cap\gamma|\overset {\operatorname{mod}2}{=}|\xi\cap c_{\gamma}|=0$. However, according to the definition of $x_{\Gamma}$ (Prop. \ref{prop::X-chain_states_in_X-basis}), $|c_{\xi}\cap\gamma|=|x_{\Gamma}\cap\gamma|=1$, which contradicts $|c_{\xi}% \cap\gamma|=0\operatorname{mod}2$. Hence the only X-chain state in the superposition that can have a non-vanishing overlap with $\ket{0_X^{\otimes n}}$ is $\ket{i^{(x_{\Gamma})}}$. Therefore Theorem \ref{theorem::graph_states_as_X-factorized_states} leads to% \begin{equation} |\beta(|G\rangle)|=\frac{1}{2^{\left( n-\left\vert \Gamma_{G}\right\vert \right) /2}}\langle 0_{X}|i^{(x_{\Gamma})}\rangle . 
\end{equation} According to the definition of the X-chain basis, $x_{\Gamma}=\emptyset$ if and only if $\pi_{G}\left( \gamma\right) =1$ for all X-chain generators $\gamma\in\Gamma_{G}$, which means $\left\langle 0_{X}|i^{(x_{\Gamma}% )}\right\rangle =\prod_{\gamma\in\Gamma_{G}}\delta_{\pi_{G}(\gamma)}^{1}$. \end{proof} \end{corollary} \noindent In \cite{CosentinoSimone2009-Weight}, the authors relate the weight $\omega^{-}\left( G\right) $ to the binary rank of the adjacency matrix of graphs. Our Corollary \ref{coro::balanced_and_orthogonal_graph_states} is a similar result, showing that the bias degree depends on the binary rank of the adjacency matrix, which is equal to $n-\left\vert \Gamma_{G}\right\vert $. \bigskip \par Here, we focus on the bias degree and Z-balance of graph states. Since the X-chain group of a graph state can be determined efficiently, Corollary \ref{coro::balanced_and_orthogonal_graph_states} provides a method to calculate the graph state overlap that is more efficient than Eq. \eqref{eq::graph_state_overlap_2}. As a result of Corollary \ref{coro::balanced_and_orthogonal_graph_states}, we arrive at the following corollary. \begin{corollary}[Z-balanced graph states] A graph state is Z-balanced if and only if it has at least one X-chain generator $\gamma^{-}$ with negative stabilizer-parity, i.e. $\left\vert E\left( G\left[ \gamma^{-}\right]\right) \right\vert $ is odd. Two graph states $\ket{G}$ and $\ket{H}$ are orthogonal if and only if $|G\Delta H\rangle$ is Z-balanced. \end{corollary} \par Knowing all the Z-balanced graph states with vertex number $n$ allows one to identify all pairs of orthogonal graph states with $n$ vertices. Note that relabeling a graph state (graph isomorphism) does not change its bias degree, since the structure of the X-chain group does not change under graph isomorphism. \par In Fig. \ref{fig::balanced_graph_states_up_to_5v}, the Z-balanced graph states up to five vertices are listed. Every graph in the figure represents an isomorphism class. 
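The Z-balance criterion can be cross-checked by exhaustion on small graphs; the following is an illustrative brute-force sketch (enumerating all vertex subsets rather than using the efficient kernel search, with the bias degree normalized so that $\braket{G|G}=1$):

```python
from itertools import product

def bias_degree(n, edges):
    """beta(|G>) = <0_X^n|G> via the signs of all Z-basis amplitudes."""
    total = sum((-1) ** sum(b[u] & b[v] for u, v in edges)
                for b in product((0, 1), repeat=n))
    return total / 2 ** n

def has_negative_parity_x_chain(n, edges):
    """Search all subsets for an X-chain gamma with |E(G[gamma])| odd."""
    nbrs = [{w for e in edges for w in e if v in e and w != v} for v in range(n)]
    for bits in product((0, 1), repeat=n):
        chain = {v for v in range(n) if bits[v]}
        if not chain or any(len(nbrs[v] & chain) % 2 for v in range(n)):
            continue  # empty set, or not an X-chain
        induced = sum(1 for u, v in edges if u in chain and v in chain)
        if induced % 2:
            return True
    return False

triangle = [(0, 1), (0, 2), (1, 2)]
path = [(0, 1), (1, 2)]
# C_3 is Z-balanced: its X-chain {1,2,3} induces an odd number (3) of edges.
assert bias_degree(3, triangle) == 0 and has_negative_parity_x_chain(3, triangle)
# The 3-vertex path is biased: its only X-chain induces no edge.
assert bias_degree(3, path) == 0.5 and not has_negative_parity_x_chain(3, path)
```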
From these balanced graph states one can obtain orthogonal graph states via the graph symmetric difference. Examples of orthogonal graph states derived from the Z-balanced graph states $|C_{3}\rangle$ and $|C_{5}\rangle$ are shown in Figs. \ref{fig::orthogonal_graph_states_C3} and \ref{fig::orthogonal_graph_states_C5} , respectively, ($C_{3}$ and $C_{5}$ are the first and fifth graph in Fig. \ref{fig::balanced_graph_states_up_to_5v} ). \begin{figure*}[ht!] \centering \begin{tabular}{|C{0.8\textwidth}|}\centering \includegraphics[width=0.95\linewidth]{balanced_graph_states_up_to_5v}\\ \end{tabular} \caption{\colorfig Z-balanced graph states (see Def. \ref{def::bias_degree_balanced_graph_states}) up to $5$ vertices: Each graph represents a graph isomorphic class. Each balanced graph state has at least one X-chain $\gamma^-$ with negative parity. In each graph, the $\gamma^-$-induced subgraph $G[\gamma^-]$ is highlighted in red with bold edges. Every highlighted $\gamma^-$-induced subgraph has an odd edge number.} \label{fig::balanced_graph_states_up_to_5v} \end{figure*}% \begin{figure*}[ht] \centering \input{tabular_orthogonal_graph_states_C3} \caption{Orthogonal graph states derived from the Z-balanced graph state $|C_3\rangle$: The graph states in each cell are orthogonal to each other. Their symmetric difference is identical to the cycle graph $C_3$, where $C_3$ is the first graph in Fig. \ref{fig::balanced_graph_states_up_to_5v}.} \label{fig::orthogonal_graph_states_C3} \end{figure*}% \begin{figure*}[ht] \centering \input{tabular_orthogonal_graph_states_C5} \caption{Orthogonal graph states derived from the Z-balanced graph state $|C_5\rangle$: The graph states in each cell are orthogonal. Their symmetric difference is identical to the cycle graph $C_5$, where $C_5$ is the fifth graph in Fig. 
\ref{fig::balanced_graph_states_up_to_5v}.} \label{fig::orthogonal_graph_states_C5} \end{figure}% \subsection{Schmidt decomposition} \label{sec::schmidt_decomposition} In this section, we discuss the Schmidt decomposition of graph states represented in the X-basis, which is derived via the X-chain factorization. The Schmidt decomposition of a graph state for an $A|B$-bipartition reads% \begin{equation} |G\rangle=\frac{1}{\sqrt{r_{S}}}\sum_{i=1}^{r_S}\ket{\phi_{i}^{(A)}}\ket{\psi_{i}^{(B)}}, \label{eq::Schmidt_decomposition_of_graph_states_general}% \end{equation} where $\braket{\phi_i^{(A)}|\phi_j^{(A)}}=\delta_{ij}$ and $\braket{\psi_i^{(B)}|\psi_j^{(B)}}=\delta_{ij}$. \hiddengls{SchmidtRank} Here $r_{S}$ is the Schmidt rank of the graph state $|G\rangle$ with respect to the partition $A$ versus $B$. Its value% \begin{equation} r_{S}=\left\vert S_{A}\right\vert :=\left\vert \left\{ s_{G}^{(\xi)}\in S_{G}:\text{supp}(s_{G}^{(\xi)})\subseteq A\right\} \right\vert \end{equation} is studied in Section III.B of Ref. \cite{HeinEisertBriegel2004-06} via the Schmidt decomposition of graph states in the Z-basis. The support $\mathrm{supp}(s_{G}^{(\xi)})$ is the set of vertices $\xi\cup c_{\xi}$ on which the stabilizer $s_{G}^{(\xi)}$ acts non-trivially (i.e. not as the identity). \bigskip \par We derive the Schmidt decomposition of graph states in the X-basis in the following steps. First, we generalize the X-chain factorization of graph states (Theorem \ref{theorem::graph_states_as_X-factorized_states}) to the X-chain factorization of arbitrary correlation states (Theorem \ref{theorem::X-factorization_of_correlation_states}). Second, we introduce three correlation subgroups, whose correlation states are $A|B$-biseparable (Lemma \ref{lemma::biseparable_AB-correlation_states}). 
Third, we prove the orthonormality of these correlation states (Lemma \ref{lemma::orthonormality_of_AB-correlation_states}). Finally, we arrive at the Schmidt decomposition in Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states}. The X-chain factorization of graph states in Theorem \ref{theorem::graph_states_as_X-factorized_states} can be generalized to correlation states (introduced in Eqs. \eqref{eq::def_correlation_states} and \eqref{eq::correlation_state_explicit_form}) as follows. \begin{theorem} [X-chain factorization of $\mathcal{K}$-correlation states]% \label{theorem::X-factorization_of_correlation_states} \Needspace*{10\baselineskip} Let $\left\langle\mathcal{K}_{1}\right\rangle ,\left\langle \mathcal{K}_{2}\right\rangle \subseteq\left\langle \mathcal{K}_{G}\right\rangle $ be two disjoint correlation subgroups of a graph state $|G\rangle$, and $\mathcal{K}=\mathcal{K}_1\cup\mathcal{K}_2$. Then the $\mathcal{K}$-correlation state is a superposition of $\mathcal{K}_{1}$-correlation states,% \begin{equation} |\psi_{\mathcal{K}}\left( \xi\right) \rangle =\frac{1}{2^{\left\vert \mathcal{K}_{2}\right\vert /2}}\sum_{\xi^{\prime}% \in\left\langle \mathcal{K}_{2}\right\rangle }|\psi_{\mathcal{K}_{1}}\left( \xi\Delta\xi^{\prime}\right) \rangle \end{equation} with $\xi\in\braket{\mathcal{K}_{G}}/\braket{\mathcal{K}} $ being an element of the quotient group. Theorem \ref{theorem::graph_states_as_X-factorized_states} is the special case $\braket{\mathcal{K}}=\left\langle \mathcal{K}_{1}\right\rangle \times\left\langle \mathcal{K}_{2}\right\rangle $ with $\mathcal{K}_{1}=\emptyset$ and $\mathcal{K}_{2}=\mathcal{K}_{G}$. \begin{proof} According to the definition in Eq. 
(\ref{eq::def_correlation_states}) it holds \begin{equation} |\psi_{\mathcal{K}_{1}\cup\mathcal{K}_{2}}\left( \xi\right) \rangle =s_{G}^{(\xi)}\prod_{\kappa\in\mathcal{K}_{2}}\frac{1+s_{G}^{(\kappa)}}% {\sqrt{2}}\prod_{\kappa\in\mathcal{K}_{1}}\frac{1+s_{G}^{(\kappa)}}{\sqrt{2}% }\left\vert i^{(x_{\Gamma})}\right\rangle . \end{equation} Due to the commutativity of the graph state stabilizers it follows% \begin{equation} |\psi_{\mathcal{K}}\left( \xi\right) \rangle=|\psi_{\mathcal{K}_{1}% \cup\mathcal{K}_{2}}\left( \xi\right) \rangle=\prod_{\kappa\in \mathcal{K}_{2}}\frac{1+s_{G}^{(\kappa)}}{\sqrt{2}}|\psi_{\mathcal{K}_{1}% }\left( \xi\right) \rangle. \end{equation} According to Proposition \ref{prop::isomorphism_of_vertex-induction_operation}% , $s_{G}^{(\kappa_{1}\Delta\kappa_{2})}=s_{G}^{(\kappa_{1})}s_{G}^{(\kappa _{2})}$, the product of $(1+s_{G}^{(\kappa)})$ with $\kappa\in\mathcal{K}_{2}$ becomes the sum of the stabilizers $s_{G}^{(\xi^{\prime})}$ with $\xi^{\prime }\in\left\langle \mathcal{K}_{2}\right\rangle $. \begin{align} |\psi_{\mathcal{K}}\left( \xi\right) \rangle & =\frac{1}{2^{\left\vert \mathcal{K}_{2}\right\vert /2}}\sum_{\xi^{\prime}\in\left\langle \mathcal{K}_{2}\right\rangle }s_{G}^{(\xi^{\prime})}|\psi_{\mathcal{K}_{1}% }\left( \xi\right) \rangle\nonumber\\ & =\frac{1}{2^{\left\vert \mathcal{K}_{2}\right\vert /2}}\sum_{\xi^{\prime }\in\left\langle \mathcal{K}_{2}\right\rangle }|\psi_{\mathcal{K}_{1}}\left( \xi\Delta\xi^{\prime}\right) \rangle, \end{align} where the second equality is a result of property \ref{item::correlation_states_property_2} in Corollary \ref{coro::properties_of_correlation_states}. \end{proof}% \end{theorem} \begin{algorithm}[Factorization diagram of correlation states] \label{algo::factorization_diagram_of_correlation_states} \Needspace*{8\baselineskip} Theorem \ref{theorem::X-factorization_of_correlation_states} can be interpreted by the factorization diagram in Fig. \ref{fig::factorization_diagram_correlation_states}. 
\begin{enumerate} \item One decomposes the group $\mathcal{P}(V_G)$ into the direct product of the X-chain group $\braket{\Gamma_G}$ and the correlation group $\braket{\mathcal{K}_G}$. \item From the X-chain group $\braket{\Gamma_G}$, one obtains the set of X-chain states $\Psi^{\emptyset}_{\mathcal{K}_G}$. \item From the correlation subgroup $\braket{\mathcal{K}_1}$, one obtains the $\mathcal{K}_1$-correlation states via the superposition of the X-chain states in $\Psi^{\emptyset}_{\mathcal{K}_G}$ over $\braket{\mathcal{K}_1}$. \item Finally, the correlation state $\ket{\psi_{\mathcal{K}_1\cup \mathcal{K}_2}(\xi)}$ is the superposition of the $\mathcal{K}_1$-correlation states $\ket{\psi_{\mathcal{K}_1}(\xi\Delta\xi')}\in\Psi^{(\mathcal{K}_1)}_{\mathcal{K}_G}$ over the correlation subgroup $\xi'\in\braket{\mathcal{K}_2}$ (Theorem \ref{theorem::X-factorization_of_correlation_states}). \end{enumerate} \end{algorithm}% \begin{figure}[ht!] \centering \corrstatediagram[heightScale=0.7] \caption{\colorfig The X-chain factorization diagram of correlation states: A graphical summary of Theorem \ref{theorem::X-factorization_of_correlation_states}. The $\xi$ in $|\psi_{\mathcal{K}_{1}\cup\mathcal{K}_{2}}\left( \xi\right) \rangle$ are elements of the quotient group, $\xi\in \braket{\mathcal{K}_G}/\braket{\mathcal{K}_1\cup \mathcal{K}_2}$. } \label{fig::factorization_diagram_correlation_states} \end{figure} \noindent The subspace of X-chain states $\mathrm{span}(\Psi_{K_{G}}^{(\emptyset)})$ is projected via the $\braket{\mathcal{K}_1}$-stabilizers onto the space spanned by the $\mathcal{K}_{1}$-correlation states $|\psi_{\mathcal{K}_{1}}\left( \xi\right) \rangle$. The subspace $\mathrm{span}(\Psi_{K_{G}% }^{(\mathcal{K}_{1})})$ is then projected via the $\braket{\mathcal{K}_2}$-stabilizers onto the $\mathcal{K}_{1}\cup\mathcal{K}_{2}$-correlation states $|\psi_{\mathcal{K}_{1}\cup\mathcal{K}_{2}}\left( \xi\right) \rangle$. 
With this theorem, one can obtain the Schmidt decomposition of graph states by an appropriate selection of the correlation subgroup $\mathcal{K}_{1}$, such that the corresponding $\mathcal{K}_{1}$-correlation states are $A|B$-separable and mutually orthonormal. \bigskip \par Let $|G\rangle$ be a graph state with the correlation group $\left\langle \mathcal{K}_{G}\right\rangle $ and $A|B$ be a bipartition of its vertices. In order to find the Schmidt decomposition, we select $\braket{K_1}$ as the group generated by three disjoint correlation subgroups, specified as follows. \begin{enumerate} \item The correlation subgroup whose elements possess a correlation index only in $B$:% \begin{equation} \langle\mathcal{K}^{(B)}\rangle:=\left\{ \xi\in\left\langle \mathcal{K}% _{G}\right\rangle :c_{\xi}\subseteq B\right\} .\label{eq::def_KappaB_in_AB_bipartition}% \end{equation} \hiddengls{corrGroupToB} \item The correlation subgroup whose elements possess a correlation index only in $A$ and consist only of vertices in $A$: \begin{equation} \langle\mathcal{K}_{A}^{(A)}\rangle:=\left\{ \xi\in\left\langle \mathcal{K}_{G}\right\rangle :c_{\xi}\subseteq A,\xi\subseteq A\right\} .\label{eq::def_KappaAA_in_AB_bipartition}% \end{equation} \hiddengls{corrGroupAToA} \item The correlation subgroup whose elements possess a correlation index only in $A$, are not fully contained in $A$, and share an even number of edges with every $\beta \in\left\langle \mathcal{K}^{(B)}\right\rangle $:% \begin{align} \langle\mathcal{K}_{\sim B}^{(A)}\rangle & :=\left\{ \xi \in\left\langle \mathcal{K}_{G}\right\rangle :c_{\xi}\subseteq A,\xi \not \subseteq A\right\} \cap\{\xi\in\left\langle \mathcal{K}_{G}% \right\rangle :\nonumber\\ & \left\vert E_{G}\left( \xi:\beta\right) \right\vert \overset {\operatorname{mod}2}{=}0\text{, for all }\beta\in\left\langle \mathcal{K}% ^{(B)}\right\rangle \}.\label{eq::def_KappaA_B_in_AB_bipartition}% \end{align} \hiddengls{corrGroupSimBToA} \end{enumerate} These three groups form a 
special group \begin{equation} \langle\mathcal{K}^{A\rfloor B}\rangle:=\langle\mathcal{K}_{A}^{(A)}% \cup\mathcal{K}_{\sim B}^{(A)}\rangle\times\langle\mathcal{K}_{{}}% ^{(B)}\rangle \end{equation} called \emph{$A\rfloor B$-correlation group}. \hiddengls{corrGroupASepB}\hiddengls{corrASepBState} (The notation ``$A\rfloor B$'' is used, as the group is not symmetric with respect to exchanging $A$ and $B$.) We will show in Lemma \ref{lemma::biseparable_AB-correlation_states} that all $A\rfloor B$-correlation states $|\psi_{\mathcal{K}^{A\rfloor B}}(\xi)\rangle$ with $\xi\in\left\langle \mathcal{K}_{G}\right\rangle /\langle\mathcal{K}^{A\rfloor B}\rangle$ are $A|B$-separable. The corresponding quotient group is denoted as% \begin{equation} \langle\mathcal{K}^{A\rightharpoonup B}\rangle:=\left\langle \mathcal{K}% _{G}\right\rangle /\langle\mathcal{K}^{A\rfloor B}\rangle \end{equation} and called \emph{$\left( A\rightharpoonup B\right)$-correlation group}. \hiddengls{corrGroupAandB} (The notation $A\rightharpoonup B$ is introduced, as there is again no symmetry under exchange of $A$ and $B$: the correlation index $c_{\xi}$ of $\xi\in\braket{\mathcal{K}^{A\rightharpoonup B}}$ always lies inside $A$.) We will show in Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states} that the Schmidt rank of $\ket{G}$ is equal to the cardinality $|\braket{\mathcal{K}^{A\rightharpoonup B}}|$. This means that the correlation subgroup $\mathcal{K}^{A\rightharpoonup B}$ generates the $A|B$ correlations in the graph state $\ket{G}$. Note that for all the graphs we investigated, the correlation subgroup $\langle\mathcal{K}_{\sim B}^{(A)}\rangle$ turned out to be trivial. This suggests that $\langle\mathcal{K}_{\sim B}^{(A)}\rangle$ may be trivial for every graph state; however, this remains an open question. 
\def\{\{1,2,3\}\}{\{\{1,2,3\}\}} \def\{\{2,3\}\}{\{\{2,3\}\}} \def\{\{4,5\},\{2,3,4\}\}{\{\{4,5\},\{2,3,4\}\}} \def\emptyset{\emptyset} \def\{\{2\}\}{\{\{2\}\}} \par \begin{figure*}[ht!] \centering \subfloat[]{ \includegraphics[width=0.2\textwidth]{house_graph} \label{fig::house_graph} } \subfloat[]{ \includegraphics[width=0.8\textwidth]{house_graph_A-B_factorization} \label{fig::house_graph_binary_representation} } \\ \subfloat[]{ \ABfactorizationdiagram[ GammaG={\{\{1,2,3\}\}}, KappaAA={\{\{2,3\}\}}, KappaB={\{\{4,5\},\{2,3,4\}\}}, KappaAfromB={\{\{2\}\}}, xG=10000, KappaAfromBNum=1, widthScale=1.1 ] \label{fig::house_graph_factorization_diagram} } \\ \caption{\colorfig $A\rfloor B$-factorization of graph states: (a) The graph state $\ket{G_{\mathrm{House}}}$ corresponding to a ``St. Nicholas's house'' is divided in two subsystems $A=\{1,2,3\}$ and $B=\{4,5\}$. (b) The binary representation of the X-chain factorization (the upper row) and $A\rfloor B$-factorization (the lower row). (c) The $A\rfloor B$-factorization diagram (see Algorithm \ref{algo::factorization_diagram_AB_correlation}) of the ``St. Nicholas's house'' graph state $\ket{G_{\mathrm{House}}}$.} \label{fig::house_graph_A-B_factorization} \end{figure*} \par In this $A\rfloor B$-factorization, the correlation group $\mathcal{K}_{G}$ is divided into four subgroups. Let us take the graph of ``St. Nicholas's house'' in Fig. \ref{fig::house_graph} as an example. This ``house'' state $|G_{\mathrm{House}}\rangle$ is divided into the bipartition $A=\{1,2,3\}$ versus $B=\{4,5\}$. The correlation group factorization is shown in Fig. \ref{fig::house_graph_binary_representation}. The X-chain group of $\ket{G_{\mathrm{House}}}$ is $\{\{1,2,3\}\}$. The X-resources are factorized by the X-chain group, $\mathcal{P}(V_{G})=\braket{\Gamma_G}\times \braket{\mathcal{K}_G}$, see the upper row in Fig. \ref{fig::house_graph_binary_representation}. 
The array is the binary representation of the stabilizers induced by the X-chain generators $\Gamma=\{\{1,2,3\}\}$ and the correlation group generators $\mathcal{K}_G=\{\{2\},\{3\},\{4\},\{5\}\}$; it corresponds to the incidence structure on its right-hand side. In the second row of Fig. \ref{fig::house_graph_binary_representation}, the X-resources whose correlation indices lie in the system $B$ are first grouped together into $\braket{\mathcal{K}^{(B)}}=\braket{\{\{4,5\},\{2,3,4\}\}}$. Second, the X-resources $\xi$, whose correlation indices $c_{\xi}$ and $\xi$ itself are both contained in $V_{A}$, are grouped into $\braket{\mathcal{K}_{A}^{(A)}}=\braket{\{\{2,3\}\}}$. Third, the group $\mathcal{K}_{\sim B}^{(A)}$ is empty. The $\left( A\rightharpoonup B\right) $-correlation group is then $\braket{\mathcal{K}^{A\rightharpoonup B}}=\braket{\{\{2\}\}}$. \par These three special correlation subgroups, $\braket{\mathcal{K}_{A}^{(A)}},$ $\braket{\mathcal{K}_{\sim B}^{(A)}}$ and $\braket{\mathcal{K}^{(B)}}$, project the space spanned by the X-chain states onto a subspace spanned by their correlation states $|\psi_{A\rfloor B}(\xi)\rangle$. These states are $A|B$-separable, as stated in the following lemma. 
\begin{lemma} [$A|B$-Separability of $A\rfloor B$-correlation states]% \label{lemma::biseparable_AB-correlation_states} \Needspace*{8\baselineskip} For $\xi\in\langle \mathcal{K}_{G}^{A\rightharpoonup B}\rangle$, the $\left( A\rightharpoonup B\right) $-correlation states% \begin{equation} |\psi_{A\rfloor B}(\xi)\rangle=\pi_{G}\left( \xi\right) |\phi_{A\rfloor B}^{(A)}(\xi)\rangle|\phi_{A\rfloor B}^{(B)}(\xi)\rangle \label{eq::def_AB-correlation_states}% \end{equation} are $A|B$-separable with $|\phi_{A\rfloor B}^{(A)}(\xi)\rangle:=|\psi _{\mathcal{K}_{A}^{(A)}\cup\mathcal{K}_{\sim B}^{(A)}}^{(A)}(\xi)\rangle$ and $|\phi_{A\rfloor B}^{(B)}(\xi)\rangle:=|\psi_{\mathcal{K}^{(B)}}^{(B)}(\xi)\rangle$ being the $( \mathcal{K}_{A}^{(A)}\cup\mathcal{K}% _{\sim B}^{(A)})$- and $\mathcal{K}^{(B)}$-correlation states projected into the subspaces of $A$ and $B$, respectively. \hiddengls{corrAsepBStateOnA}\hiddengls{corrAsepBStateOnB} \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{lemma} \noindent Note that $|\psi_{A\rfloor B}(\xi)\rangle$ will be shown to be the Schmidt basis in Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states}. There, one will also see that the global phase $\pi_G(\xi)$ ensures positive Schmidt coefficients. \bigskip \par Let us continue to consider the ``St. Nicholas's house'' state as an example. According to Proposition \ref{prop::X-chain_states_in_X-basis}, the fundamental X-chain state of $|G_{\text{House}}\rangle$ is $|i^{x_{\Gamma}}% \rangle=|10000\rangle$. 
Then from the $\mathcal{K}_{A}^{(A)}$-correlation states, \def\phiAempty{\frac{|100\rangle -|111\rangle }{\sqrt{2}}}% \def\phiAone{\frac{|001\rangle +|010\rangle }{\sqrt{2}}}% \def\phiBempty{\frac{|00\rangle -|01\rangle -|10\rangle -|11\rangle}{2}}% \def\phiBone{\frac{-|00\rangle-|01\rangle-|10\rangle+|11\rangle}{2}}% \def\xGammaA{\ket{100}}% \def\xGammaB{\ket{00}}% \begin{align} |\psi_{\mathcal{K}_{A}^{(A)}}\left( \emptyset\right) \rangle & =|\phi_{A\rfloor B}^{(A)}(\emptyset)\rangle \otimes\xGammaB\text{ and}\\ |\psi_{\mathcal{K}_{A}^{(A)}}\left( \{2\}\right) \rangle & =|\phi_{A\rfloor B}^{(A)}\left( \{2\}\right)\rangle\otimes\xGammaB\text{,}% \end{align} one can read off% \begin{align} |\phi_{A\rfloor B}^{(A)}(\emptyset)\rangle & =\phiAempty \text{ and }\\ |\phi_{A\rfloor B}^{(A)}\left( \{2\}\right) \rangle & =\phiAone. \end{align} From the $\mathcal{K}^{(B)}$-correlation states, \begin{align} |\psi_{\mathcal{K}^{(B)}}\left( \emptyset\right) \rangle & =\xGammaA\otimes |\phi_{A\rfloor B}^{(B)}(\emptyset)\rangle \text{ and}\\ |\psi_{\mathcal{K}^{(B)}}\left( \{2\}\right) \rangle & =\xGammaA\otimes |\phi_{A\rfloor B}^{(B)}\left( \{2\}\right) \rangle ,% \end{align} one can read off% \begin{align} |\phi_{A\rfloor B}^{(B)}(\emptyset)\rangle & =\phiBempty\text{ and }\\ |\phi_{A\rfloor B}^{(B)}\left( \{2\}\right) \rangle & =\phiBone. \end{align} According to Lemma \ref{lemma::biseparable_AB-correlation_states}, $A\rfloor B$-correlation states are \begin{align} & |\psi_{A\rfloor B}(\emptyset)\rangle\nonumber\\ & =\left( \phiAempty \right) \left( \phiBempty \right) \label{eq::house_state_A_B_correlation_state_1}% \end{align} and since $\pi_{G}(\{2\})=1$,% \begin{align} & |\psi_{A\rfloor B}(\{2\})\rangle\nonumber\\ & =\left( \phiAone \right) \left( \phiBone \right). \label{eq::house_state_A_B_correlation_state_2}% \end{align} \bigskip \par Orthonormality of the states within the subspaces still needs to be verified. 
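As a quick numerical sanity check (a Python sketch; the dictionary keys and the ordering of computational-basis amplitudes are our own bookkeeping, not notation from the text), the four component vectors read off above are indeed orthonormal within their respective subspaces:

```python
from math import sqrt, isclose

r2 = 1 / sqrt(2)
# Components of the A]B-correlation states of the house state,
# written as amplitude vectors in computational-basis ordering.
phi_A = {
    "empty": [0, 0, 0, 0, r2, 0, 0, -r2],   # (|100> - |111>)/sqrt(2)
    "{2}":   [0, r2, r2, 0, 0, 0, 0, 0],    # (|001> + |010>)/sqrt(2)
}
phi_B = {
    "empty": [0.5, -0.5, -0.5, -0.5],       # (|00> - |01> - |10> - |11>)/2
    "{2}":   [-0.5, -0.5, -0.5, 0.5],       # (-|00> - |01> - |10> + |11>)/2
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for phi in (phi_A, phi_B):
    assert isclose(dot(phi["empty"], phi["empty"]), 1.0)               # normalized
    assert isclose(dot(phi["{2}"], phi["{2}"]), 1.0)                   # normalized
    assert isclose(dot(phi["empty"], phi["{2}"]), 0.0, abs_tol=1e-12)  # orthogonal
```

This is exactly the property formalized for general graph states in the lemma below.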
This holds for the explicit example $\ket{G_{\mathrm{House}}}$ in Eqs. \eqref{eq::house_state_A_B_correlation_state_1} and \eqref{eq::house_state_A_B_correlation_state_2}. In the general case, the orthonormality is shown in the following lemma. \begin{lemma}[Orthonormality of $(A\rightharpoonup B)$-correlation states] \label{lemma::orthonormality_of_AB-correlation_states} \Needspace*{8\baselineskip} The components of the $A\rfloor B$-correlation states on the subspaces $A$ and $B$, $|\phi_{A\rfloor B}^{(A)}% (\xi)\rangle$ and $|\phi_{A\rfloor B}^{(B)}(\xi)\rangle$, are orthonormal with respect to $\xi\in\langle\mathcal{K}^{A\rightharpoonup B}\rangle$ within the subspaces $A$ and $B$, respectively; in particular, \begin{equation} \langle\phi_{A\rfloor B}^{(A)}(\xi_{1})|\phi_{A\rfloor B}^{(A)}(\xi _{2})\rangle=0 \end{equation} and% \begin{equation} \langle\phi_{A\rfloor B}^{(B)}(\xi_{1})|\phi_{A\rfloor B}^{(B)}(\xi _{2})\rangle=0 \end{equation} for all $\xi_{1},\xi_{2}\in\langle\mathcal{K}^{A\rightharpoonup B}\rangle$ with $\xi_{1}\not =\xi_{2}$. \begin{proof} See Appendix \ref{sec::graph_states_proofs}. \end{proof} \end{lemma} \bigskip \par We can now construct the Schmidt decomposition of graph states with $A\rfloor B$-correlation states as follows. 
\begin{theorem} [Schmidt decomposition in $A\rfloor B$-correlation states]% \label{theorem::Schmidt_decomposition_in_AB-correlation_states} \Needspace*{12\baselineskip}% The Schmidt decomposition of a graph state $|G\rangle$ is the superposition of its $A\rfloor B$-correlation states, \begin{equation} |G\rangle=\frac{1}{2^{\left\vert \mathcal{K}^{A\rightharpoonup B}\right\vert /2}}\sum_{\xi\in\left\langle \mathcal{K}^{A\rightharpoonup B}\right\rangle }\pi _{G}\left( \xi\right) |\phi_{A\rfloor B}^{(A)}(\xi)\rangle|\phi_{A\rfloor B}^{(B)}(\xi)\rangle.\label{eq::theorem_schmidt_decomp_in_AB_states}% \end{equation} The Schmidt rank $r_S$ and the geometric measure of $A|B$-bipartite entanglement \cite{Shimony1995-BiGeoM, BarnumLinden2001-GeoMeasure} can be expressed as% \begin{equation} \log_2(r_{S})=\mathcal{E}_{g}^{(A|B)}=\left\vert \mathcal{K}^{A\rightharpoonup B}\right\vert \label{eq::theorem_schmidt_rank_of_graph_states}% \end{equation} with $\mathcal{E}_{g}^{(A|B)}\left( |G\rangle\right) :=-2\log_{2}\left( \min_{\psi }\left\vert \left\langle \psi_{A}\psi_{B}|G\right\rangle \right\vert \right) $. \hiddengls{BiEntGeoMeas} \begin{proof} Employing Theorems \ref{theorem::graph_states_as_X-factorized_states} and \ref{theorem::X-factorization_of_correlation_states} together with Lemma \ref{lemma::biseparable_AB-correlation_states} one can prove that the graph state $|G\rangle$ is equal to the superposition of all biseparable $A\rfloor B$-correlation states $|\psi_{A\rfloor B}(\xi)\rangle=\pi_{G}(\xi )|\phi_{A\rfloor B}^{(A)}(\xi)\rangle|\phi_{A\rfloor B}^{(B)}(\xi)\rangle$. As a result of the orthonormality of $|\phi_{A\rfloor B}^{(A)}(\xi)\rangle$ and $|\phi_{A\rfloor B}^{(B)}(\xi)\rangle$ (Lemma \ref{lemma::orthonormality_of_AB-correlation_states}), Eq. (\ref{eq::theorem_schmidt_decomp_in_AB_states}) is a Schmidt decomposition. 
The bipartite geometric measure of entanglement is determined by the maximum singular value $s_{\max}$ of the matrix $M_{ij}:=\left\{ \langle i_{A}% j_{B}|G\rangle\right\} _{i,j}$ with $i=0,...,2^{\left\vert V_{A}\right\vert }-1$ and $j=0,...,2^{\left\vert V_{B}\right\vert }-1$ \cite{Shimony1995-BiGeoM}. For the bipartite case the singular value decomposition is equivalent to the Schmidt decomposition. Since the Schmidt coefficients are all $2^{-\left\vert \mathcal{K}^{A\rightharpoonup B}\right\vert /2}$, it follows that the geometric measure of bipartite entanglement of a graph state, $\mathcal{E}^{(A|B)}_{g}:=-2\log_{2}\left( s_{\max}\right) $, is equal to the logarithm of the Schmidt rank, i.e. $\log_2(r_{S})=\left\vert \mathcal{K}^{A\rightharpoonup B}\right\vert $. As a result, the $A\rfloor B$-correlation states $\pi_{G}(\xi)|\phi_{A\rfloor B}^{(A)}% (\xi)\rangle|\phi_{A\rfloor B}^{(B)}(\xi)\rangle$ are the $A|B$-separable states closest to $\ket{G}$. \end{proof} \end{theorem} \par According to \cite{GraphStateReviews2006}, the logarithm of the Schmidt rank is given by $\left\vert V_{A}\right\vert - \log_{2}\left\vert \left\{ \sigma\in S_{G}:\text{supp}\left( \sigma\right) \subseteq V_{A}\right\} \right\vert $ with $\left\vert V_{A}\right\vert \leq\left\vert V_{B}\right\vert $, which equals $\left\vert V_{A}\right\vert -\left\vert \mathcal{K}_{A}^{(A)}\right\vert -\left\vert \Gamma_{G}\cap \mathcal{P}\left( V_{A}\right) \right\vert $ in the language of the X-chain factorization. The Schmidt rank is also equal to the cardinality of the matching \footnote{Note that the matching between two parties is not unique, but its cardinality is fixed.} between $A$ and $B$ \cite{HajdusekMurao2013-Direct}. The matching is a set of edges between $A$ and $B$ which do not mutually share any common vertex \cite{Diestel_GraphTheory}. Hence the cardinality $\left\vert \mathcal{K}^{A\rightharpoonup B}\right\vert $ should equal the cardinality of the matching. However, a proof of this equality remains an open question. 
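For the house state, the counting in the theorem can be checked with a short Python sketch. Since the decomposition is already in product form with orthonormal factors, the Schmidt coefficients are simply the expansion coefficients $1/\sqrt{2}$ (an assumption justified by the lemma above):

```python
from math import log2, sqrt, isclose

# Schmidt coefficients of |G_House>: both correlation states carry 1/sqrt(2)
coeffs = [1 / sqrt(2), 1 / sqrt(2)]

r_S = sum(1 for c in coeffs if c > 1e-12)   # Schmidt rank
E_g = -2 * log2(max(coeffs))                # geometric measure -2 log2(s_max)

assert r_S == 2
assert isclose(log2(r_S), E_g)   # log2(r_S) = E_g
assert isclose(E_g, 1.0)         # = |K^{A->B}|, generated by the single element {2}
```

The result matches the generating set $\mathcal{K}^{A\rightharpoonup B}=\{\{2\}\}$ of cardinality one found for the house state.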
\bigskip \par The result of this section can be summarized in an $A\rfloor B$-factorization diagram. \begin{algorithm}[Factorization diagram: Schmidt decomposition of graph states] \label{algo::factorization_diagram_AB_correlation} \Needspace*{8\baselineskip} The Schmidt decomposition of graph states in Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states} can be summarized in the factorization diagram of Fig. \ref{fig::factorization_diagram_AB_correlation}. \begin{enumerate} \item The group $\mathcal{P}(V_G)$ is decomposed into the direct product of $\braket{\Gamma_G}$, $\braket{\mathcal{K}^{A\rfloor B}}=\braket{\mathcal{K}_A^{(A)}\cup\mathcal{K}_{\sim B}^{(A)}}\times\braket{\mathcal{K}^{(B)}}$ and $\braket{\mathcal{K}^{A\rightharpoonup B}}$. \item Via the X-chain group $\braket{\Gamma_G}$, one obtains the set of X-chain states $\Psi^{\emptyset}$. \item The Schmidt basis states $\ket{\phi_{A\rfloor B}^{(A)}(\xi)}$ are constructed from the superposition of states in $\Psi^{\emptyset}$ inside the correlation group $\braket{\mathcal{K}_A^{(A)}\cup\mathcal{K}_{\sim B}^{(A)}}$ (Lemma \ref{lemma::biseparable_AB-correlation_states}). \item Similar to the previous step, one obtains the states $\ket{\phi_{A\rfloor B}^{(B)}(\xi)}$ via the correlation group $\braket{\mathcal{K}^{(B)}}$ (Lemma \ref{lemma::biseparable_AB-correlation_states}). \item Together with the stabilizer-parities $\pi_G(\xi)$, the set of $A\rfloor B$-correlation states $\Psi^{(A\rfloor B)}$ (Lemma \ref{lemma::biseparable_AB-correlation_states}) is constructed. \item Via the $(A\rightharpoonup B)$-correlation group $\braket{\mathcal{K}^{A\rightharpoonup B}}$, one obtains the Schmidt decomposition from the superposition of states in $\mathrm{span}(\Psi^{(A\rfloor B)})$ (Lemma \ref{lemma::orthonormality_of_AB-correlation_states} and Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states}). \end{enumerate} \end{algorithm} \begin{figure*}[ht!] 
\centering \ABfactorizationdiagram \caption{\colorfig X-chain factorization diagram for the Schmidt decomposition of graph states in the X-basis. A graphical summary of Lemmas \ref{lemma::biseparable_AB-correlation_states} and \ref{lemma::orthonormality_of_AB-correlation_states}, and Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states}. } \label{fig::factorization_diagram_AB_correlation} \end{figure*} \par The $A\rfloor B$-factorization diagram of $|G_{\text{House}}\rangle$ is shown in Fig. \ref{fig::house_graph_factorization_diagram}. As a result of this theorem, the Schmidt decomposition of this state is \begin{equation} |G_{\text{House}}\rangle=\frac{1}{\sqrt{2}}\left( |\psi_{A\rfloor B}(\emptyset)\rangle+|\psi_{A\rfloor B}(\left\{ 2 \right\} )\rangle \right) \end{equation} with $|\psi_{A\rfloor B}(\emptyset)\rangle$ and $|\psi_{A\rfloor B}(\left\{2\right\} )\rangle$ being given in Eqs. \eqref{eq::house_state_A_B_correlation_state_1} and \eqref{eq::house_state_A_B_correlation_state_2}. The house state has Schmidt rank $r_{S}=2$ and the geometric measure of bipartite entanglement $\mathcal{E}_g^{(A|B)}=-2\log_{2}\left( \min_{\psi}\left\vert \left\langle \psi _{A}\psi_{B}|G_{\text{House}}\right\rangle \right\vert \right) = 1 $. \subsubsection{Entanglement localization of graph states protected against errors} \label{sec::unilateral_projection_against_errors} \begin{figure*}[ht!] \centering \subfloat[]{ \includegraphics[width=0.4\textwidth]{Bistar_error_correction} \label{fig::projection_against_error_bistar} } \subfloat[]{ \includegraphics[width=0.55\textwidth]{BiStar_AB_factorization} \label{fig::projection_against_error_AB-factorization} }% \caption{\colorfig An example of entanglement localization of graph states protected against errors: (a) Local X-measurements on subsystem $A$ project the graph state $\ket{G}$ onto the maximally entangled state $\ket{\phi_{A\rfloor B}^{(B)}(\xi)}$ for subsystem $B$. 
Under the assumption of a single qubit error, the outcome $\ket{m_X^{(A)}}=\ket{110}$ indicates a Z-error on vertex $3$. Alice sends Bob the corrected outcome $(111)$, such that Bob knows from the Schmidt decomposition that he possesses the state $\ket{\phi_{A\rfloor B}^{(B)}(\{4\})}$. (b) Binary representation and incidence structure after $A\rfloor B$-factorization. } \label{fig::projection_against_error} \end{figure*} In this section, we consider the localization of entanglement \cite{PoppVMCirac2005-LocalizableEnt} on graph states shared between Alice and Bob ($A|B$-bipartition); see Fig. \ref{fig::projection_against_error_bistar}. Alice measures the graph state with Pauli measurements on her system and then tells Bob her measurement results via a classical channel. In the end, Bob should possess a bipartite maximally entangled state that is known to him. A connected graph state is maximally ``connected'' with respect to entanglement localization if every pair of vertices can be projected onto a Bell pair with local measurements \cite{GraphStateReviews2006}. The simplest approach to localize the entanglement of $\ket{G}$ in the subsystem $\{B_1,B_2\}$ is to find a path between $B_1$ and $B_2$, remove the vertices outside the path with Z-measurements, and finally measure each vertex on the path in the X-direction. However, the resulting state depends on the measurement outcomes. If errors occur in Alice's measurements, Bob is left with a wrong state. Therefore, error correction is a desirable feature in the entanglement localization of graph states. \par Graph states are stabilizer states. These states can be exploited as quantum stabilizer codes \cite{Gottesman1996-QECSQHB, Gottesman1997-phdQEC, NielsenChuang2004-QCQI, GraphStateReviews2006}, which are linear codes that protect against errors. In the Schmidt decomposition, the measurement outcomes on the system $A$ imply which states are projected in the system $B$. 
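The correction step described in the figure caption, where a single Z-error flips one bit of Alice's X-measurement record, amounts to a majority vote over her outcome string. A minimal Python sketch (the function name is ours):

```python
def correct_x_outcome(bits: str) -> str:
    """Decode Alice's X-measurement record with the repetition code,
    assuming at most one Z-error flipped a single outcome bit."""
    majority = '1' if bits.count('1') > len(bits) // 2 else '0'
    return majority * len(bits)

# caption example: outcome 110 is corrected to the codeword 111
assert correct_x_outcome("110") == "111"
assert correct_x_outcome("000") == "000"
```

The repetition-code structure behind this decoding is explained next.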
The existence of X-chains on Alice's side can provide simple repetition codes as the Schmidt basis in the Schmidt decomposition in the X-basis. Therefore, instead of removing the vertices outside a selected path between $B_1$ and $B_2$, we will make X-measurements on them to exploit the X-chains for error correction. \par The graph state $\ket{G}$ in Fig. \ref{fig::projection_against_error_bistar} is taken as an example. This state has the X-chain generating set $\Gamma_G=\{\{1,2\},\{1,3\},\{4,5\}\}$. The generating sets of the three correlation groups (Eqs. \eqref{eq::def_KappaB_in_AB_bipartition}, \eqref{eq::def_KappaAA_in_AB_bipartition} and \eqref{eq::def_KappaA_B_in_AB_bipartition}) for the Schmidt decomposition are $\mathcal{K}_A^{(A)}=\mathcal{K}_{\sim B}^{(A)}=\emptyset$ and $\mathcal{K}^{(B)}=\{\{1\}\}$, while the generating set of the $(A\rightharpoonup B)$-correlation group is $\mathcal{K}^{A\rightharpoonup B}=\{\{4\}\}$. According to Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states} and with the help of Algorithm \ref{algo::factorization_diagram_AB_correlation}, one has \begin{equation} |\psi_{A\rfloor B}(\emptyset)\rangle=\left\vert 000\right\rangle \frac{\left\vert 00\right\rangle +\left\vert 11\right\rangle }{\sqrt{2}}% \end{equation} and% \begin{equation} |\psi_{A\rfloor B}(\{4\})\rangle=\left\vert 111\right\rangle \frac{\left\vert 00\right\rangle -\left\vert 11\right\rangle }{\sqrt{2}}. \end{equation} As a result, the Schmidt decomposition of the graph state is \begin{equation} |G\rangle=\frac{1}{\sqrt{2}}\left( \left\vert 000\right\rangle \frac {\left\vert 00\right\rangle +\left\vert 11\right\rangle }{\sqrt{2}}+\left\vert 111\right\rangle \frac{\left\vert 00\right\rangle -\left\vert 11\right\rangle }{\sqrt{2}}\right) .\label{eq::Schmidt_decomposition_in_unilateral_projection}% \end{equation} In this example, one observes that there are $2$ X-chain generators $\{1,2\}$ and $\{1,3\}$ on Alice's $3$-qubit system. 
This encodes the following $[3,1,3]$ repetition code \cite{Gottesman1996-QECSQHB, Gottesman1997-phdQEC, NielsenChuang2004-QCQI} in the Schmidt vectors on Alice's system: \begin{equation} |\phi_{A\rfloor B}^{(A)}(\emptyset)\rangle=|000\rangle \text{ and } |\phi_{A\rfloor B}^{(A)}(\{4\})\rangle=|111\rangle. \end{equation} This code has Hamming distance $3$; thus, a single Z-error can be corrected. After a measurement in the X-basis, Alice can therefore correct her result before sending it to Bob. In this approach, Bob reliably learns which maximally entangled state he possesses after Alice's measurement. Although the repetition code cannot correct phase errors (X-errors, with respect to X-measurements), it is already sufficient for our task, since a phase error on Alice's side does not change the measurement outcomes. \par This application may be useful for quantum repeaters \cite{BriegelDCZoller1998-QRepeater}. The parties $B_1$ and $B_2$ can be at a large distance, such that they are not able to directly create an entangled state between them. In this case, they need the help of Alice as a repeater station to project the entanglement onto $B_1$ and $B_2$. \section{Conclusions} In this paper, we discussed properties of the representation of graph states in the X-basis. We introduced the framework of X-resources and correlation indices and linked them to the binary representation of graph states. A special type of X-resource was defined, the X-chain: a subset of vertices of a given graph such that the product of the stabilizer generators associated with these vertices contains only $\sigma_X$-Pauli operators. The set of X-chains of a graph state is a group, which can be calculated efficiently \cite{WuKampermannBruss2015-XChainAlgo}. The X-chain groups revealed structures of graph states and showed how to distinguish them by local $\sigma_X$ measurements. 
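The efficiency claim can be made concrete. With the standard generators $g_a=\sigma_X^{(a)}\prod_{b\in N(a)}\sigma_Z^{(b)}$, a vertex subset $C$ yields an X-only product exactly when the adjacency matrix annihilates its indicator vector over GF(2), so finding the X-chain group reduces to a nullspace computation. The following Python sketch illustrates this (our inference from the generator form; the cited algorithm \cite{WuKampermannBruss2015-XChainAlgo} is the authoritative treatment):

```python
def gf2_nullspace(adj):
    """Basis of the GF(2) kernel of a 0/1 adjacency matrix;
    each basis vector is the indicator vector of an X-chain generator."""
    rows, cols = len(adj), len(adj[0])
    m = [row[:] for row in adj]
    pivots, r = [], 0
    for c in range(cols):                       # Gaussian elimination over GF(2)
        piv = next((i for i in range(r, rows) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(rows):
            if i != r and m[i][c]:
                m[i] = [x ^ y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = m[i][f]
        basis.append(v)
    return basis

# path graph 1-2-3: g1*g3 = X1 X3, so {1,3} is the only X-chain generator
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
assert gf2_nullspace(path) == [[1, 0, 1]]
```

For a graph with $n$ vertices the elimination runs in $O(n^3)$ bit operations, consistent with the efficiency stated above.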
We introduced the X-chain factorization (Lemma \ref{lemma::X-chain_factorized_group} and Theorem \ref{theorem::graph_states_as_X-factorized_states}) for deriving the representation of graph states in the X-basis, and it was shown that a graph state can be represented as a superposition of all X-chain states (Theorem \ref{theorem::graph_states_as_X-factorized_states}). This approach was illustrated in the so-called factorization diagram (Algorithm \ref{algo::factorization_diagram_graph_states}). The larger the X-chain group is, the fewer X-chain states are needed for representing the graph state. \par We demonstrated various applications of the X-chain factorization. An important application is the efficient determination of the overlap of two graph states (Corollary \ref{coro::balanced_and_orthogonal_graph_states}) using our algorithm. \par Further, we generalized the X-chain factorization approach so that it allows one to find the Schmidt decomposition of graph states, which is the superposition of appropriately selected correlation states (Theorem \ref{theorem::Schmidt_decomposition_in_AB-correlation_states}, Algorithm \ref{algo::factorization_diagram_AB_correlation} and the MATHEMATICA package in the Supplemental Material \cite{XchainMpackage_Wu}). \par A further benefit of the X-chain factorization is the error-correction procedure in the entanglement localization of graph states in bipartite systems. This could be useful for quantum repeaters \cite{BriegelDCZoller1998-QRepeater}. \par The results of this paper can be extended to general multipartite graph states, e.g. weighted graph states \cite{DurHHLBriegel2005-EntSpinChain, HartmannCDB2007-04} and hypergraph states \cite{RossiHBMacchiavello2013-11,QuWLBao2013-02,GuhneCSMRBKM2014}. Another possible extension of these results is to consider the representation of graph states in a hybrid basis, i.e. for a subset of the qubits one adopts the X-basis, while for the remaining qubits one uses the Z-basis. 
A graph state in such a hybrid basis can even have a simpler representation (i.e., a smaller number of terms in the superposition) than the one obtained by the X-chain factorization. Furthermore, various multipartite entanglement measures for graph states were studied in \cite{HeinEisertBriegel2004-06,GraphStateReviews2006,CosentinoSimone2009-Weight,HajdusekMurao2013-Direct}. We expect that the approach of X-chain factorization may also be useful in these cases. \begin{acknowledgments} This work was financially supported by the BMBF (Germany). We thank Michael Epping, Mio Murao and Yuki Mori for inspiration and useful discussions. \end{acknowledgments}
\section{Introduction} \label{1} Alternative theories of gravity (i.e., alternatives to general relativity) have been formulated and investigated in different contexts. The earliest theory of this kind is Brans-Dicke scalar-tensor gravity, in which the coupling of the scalar field to the geometry is nonstandard \cite{Brans}, so that the gravitational coupling is no longer constant. Since then, more general couplings have been considered, and the compatibility of such approaches with the different formulations of the equivalence principle has been studied \cite{Dicke-Penrose}-\cite{Dicke-Penrose11}. By conformal transformations it is possible to show that, in the absence of ordinary matter, any scalar-tensor theory within the Jordan approach \cite{Jordan}, \cite{Jordan1} is conformally equivalent to an Einstein theory within the Einstein approach, plus a minimally coupled scalar field \cite{Capozziello}, \cite{Capozziello1}. Here, we show that such a conformal transformation, when applied to the nonstandard Brans-Dicke action with the special parameter $\omega=-\frac{3}{2}$, leads to an O'Hanlon-type theory \cite{Ohanlon} in which the kinetic term is removed, i.e. the dynamics is carried entirely by the self-interacting potential. On the other hand, it has been shown in general that the Noether symmetry is preserved under conformal transformations \cite{Capozziello}, \cite{Capozziello1}. Therefore, if we find that the Noether symmetry exists in such a Brans-Dicke action, then we may conclude that the Noether symmetry exists in the O'Hanlon action, too. Fortunately, such a Noether symmetry has already been obtained in the Brans-Dicke action for a specific scalar field potential \cite{Roshan}. Hence, we use this potential in the Brans-Dicke action and obtain the corresponding potential which respects the Noether symmetry in the O'Hanlon action. 
Our motivation for this procedure is the fact that the O'Hanlon action has no kinetic term, so it is not easy to apply the Noether symmetry approach directly to this action. \\ In Sec. (\ref{2}) we discuss the conformal symmetry between the Jordan and Einstein frames. In Sec. (\ref{3}) we introduce the conformal transformation which transforms the Brans-Dicke action with $\omega=-\frac{3}{2}$ into the O'Hanlon action, and in Sec. (\ref{4}) we apply the Noether symmetry method to obtain the self-interacting scalar field potentials which preserve the Noether symmetry in both theories. Conclusions are given in Sec. (\ref{5}). \section{Conformal symmetry between Jordan and Einstein frames} \label{2} The general form of the action in four dimensions for a nonstandard coupling between the scalar field and the geometry is given by \begin{equation}\label{eq01} {\cal S}=\int d^4x \sqrt{-g}\left(F(\phi)R+\frac{1}{2}g^{\mu \nu}\phi_{;\mu}\phi_{;\nu}-V(\phi)\right), \end{equation} where $R$ is the Ricci scalar, and $V(\phi)$ and $F(\phi)$ are generic functions describing the potential for the scalar field $\phi$ and the coupling of $\phi$ with gravity, respectively\footnote{The metric signature is $(-+++)$ and Planck units are used.}. This form of the action, or of the Lagrangian density, is usually referred to as the {\it Jordan frame}, because of the coupling term $F(\phi)R$. 
The variation with respect to the metric $g_{\mu \nu}$ gives rise to the generalized Einstein equations \begin{equation}\label{eq02} F(\phi)G_{\mu \nu}=-\frac{1}{2}T_{\mu \nu}-g_{\mu \nu}\square_{\Gamma}F(\phi)+F(\phi)_{;\mu\nu}, \end{equation} where $\square_{\Gamma}$ is the d'Alembert operator with respect to the connection $\Gamma$, $G_{\mu \nu}$ is the standard Einstein tensor \begin{equation}\label{eq03} G_{\mu \nu}=R_{\mu \nu}-\frac{1}{2}R g_{\mu \nu}, \end{equation} and $T_{\mu \nu}$ is the energy-momentum tensor of the scalar field \begin{equation}\label{eq04} T_{\mu \nu}=\phi_{;\mu}\phi_{;\nu}-\frac{1}{2}g_{\mu \nu}\phi_{;\alpha}\phi^{;\alpha}+g_{\mu \nu}V(\phi). \end{equation} The variation with respect to $\phi$ leads to the Klein-Gordon equation \begin{equation}\label{eq05} \square_{\Gamma}\phi-RF_{\phi}(\phi)+V_{\phi}(\phi)=0, \end{equation} where $F_{\phi}=\frac{dF(\phi)}{d\phi}$ and $V_{\phi}(\phi)=\frac{dV(\phi)}{d\phi}$. We now consider a conformal transformation on the metric $g_{\mu \nu}$ \begin{equation}\label{eq06} \bar{g}_{\mu \nu}=e^{2\Omega}g_{\mu \nu}, \end{equation} where $\Omega$ is an arbitrary function of the spacetime coordinates. The Riemann and Ricci tensors, together with the connection and Ricci scalar, transform under this conformal transformation so that the Lagrangian density in (\ref{eq01}) becomes \bea \label{eq07} \sqrt{-g}\left(FR+\frac{1}{2}g^{\mu \nu}\phi_{;\mu}\phi_{;\nu}-V(\phi)\right)=\sqrt{-\bar{g}}e^{-2\Omega}\left(F\bar{R}+ \right.\nonumber \\ \left.-6F\square_{\bar{\Gamma}}\Omega-6F\Omega_{;\alpha}\Omega^{;\alpha}+\frac{1}{2}\bar{g}^{\mu \nu}\phi_{;\mu}\phi_{;\nu}-e^{-2\Omega}V(\phi)\right), \enea where $\bar{R}, \bar{\Gamma}$ and $\square_{\bar{\Gamma}}$ are the corresponding quantities with respect to the metric $\bar{g}_{\mu \nu}$ and connection $\bar{\Gamma}$, respectively. 
If we require the new theory in terms of $\bar{g}_{\mu \nu}$ to appear as a standard Einstein theory, the conformal factor has to be related to $F$ as \begin{equation}\label{eq08} e^{2\Omega}=2F. \end{equation} Using this relation, the Lagrangian density (\ref{eq07}) becomes \bea\label{eq09} &&\sqrt{-g}\left(FR+\frac{1}{2}g^{\mu \nu}\phi_{;\mu}\phi_{;\nu}-V(\phi)\right)\\ \nonumber &=&\sqrt{-\bar{g}}\left(\frac{1}{2}\bar{R}+3\square_{\bar{\Gamma}}\Omega+ \frac{3F_{\phi}^2-F}{4F^2}\phi_{;\alpha}\phi^{;\alpha}-\frac{V(\phi)}{4F^2}\right). \enea By introducing a new scalar field $\bar{\phi}$ and the potential $\bar{V}$, respectively defined by \begin{equation}\label{eq10} \bar{\phi}_{;\alpha}=\sqrt{\frac{3F_{\phi}^2-F}{4F^2}}\phi_{;\alpha}, \:\:\:\:\: \bar{V}(\bar{\phi}(\phi))=\frac{V(\phi)}{4F^2}, \end{equation} we obtain\footnote{Note that the divergence-type term $3\square_{\bar{\Gamma}}\Omega$ appearing in the Lagrangian density is not considered \cite{Capozziello}, \cite{Capozziello1}.} \bea \label{eq11} &&\sqrt{-g}\left(FR+\frac{1}{2}g^{\mu \nu}\phi_{;\mu}\phi_{;\nu}-V(\phi)\right)\nonumber \\ &=&\sqrt{-\bar{g}}\left(\frac{1}{2}\bar{R}+\frac{1}{2}\bar{\phi}_{;\alpha}\bar{\phi}^{;\alpha}-\bar{V}(\bar{\phi})\right), \enea where the r.h.s. is the usual Einstein-Hilbert Lagrangian density subject to the metric $\bar{g}_{\mu \nu}$, plus the standard Lagrangian density of the scalar field $\bar{\phi}$. This form of the Lagrangian density is usually referred to as the {\it Einstein frame}. Therefore, we realize that any nonstandard coupled theory of gravity with a scalar field, in the absence of ordinary matter, is conformally equivalent to the standard Einstein gravity coupled with a scalar field, provided that we use the conformal transformation (\ref{eq08}) together with the definitions (\ref{eq10}). The converse is also true: for a given $F(\phi)$ such that ${3F_{\phi}^2-F}>0$, we can transform a standard Einstein theory into a nonstandard coupled theory. 
This has an important meaning: if we are able to solve the field equations within the framework of standard Einstein gravity coupled with a scalar field subject to a given potential, we should be able to get the solutions for the class of nonstandard coupled theories, with the coupling $F(\phi)$, through the conformal transformation (\ref{eq08}) and the definitions (\ref{eq10}). This statement is exactly what we mean by the {\it conformal equivalence between Jordan and Einstein frames}. \section{Conformal symmetry between Brans-Dicke action with $\omega=-\frac{3}{2}$ and O'Hanlon action}\label{3} The Brans-Dicke action in the Jordan frame with $\omega=-\frac{3}{2}$ is defined by the action \begin{equation}\label{eq27} {\cal S}=\int_P d^4x \sqrt{-g}\left(\phi R+\frac{3}{2\phi}\phi_{;\mu} \phi^{;\mu}-V(\phi)\right). \end{equation} Our motivation for choosing this action with the specific $\omega$ is that the Brans-Dicke action in the Jordan frame with $\omega=-\frac{3}{2}$ is known to be equivalent to $f({\cal R})$ gravity in the Palatini formalism, where ${\cal R}$ is the Ricci scalar \cite{Sotiriou}. Moreover, the viable $f({\cal R})$ theories of gravity in the Palatini formalism were found by searching for the Noether symmetry within the dynamically equivalent action, namely the Brans-Dicke action in the Jordan frame with $\omega=-\frac{3}{2}$, with some viable scalar field potentials \cite{Roshan}. Here, by the study of conformal equivalence between the Brans-Dicke action in the Jordan frame with $\omega=-\frac{3}{2}$ and the O'Hanlon action in the Einstein frame, we are indeed looking for those O'Hanlon actions in the Einstein frame which are viable, from the Noether symmetry point of view, with respect to the viable $f({\cal R})$ theories of gravity in the Palatini formalism. In this way, for any viable $f({\cal R})$ theory of gravity from the Noether symmetry point of view, we may find the corresponding O'Hanlon actions in the Einstein frame. 
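The derivation below rests on two algebraic facts: the field redefinition $\sigma=2\sqrt{3\phi}$ reproduces the Brans-Dicke kinetic term with $F(\sigma)=\sigma^{2}/12$, and this $F$ makes the combination $3F_{\sigma}^{2}-F$ vanish identically, which kills the kinetic term of the transformed field. Both can be verified numerically (a Python sketch at illustrative sample values; the equation references point to the formulas derived below):

```python
from math import sqrt, isclose

def F(sigma):        # Eq. (eq30): F(sigma) = sigma^2 / 12
    return sigma ** 2 / 12

def dF(sigma):       # F_sigma = dF/dsigma
    return sigma / 6

# coefficient in Eq. (eq32): 3 F_sigma^2 - F vanishes for every sigma
for s in (0.5, 1.0, 3.7, 10.0):
    assert isclose(3 * dF(s) ** 2 - F(s), 0.0, abs_tol=1e-12)

phi, dphi = 2.3, 0.7              # illustrative sample field value and gradient
sigma = 2 * sqrt(3 * phi)         # Eq. (eq28)
dsigma = sqrt(3 / phi) * dphi     # chain rule applied to Eq. (eq28)
assert isclose(F(sigma), phi)     # F(sigma(phi)) = phi recovers the coupling
# (1/2) sigma'^2 reproduces the (3/(2 phi)) phi'^2 kinetic term of Eq. (eq27)
assert isclose(0.5 * dsigma ** 2, 3 / (2 * phi) * dphi ** 2)
```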
By redefining the scalar field $\phi$ to a new field \begin{equation}\label{eq28} \sigma=2\sqrt{3\phi}, \end{equation} the Brans-Dicke action (\ref{eq27}) becomes \begin{equation}\label{eq29} {\cal S}=\int_P d^4x \sqrt{-g}\left(F(\sigma) R+\frac{1}{2}g^{\mu \nu}\sigma_{;\mu}\sigma_{;\nu}-V(\sigma)\right), \end{equation} where \begin{equation}\label{eq30} F(\sigma)=\frac{1}{12}\sigma^2. \end{equation} This action is now exactly the same as (\ref{eq01}) in the Jordan frame, with $\phi$ replaced by $\sigma$. Therefore, with a similar procedure for the field $\sigma$ we may write down \bea\label{eq31} &&\sqrt{-g}\left(F(\sigma)R+\frac{1}{2}g^{\mu \nu}\sigma_{;\mu}\sigma_{;\nu}-V(\sigma)\right)\nonumber \\ &=&\sqrt{-\bar{g}}\left(\frac{1}{2}\bar{R}+\frac{1}{2}\bar{\sigma}_{;\alpha}\bar{\sigma}^{;\alpha} -\bar{V}\right), \enea where \begin{equation}\label{eq32} \bar{\sigma}_{;\alpha}=\sqrt{\frac{3F_{\sigma}^2-F}{4F^2}}\sigma_{;\alpha}, \:\:\:\:\: \bar{V}(\bar{\sigma}(\sigma))=\frac{V(\sigma)}{4F^2}, \end{equation} and \beq F_\sigma=\frac{dF(\sigma)}{d\sigma}. \eneq Substituting $F(\sigma)$ from (\ref{eq30}) in the definition of $\bar{\sigma}_{;\alpha}$ leads to a vanishing kinetic term for this field, and we obtain \bea\label{eq33} \sqrt{-\bar{g}}\left(\frac{1}{2}\bar{R}-\bar{V}\right). \enea Expression (\ref{eq33}) is the Lagrangian density in the Einstein frame \footnote{ It is interesting to note that for the following potential \begin{equation}\label{eq34} V(\sigma)=\frac{\bar{\Lambda}}{36}\sigma^4, \end{equation} where $\bar{\Lambda}$ is a constant, we obtain $\bar{V}=\bar{\Lambda}$ and the action in the Einstein frame is reduced to the Einstein-Hilbert action with a cosmological constant $\bar{\Lambda}$. 
The corresponding potential in the Jordan frame with the Brans-Dicke action (\ref{eq27}) takes the following form \begin{equation}\label{eq35} V(\phi)=4\bar{\Lambda}\phi^2, \end{equation} which converts the action into a gravity theory non-minimally coupled with a massive scalar field with a squared mass of the order of the cosmological constant.}, namely it becomes the O'Hanlon action, where the dynamics is completely carried by the self-interacting potential \cite{Ohanlon}. Therefore, it is shown that the Brans-Dicke action in the Jordan frame with the parameter $\omega=-\frac{3}{2}$ and the O'Hanlon action in the Einstein frame are conformally equivalent. \section{Noether symmetry in Brans-Dicke action with $\omega=-\frac{3}{2}$}\label{4} Using the flat Friedmann-Robertson-Walker metric, the Lagrangian associated with the action (\ref{eq27}) takes the point-like form \begin{eqnarray}\label{eq36} {\cal L}=12a^{2}\varphi \dot{\varphi} \dot{a}+6\varphi^{2}\dot{a}^{2}a+6a^{3}\dot{\varphi}^{2}-V(\varphi)a^{3}, \end{eqnarray} where the redefinition $\phi\equiv\varphi^{2}$ has been used. Solutions for the dynamics given by (\ref{eq36}) can be achieved by selecting cyclic variables related to some Noether symmetry \cite{defelice, lambiase}. In principle, this approach allows us to select gravity models compatible with the Noether symmetry. Let $\mathcal{L}(q^i,\dot{q}^i)$ be a canonical, non-degenerate point-like Lagrangian subject to \beq \frac{\partial\mathcal{L}}{\partial t}=0, \hspace{1.5cm} \det H_{ij}\equiv \left\|\frac{\partial^2\mathcal{L}}{\partial \dot{q}^i\partial \dot{q}^j}\right\|\neq0, \eneq where $H_{ij}$ is the Hessian matrix and a dot denotes the derivative with respect to the cosmic time $\textit{t}$. The Lagrangian $\mathcal{L}$ is generally of the form \beq \mathcal{L}=T(\textbf{q},\dot{\textbf{q}})-V(\textbf{q}), \eneq where \textit{T} and \textit{V} are the kinetic energy (with positive definite quadratic form) and the potential energy, respectively. 
The energy function associated with $\mathcal{L}$ is defined by \beq E_\mathcal{L}\equiv\dot{q}^i\frac{\partial\mathcal{L}}{\partial \dot{q}^i}-\mathcal{L}, \eneq which coincides with the total energy $T + V$ and is a constant of motion. Since our cosmological problem has a finite number of degrees of freedom, we consider only point transformations. Any invertible transformation of the generalized positions $Q^i=Q^i(\textbf{q})$ induces a transformation of the generalized velocities \beq \dot{Q}^i(\textbf{q})=\frac{\partial Q^i}{\partial q^j}\dot{q}^j, \label{4.23} \eneq where the matrix $\mathcal{J}=\left\|\partial Q^i/\partial q^j\right\|$ is the Jacobian of the transformation, assumed to be nonsingular. On the other hand, an infinitesimal point transformation is represented by a generic vector field on the configuration space \beq \textbf{X}=\alpha^i(\textbf{q})\frac{\partial}{\partial q^i}. \eneq The induced transformation (\ref{4.23}) is then represented by \beq \textbf{X}^c=\alpha^i\frac{\partial}{\partial q^i}+\left(\frac{d}{dt}\alpha^i\right)\frac{\partial}{\partial \dot{q}^i}. \label{4.24} \eneq The Lagrangian $\mathcal{L}(\textbf{q}, \dot{\textbf{q}})$ is invariant under the transformation generated by \textbf{X} provided that \beq L_X\mathcal{L}\equiv\alpha^i\frac{\partial \mathcal{L}}{\partial q^i}+\left(\frac{d}{dt}\alpha^i\right)\frac{\partial\mathcal{L}}{\partial \dot{q}^i}=0, \eneq where $L_X\mathcal{L}$ is the Lie derivative of ${\mathcal{L}}$ along \textbf{X}. Let us now consider the Lagrangian $\mathcal{L}$ and its Euler-Lagrange equations \beq \frac{d}{dt}\frac{\partial\mathcal{L}}{\partial \dot{q}^j}-\frac{\partial\mathcal{L}}{\partial q^j}=0. \label{4.25} \eneq Contracting (\ref{4.25}) with $\alpha^j$ gives \beq \alpha^j\left(\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial \dot{q}^j}\right)=\alpha^j\left(\frac{\partial\mathcal{L}}{\partial q^j}\right).
\label{4.26} \eneq On the other hand, we can write \beq \label{4.26'} \frac{d}{dt}\left(\alpha^j\frac{\partial\mathcal{L}}{\partial \dot{q}^j}\right)=\alpha^j\left(\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial \dot{q}^j}\right)+\left(\frac{d\alpha^j}{dt}\right)\frac{\partial\mathcal{L}}{\partial \dot{q}^j}, \eneq in which the first term on the RHS can be replaced by the RHS of (\ref{4.26}); hence (\ref{4.26'}) results in \beq \frac{d}{dt}\left(\alpha^j\frac{\partial\mathcal{L}}{\partial \dot{q}^j}\right)=L_X\mathcal{L}. \eneq The immediate consequence of this result is the \textit{Noether theorem}, which states: if $L_X\mathcal{L}=0$, then the function \beq \Sigma_0=\alpha^k\frac{\partial\mathcal{L}}{\partial \dot{q}^k}, \label{4.27} \eneq is a constant of motion. In the present model of scalar-tensor cosmology, the Lagrangian is given by (\ref{eq36}), and the generator of the symmetry corresponding to this Lagrangian is \beq \textbf{X}=A\frac{\partial}{\partial a}+B\frac{\partial}{\partial \varphi}+\dot{A}\frac{\partial}{\partial \dot{a}}+\dot{B}\frac{\partial}{\partial \dot{\varphi}}. \label{4.28} \eneq The Noether symmetry exists if the equation $L_X\mathcal{L}=0$ admits a solution for the vector field $X$; in other words, a symmetry exists if at least one of the functions $A$ and $B$ in equation (\ref{4.28}) is different from zero.
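As a sanity check of the machinery above, the conservation law (\ref{4.27}) can be verified symbolically on a toy Lagrangian with a cyclic coordinate. The following sympy sketch is only an illustration of the Noether theorem: the Lagrangian, the generator and all names are our own choices, unrelated to the cosmological model.

```python
import sympy as sp

t = sp.symbols('t')
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)
V = sp.Function('V')

# Toy Lagrangian with q1 cyclic: L = (q1'^2 + q2'^2)/2 - V(q2)
L = (q1.diff(t)**2 + q2.diff(t)**2) / 2 - V(q2)

# Symmetry generator X = d/dq1, i.e. alpha^1 = 1, alpha^2 = 0
alpha = [sp.Integer(1), sp.Integer(0)]
qs = [q1, q2]

# Lie derivative L_X L = alpha^i dL/dq^i + (d alpha^i/dt) dL/dq^i'
LX = sum(a * L.diff(q) + sp.diff(a, t) * L.diff(q.diff(t))
         for a, q in zip(alpha, qs))
assert sp.simplify(LX) == 0

# Noether charge Sigma_0 = alpha^k dL/dq^k'  (here Sigma0 = q1')
Sigma0 = sum(a * L.diff(q.diff(t)) for a, q in zip(alpha, qs))

# The Euler-Lagrange equation for q1 reads q1'' = 0, so the time
# derivative of Sigma0 vanishes on-shell
el1 = sp.diff(L.diff(q1.diff(t)), t) - L.diff(q1)
assert sp.simplify(sp.diff(Sigma0, t) - el1) == 0
print(Sigma0)
```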
The existence condition for the symmetry leads to the following system of partial differential equations \cite{Roshan} \begin{eqnarray}\label{4.39} 2\varphi A+aB+\varphi^{2}\frac{\partial A}{\partial\varphi}+a\varphi\frac{\partial A}{\partial a}+a^{2}\frac{\partial B}{\partial a}+a\varphi\frac{\partial B}{\partial \varphi}=0, \end{eqnarray} \begin{eqnarray}\label{4.40} \varphi A+2aB+2a\varphi\frac{\partial A}{\partial a}+2a^{2}\frac{\partial B}{\partial a}=0, \end{eqnarray} \begin{eqnarray}\label{4.41} 3A+2\varphi\frac{\partial A}{\partial\varphi}+2a\frac{\partial B}{\partial\varphi}=0, \end{eqnarray} \begin{eqnarray}\label{4.42} 3a^{2}V(\varphi)A+B\frac{dV}{d\varphi}a^{3}=0. \end{eqnarray} From Eq.(\ref{4.42}) we have \begin{eqnarray}\label{4.43} A=\left[-\frac{V'(\varphi)}{3V(\varphi)}\right]Ba, \end{eqnarray} where a prime denotes the derivative with respect to $\varphi$. Substituting (\ref{4.43}) into (\ref{4.40}), we find that $A=f(\varphi)a^{n}$ and \beq \frac{V'}{3V}=\frac{2n}{1+2n} \varphi^{-1}. \eneq Substituting these results into (\ref{4.41}) we obtain \begin{eqnarray}\label{4.44} f(\varphi)=\beta \varphi^{n-1}, \end{eqnarray} where $\beta$ is a constant. These results satisfy Eq.(\ref{4.39}) for arbitrary $n$. From Eqs.(\ref{4.43}) and (\ref{4.44}) one obtains \cite{Roshan} \begin{eqnarray}\label{4.45} A=\beta a^{n}\varphi^{n-1}, \end{eqnarray} \begin{eqnarray}\label{4.45'} B=-\frac{(2n+1)\beta}{2n}a^{n-1}\varphi^{n}, \end{eqnarray} \begin{eqnarray}\label{4.46} V(\varphi)=\lambda \varphi^{\frac{6n}{1+2n}}, \end{eqnarray} or, equivalently, \begin{eqnarray}\label{4.46'} V(\phi)=\lambda \phi^{\frac{3n}{1+2n}}, \end{eqnarray} where $\lambda$ is a constant. In conclusion, the Noether symmetry for the Lagrangian (\ref{eq36}) with the potential (\ref{4.46}) exists, and the associated vector field $X$ is determined by (\ref{4.45}) and (\ref{4.45'}), provided that $n\neq 0, -1/2$.
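The solution (\ref{4.45})-(\ref{4.46}) can be checked against the system (\ref{4.39})-(\ref{4.42}) by direct substitution. The following sympy sketch performs numerical spot checks for several admissible values of $n$; it is a lightweight verification of the stated result, not a symbolic proof for arbitrary $n$.

```python
import sympy as sp

# Symbols: scale factor a, field varphi (ph), exponent n, constants beta, lambda
a, ph, n, beta, lam = sp.symbols('a varphi n beta lamda')

# Candidate solution (4.45), (4.45') and potential (4.46)
A = beta * a**n * ph**(n - 1)
B = -(2*n + 1) * beta / (2*n) * a**(n - 1) * ph**n
V = lam * ph**(6*n / (1 + 2*n))

Aa, Ap = A.diff(a), A.diff(ph)
Ba, Bp = B.diff(a), B.diff(ph)

# Left-hand sides of the existence conditions (4.39)-(4.42)
pdes = [
    2*ph*A + a*B + ph**2*Ap + a*ph*Aa + a**2*Ba + a*ph*Bp,
    ph*A + 2*a*B + 2*a*ph*Aa + 2*a**2*Ba,
    3*A + 2*ph*Ap + 2*a*Bp,
    3*a**2*V*A + B*V.diff(ph)*a**3,
]

# Spot-check that every residual vanishes for several values of n != 0, -1/2
for nv in [sp.Integer(1), sp.Integer(2), sp.Rational(-1, 3), sp.Rational(5, 2)]:
    for pde in pdes:
        residual = pde.subs({n: nv, a: 2, ph: 3, beta: 5, lam: 7})
        assert sp.simplify(residual) == 0
print("conditions (4.39)-(4.42) are satisfied")
```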
\vspace{10mm} \section{Noether symmetry in O'Hanlon action }\label{5} If we apply the Noether symmetry approach to the O'Hanlon action, we realize that the corresponding Lagrangian is degenerate; this causes a serious problem, because the symmetry vector field $X$ loses the degree of freedom related to the velocity of the scalar field. To overcome this problem, we recall that, according to \cite{Capozziello, Capozziello1}, Noether symmetries are preserved under conformal transformations in a general way. Bearing this in mind, we note that the Noether symmetry of the action (\ref{eq27}) with the potential (\ref{4.46'}) is preserved under the conformal transformation into the O'Hanlon action (\ref{eq33}). Therefore, the O'Hanlon action possesses the Noether symmetry provided we have the following class of potentials \begin{equation}\label{4.47} \bar{V}(\sigma)=\bar{\lambda}\,\sigma^{-\frac{2n+4}{2n+1}},\,\,\,\,\,n\neq 0, -{1}/{2}, \end{equation} where $\bar{\lambda}=36\cdot 12^{-\frac{3n}{2n+1}}\lambda$ (so that $\bar{\lambda}=3\lambda$ for $n=1$) and use has been made of Eqs.(\ref{eq28}), (\ref{eq30}), (\ref{eq32}) and (\ref{4.46'}). \section*{Acknowledgment} This work has been supported financially by the Research Institute for Astronomy and Astrophysics of Maragha (RIAAM) under research project No. 1/1751.
\section{Introduction} In this paper we start a study of Morse Theory on Banach spaces using the theory of Ultrafunctions \cite{belu2013,milano,beyond,topology}; the \textbf{ultrafunctions} are a new notion of generalized functions based on the general ideas of Non Archimedean Mathematics (NAM) and of Non Standard Analysis (NSA). In our experience, NAM allows one to construct models of the physical world in a more elegant and simpler way in many circumstances. Contrary to common belief, the ideas behind NSA and NAM date back to the years around the 1870s, when they were investigated by mathematicians such as Du Bois-Reymond, Veronese, Hilbert and Levi-Civita. Their development then stopped until the 1960s, when Abraham Robinson presented his Non Standard Analysis. For a historical analysis of these facts we refer to Ehrlich \cite{el06}, and to Keisler \cite{keisler76} for a very clear exposition of NSA. Ultrafunctions are a particular class of functions based on a superreal field $\mathbb{R}^{\ast }\supset \mathbb{R}$. More precisely, to any continuous function $f:\mathbb{R}^{N}\rightarrow \mathbb{R}$ we associate, in a canonical way, an ultrafunction $\widetilde{f}:\left( \mathbb{R}^{\ast }\right) ^{N}\rightarrow \mathbb{R}^{\ast }$ which extends $f$; there are many more ultrafunctions than functions, and among them we can find solutions of functional equations which have no solutions among the real functions or the distributions. Moreover, the theory of ultrafunctions allows one to overcome some difficulties of Morse Theory in Banach spaces.
Many authors have worked on the adaptation of Morse Theory to Banach spaces \cite{Chang83,Chang93,Chang98,MerPalm,Uhl72}, but many problems arise: a really important one is the difficulty in defining what a (weakly) nondegenerate critical point is and how to define its Morse index, since any critical point of a $C^{2}$ functional on a Banach space is degenerate and it is not possible to apply the generalized Morse Lemma (for a reference on the generalized Morse Lemma see \cite{GroMe}). In recent times, a lot of delicate work has been done in this direction, developing extremely refined tools and techniques to study problems in nonlinear analysis \cite{CinCarMartVan,CD05,CinDe09,CinDeSciCGpLap,CLV05,CV03,CV06,CV07,CV09,CV09b,CinVanVis11,CinVanVisMul,Lan}. Our approach is totally different: we avoid many of the difficulties involved in these definitions by using the properties of hyperfinite function spaces. We believe that the flexibility of the ultrafunction approach can be fruitful for the development of the theory. In this paper we present a foundational basis for this theory; other articles dealing with applications are to follow. \bigskip \begin{acknowledgement} The first author wishes to thank the Federal University of Rio de Janeiro (UFRJ) for the invitation and hospitality. \end{acknowledgement} \subsection{Notation} We fix some notation.
Since this paper does not deal with applications, we use some function spaces as model spaces for the theory; let $\Omega $\ be a subset of $\mathbb{R}^{N}$: \begin{itemize} \item $\mathcal{C}\left( \Omega \right) $ denotes the set of real continuous functions defined on $\Omega$; \item $\mathcal{C}_{0}\left( \overline{\Omega }\right) $ denotes the set of real continuous functions on $\overline{\Omega }$ which vanish on $\partial \Omega$; \item $\mathcal{C}^{k}\left( \Omega \right) $ denotes the set of functions defined on $\Omega \subset \mathbb{R}^{N}$ which have continuous derivatives up to the order $k$; \item $\mathcal{C}_{0}^{k}\left( \overline{\Omega }\right) =\mathcal{C}^{k}\left( \overline{\Omega }\right) \cap \mathcal{C}_{0}\left( \overline{\Omega }\right) $; \item $\mathcal{D}\left( \Omega \right) $ denotes the set of the infinitely differentiable functions with compact support defined on $\Omega \subset \mathbb{R}^{N}$; \item $L^2\left( \Omega \right) $ denotes the set of square integrable functions on $\Omega$. \end{itemize} \section{Preliminary notions} In this section we present some background material necessary for the rest of the paper. We underline that this material is not original, but we include it in order to make the article (almost) self-contained. We refer to \cite{belu2013,milano,beyond,topology} for a more detailed treatment. \subsection{Non Archimedean Fields\label{naf}} Here, we recall the basic definitions and facts regarding non-Archimedean fields. In the following, ${\mathbb{K}}$ will denote an ordered field. We recall that such a field contains (a copy of) the rational numbers. Its elements will be called numbers. \begin{definition} Let $\mathbb{K}$ be an ordered field. Let $\xi \in \mathbb{K}$.
We say that: \begin{itemize} \item $\xi $ is infinitesimal if, for all positive $n\in \mathbb{N}$, $|\xi |<\frac{1}{n}$; \item $\xi $ is finite if there exists $n\in \mathbb{N}$ such that $|\xi |<n$; \item $\xi $ is infinite if, for all $n\in \mathbb{N}$, $|\xi |>n$ (equivalently, if $\xi $ is not finite). \end{itemize} \end{definition} \begin{definition} An ordered field $\mathbb{K}$ is called Non-Archimedean if it contains an infinitesimal $\xi \neq 0$. \end{definition} It is easily seen that all infinitesimals are finite, that the inverse of an infinite number is a nonzero infinitesimal number, and that the inverse of a nonzero infinitesimal number is infinite. \begin{definition} A superreal field is an ordered field $\mathbb{K}$ that properly extends $\mathbb{R}$. \end{definition} It is easy to show, due to the completeness of $\mathbb{R}$, that there are nonzero infinitesimal numbers and infinite numbers in any superreal field. Infinitesimal numbers can be used to formalize a new notion of ``closeness'': \begin{definition} \label{def infinite closeness} We say that two numbers $\xi, \zeta \in {\mathbb{K}}$ are infinitely close if $\xi -\zeta $ is infinitesimal. In this case, we write $\xi \sim \zeta $. \end{definition} Clearly, the relation ``$\sim $'' of infinite closeness is an equivalence relation. \begin{theorem} If $\mathbb{K}$ is a superreal field, every finite number $\xi \in \mathbb{K} $ is infinitely close to a unique real number $r\sim \xi $, called the \textbf{shadow} or the \textbf{standard part} of $\xi $. \end{theorem} Given a finite number $\xi $, we denote its shadow by $sh(\xi )$, and we put $sh(\xi )=+\infty $ ($sh(\xi )=-\infty $) if $\xi \in \mathbb{K}$ is a positive (negative) infinite number.\newline \begin{definition} Let $\mathbb{K}$ be a superreal field, and $\xi \in \mathbb{K}$ a number.
The \label{def monad} monad of $\xi $ is the set of all numbers that are infinitely close to it: \begin{equation*} \mathfrak{mon}(\xi )=\{\zeta \in \mathbb{K}:\xi \sim \zeta \}, \end{equation*} and the galaxy of $\xi $ is the set of all numbers that are finitely close to it: \begin{equation*} \mathfrak{gal}(\xi )=\{\zeta \in \mathbb{K}:\xi -\zeta \ \text{is finite}\}. \end{equation*} \end{definition} By definition, it follows that the set of infinitesimal numbers is $\mathfrak{mon}(0)$ and that the set of finite numbers is $\mathfrak{gal}(0)$. \subsection{The $\Lambda $-limit\label{OL}} In this section we will introduce a particular superreal field $\mathbb{K}$ and we will analyze its main properties by means of $\Lambda $-theory, in particular by means of the notion of $\Lambda $-limit (for complete proofs and for further properties of the $\Lambda $-limit, the reader is referred to \cite{ultra,belu2013,milano,beyond,topology}). We recall that the superstructure on $\mathbb{R}$ is defined as follows: \begin{equation*} \mathbb{U}=\bigcup_{n=0}^{\infty }\mathbb{U}_{n}, \end{equation*} where $\mathbb{U}_{n}$ is defined by induction as follows: \begin{eqnarray*} \mathbb{U}_{0} &=&\mathbb{R}; \\ \mathbb{U}_{n+1} &=&\mathbb{U}_{n}\cup \mathcal{P}\left( \mathbb{U}_{n}\right) . \end{eqnarray*} Here $\mathcal{P}\left( E\right) $ denotes the power set of $E.$ Identifying the couples with the Kuratowski pairs and the functions and the relations with their graphs, it follows that $\mathbb{U}$ contains almost every usual mathematical object. Now, we set \begin{equation*} \mathfrak{L}=\mathcal{P}_{\omega }(\mathbb{U}), \end{equation*} and we will refer to $\mathfrak{L}$ as the \textquotedblleft parameter space\textquotedblright .
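The order-of-magnitude notions above (infinitesimal, finite, infinite, shadow, monad) can be emulated with a simple proxy: expressions in a parameter $\varepsilon$ ordered by their behaviour as $\varepsilon \to 0^{+}$. The sympy sketch below is only a caricature of these definitions for illustration, and is unrelated to the field $\mathbb{K}$ constructed via the $\Lambda$-limit.

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)  # plays the role of a positive infinitesimal

def is_infinitesimal(x):
    # |x| < 1/n for every n  corresponds to  |x| -> 0 as eps -> 0+
    return sp.limit(sp.Abs(x), eps, 0, '+') == 0

def is_finite(x):
    # |x| < n for some n  corresponds to a finite limit as eps -> 0+
    return sp.limit(sp.Abs(x), eps, 0, '+').is_finite is True

def shadow(x):
    # the unique real number infinitely close to a finite x
    return sp.limit(x, eps, 0, '+')

xi = 3 + eps - 5*eps**2            # finite, infinitely close to 3
assert is_finite(xi) and shadow(xi) == 3
assert is_infinitesimal(xi - 3)    # xi lies in mon(3)
assert is_infinitesimal(1/(1/eps)) # inverse of an infinite number
assert not is_finite(1/eps)        # 1/eps is infinite
print(shadow(xi))                  # prints 3
```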
Clearly $\left( \mathfrak{L},\subset \right) $ is a directed set\footnote{We recall that a directed set is a partially ordered set $(D,\prec )$ such that, $\forall a,b\in D,\ \exists c\in D$ such that \begin{equation*} a\prec c\ \ \text{and}\ \ b\prec c. \end{equation*}}. We add to $\mathfrak{L}$ a point at infinity $\Lambda $ and we define the following family of neighborhoods of infinity: \begin{equation*} \left\{ \Lambda \cup Q\ |\ Q\in \mathcal{U}\right\}, \end{equation*} where $\mathcal{U}$ is a fine ultrafilter on $\mathfrak{L}$, namely a filter such that: \begin{itemize} \item if $A\cup B=\mathfrak{L}$, then \begin{equation} A\in \mathcal{U}\ \text{or}\ B\in \mathcal{U}; \label{quaqua} \end{equation} \item $\forall \lambda _{0}\in \mathfrak{L}$, $\left\{ \lambda \in \mathfrak{L}\ |\ \lambda _{0}\subset \lambda \right\} \in \mathcal{U}$. \end{itemize} A function $\varphi :D\rightarrow E$ defined on a directed set will be called a \textit{net} (with values in $E$). If $\varphi _{\lambda }$ is a real net, we have that \begin{equation*} \underset{\lambda \rightarrow \Lambda }{\lim }\varphi _{\lambda }=L \end{equation*} if and only if \begin{equation} \forall \varepsilon >0,\ \exists Q\in \mathcal{U}\ \text{such that},\ \forall \lambda \in Q,\ \left\vert \varphi _{\lambda }-L\right\vert <\varepsilon . \label{limite} \end{equation} We will refer to the sets in $\mathcal{U}$ as \textbf{qualified sets}. Notice that this topology on $\mathfrak{L}\cup \left\{ \Lambda \right\} $ satisfies the following interesting property: \begin{proposition} \label{nino}If the net $\varphi _{\lambda }$ has a converging subnet, then it is a \textbf{converging} net. \end{proposition} \textbf{Proof}: Suppose that the net $\varphi _{\lambda }$ has a subnet converging to $L\in \mathbb{R}$.
We fix $\varepsilon >0$ arbitrarily and we have to prove that $Q_{\varepsilon }\in \mathcal{U}$, where \begin{equation*} Q_{\varepsilon }=\left\{ \lambda \in \mathfrak{L}\ |\ \left\vert \varphi _{\lambda }-L\right\vert <\varepsilon \right\} . \end{equation*} We argue indirectly and assume that \begin{equation*} Q_{\varepsilon }\notin \mathcal{U}. \end{equation*} Then, by (\ref{quaqua}), $N=\mathfrak{L}\backslash \left( Q_{\varepsilon }\cap E\right) \in \mathcal{U}$ and hence \begin{equation*} \forall \lambda \in N,\ \left\vert \varphi _{\lambda }-L\right\vert \geq \varepsilon . \end{equation*} This contradicts the fact that $\varphi _{\lambda }$ has a subnet which converges to $L.$ $\square $ \bigskip We have the following result: \begin{theorem} \label{nuovo}\textit{There exists a superreal field} $\mathbb{K}\supset \mathbb{R}$ \textit{and a Hausdorff topology on the space }$\left( \mathfrak{L}\times \mathbb{R}\right) \cup \mathbb{K}$ \textit{such that:} \begin{enumerate} \item \textit{Every net }$\varphi :\mathfrak{L}\rightarrow \mathbb{R}$\textit{\ has a unique limit } \begin{equation*} L=\lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\varphi (\lambda )\right) . \end{equation*} \textit{Moreover, every}\emph{\ }$\xi \in \mathbb{K}$\textit{\ is the limit\ of some net }$\varphi :\mathfrak{L}\rightarrow \mathbb{R}$\emph{.} \item If $r\in \mathbb{R}$, then \begin{equation*} \lim_{\lambda \rightarrow \Lambda }\left( \lambda ,r\right) =r.
\end{equation*} \item \textit{For all }$\varphi ,\psi :\mathfrak{L}\rightarrow \mathbb{R}$: \begin{eqnarray*} \lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\varphi (\lambda )\right) +\lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\psi (\lambda )\right) &=&\lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\varphi (\lambda )+\psi (\lambda )\right) ; \\ \lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\varphi (\lambda )\right) \cdot \lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\psi (\lambda )\right) &=&\lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\varphi (\lambda )\cdot \psi (\lambda )\right) . \end{eqnarray*} \end{enumerate} \end{theorem} \textbf{Idea of the proof:} The proof of this theorem is in \cite{topology}. We now sketch it for the sake of the reader. We set \begin{equation*} I=\left\{ \varphi \in \mathfrak{F}\left( \mathfrak{L},\mathbb{R}\right) \ |\ \varphi (\lambda )=0\ \text{in a qualified set}\right\} . \end{equation*} It is not difficult to prove that $I$ is a maximal ideal in $\mathfrak{F}\left( \mathfrak{L},\mathbb{R}\right) ;$ then \begin{equation*} \mathbb{K}:=\frac{\mathfrak{F}\left( \mathfrak{L},\mathbb{R}\right) }{I} \end{equation*} is a field. In the following, we shall identify a real number $c\in \mathbb{R}$ with the equivalence class of the constant net $\left[ c\right] _{I}.$ Now, we equip $\left( \mathfrak{L}\times \mathbb{R}\right) \cup \mathbb{K}$ with the following topology $\tau $. A basis of neighborhoods of $\left[ \varphi \right] _{I}$ is given by the sets \begin{equation*} N_{\varphi ,Q}:=\left\{ \left( \lambda ,\varphi (\lambda )\right) \mid \lambda \in Q\right\} \cup \left\{ \left[ \varphi \right] _{I}\right\} ,\ \ Q\in \mathcal{U}.
\end{equation*} $\square $ From now on, in order to simplify the notation, we will write \begin{equation*} \lim_{\lambda \uparrow \Lambda }\varphi (\lambda ):=\lim_{\lambda \rightarrow \Lambda }\left( \lambda ,\varphi (\lambda )\right) , \end{equation*} and we call it the $\Lambda $-limit. \subsection{Natural extension of sets and functions} The notion of $\Lambda $-limit can be extended to sets and functions in the following way: \begin{definition} \label{limito}Let $E_{\lambda },$ $\lambda \in \mathfrak{L},$ be a family of sets in $\mathbb{R}^{N}.$ We set \begin{equation*} \lim_{\lambda \uparrow \Lambda }\ E_{\lambda }:=\left\{ \lim_{\lambda \uparrow \Lambda }\psi (\lambda )\ |\ \psi (\lambda )\in E_{\lambda }\right\} . \end{equation*} A set which is a $\Lambda $-limit is called \textbf{internal}. In particular if, $\forall \lambda \in \mathfrak{L},$ $E_{\lambda }=E,$ we set $\lim_{\lambda \uparrow \Lambda }\ E_{\lambda }=E^{\ast },$ namely \begin{equation*} E^{\ast }:=\left\{ \lim_{\lambda \uparrow \Lambda }\psi (\lambda )\ |\ \psi (\lambda )\in E\right\} . \end{equation*} $E^{\ast }$ is called the \textbf{natural extension} of $E.$ \end{definition} Notice that, while the $\Lambda $-limit of a net of numbers with constant value $r\in\mathbb{R}$ is $r$, the $\Lambda$-limit of a constant net of sets with value $E\subseteq\mathbb{R}$ gives a larger set, namely $E^{\ast }$. In general, the inclusion $E\subseteq E^{\ast }$ is proper. This definition, combined with axiom ($\Lambda $-1$)$, entails that \begin{equation*} \mathbb{K}=\mathbb{R}^{\ast }. \end{equation*} Given any set $E,$ we can associate to it two sets: its natural extension $E^{\ast }$ and the set $E^{\sigma },$ where \begin{equation} E^{\sigma }=\left\{ x^{\ast }\ |\ x\in E\right\} .
\label{sigmaS} \end{equation} Clearly $E^{\sigma }$ is a copy of $E;$ however it might be different as a set since, in general, $x^{\ast }\neq x.$ Moreover $E^{\sigma }\subset E^{\ast }$, since every element of $E^{\sigma }$ can be regarded as the $\Lambda $-limit of a constant net. \begin{definition} \label{limito2}Let \begin{equation*} f_{\lambda }:\ E_{\lambda }\rightarrow \mathbb{R},\ \ \lambda \in \mathfrak{L}, \end{equation*} be a family of functions. We define a function \begin{equation*} f:\left( \lim_{\lambda \uparrow \Lambda }\ E_{\lambda }\right) \rightarrow \mathbb{R}^{\ast } \end{equation*} as follows: for every $\xi \in \left( \lim_{\lambda \uparrow \Lambda }\ E_{\lambda }\right) $ we set \begin{equation*} f\left( \xi \right) :=\lim_{\lambda \uparrow \Lambda }\ f_{\lambda }\left( \psi (\lambda )\right) , \end{equation*} where $\psi (\lambda )$ is a net of numbers such that \begin{equation*} \psi (\lambda )\in E_{\lambda }\ \ \text{and}\ \ \lim_{\lambda \uparrow \Lambda }\psi (\lambda )=\xi . \end{equation*} A function which is a $\Lambda $-limit is called \textbf{internal}. In particular if, $\forall \lambda \in \mathfrak{L},$ \begin{equation*} f_{\lambda }=f,\ \ \ \ f:\ E\rightarrow \mathbb{R}, \end{equation*} we set \begin{equation*} f^{\ast }=\lim_{\lambda \uparrow \Lambda }\ f_{\lambda }. \end{equation*} $f^{\ast }:E^{\ast }\rightarrow \mathbb{R}^{\ast }$ is called the \textbf{natural extension} of $f.$ \end{definition} More generally, the $\Lambda $-limit can be extended to a larger family of nets. To this aim, let us consider a net \begin{equation} \varphi :\mathfrak{L}\rightarrow {\mathbb{U}}_{n}. \label{carpa} \end{equation} We will define $\lim\limits_{\lambda \uparrow \Lambda }\varphi (\lambda )$ by induction on $n$. For $n=0,$ $\lim\limits_{\lambda \uparrow \Lambda }\varphi (\lambda )$ is defined by Theorem
\ref{nuovo}; so by induction we may assume that the limit is defined for $n-1$ and we define it for the net (\ref{carpa}) as follows: \begin{equation} \lim_{\lambda \uparrow \Lambda }\varphi (\lambda )=\left\{ \lim_{\lambda \uparrow \Lambda }\psi (\lambda )\ |\ \psi :\mathfrak{L}\rightarrow {\mathbb{U}}_{n-1}\ \text{and}\ \forall \lambda \in \mathfrak{L},\ \psi (\lambda )\in \varphi (\lambda )\right\} . \label{limitu} \end{equation} \begin{definition} A mathematical entity (number, set, function or relation) which is the $\Lambda $-limit of a net is called \textbf{internal}. \end{definition} Let us note that, if $\left( f_{\lambda }\right) $, $\left( E_{\lambda }\right) $ are, respectively, a net of functions and a net of sets, the $\Lambda$-limit of these nets defined by (\ref{limitu}) coincides with the $\Lambda $-limit given by Definitions \ref{limito} and \ref{limito2}. The following theorem is a fundamental tool in using the $\Lambda $-limit: \begin{theorem} \label{limit}\textbf{(Leibniz Principle)} Let $\mathcal{R}$ be a relation in {$\mathbb{U}$}$_{n}$ for some $n\geq 0$ and let $\varphi $,$\psi :\mathfrak{L}\rightarrow {\mathbb{U}}_{n}$. If \begin{equation*} \forall \lambda \in \mathfrak{L},\ \varphi (\lambda )\mathcal{R}\psi (\lambda ), \end{equation*} then \begin{equation*} \left( \underset{\lambda \uparrow \Lambda }{\lim }\varphi (\lambda )\right) \mathcal{R}^{\ast }\left( \underset{\lambda \uparrow \Lambda }{\lim }\psi (\lambda )\right) .
\end{equation*} \end{theorem} When $\mathcal{R}$ is $\in $ or $=$ we will not use the symbol $\ast $ to denote their extensions, since their meaning is unaltered in universes constructed over $\mathbb{R}^{\ast }.$ To give an example of how the Leibniz Principle can be used to prove facts about internal entities, let us prove that if $K\subseteq \mathbb{R}$ is a compact set and $(f_{\lambda })$ is a net of continuous functions, then $f=\underset{\lambda \uparrow \Lambda }{\lim }f_{\lambda }$ has a maximum on $K^{\ast }$. For every $\lambda $, let $\xi _{\lambda }$ be the maximum value attained by $f_{\lambda }$ on $K$, and let $x_{\lambda }\in K$ be such that $f_{\lambda }(x_{\lambda })=\xi _{\lambda }.$ For every $\lambda ,$ for every $y_{\lambda }\in K$ we have that $f_{\lambda }(y_{\lambda })\leq f_{\lambda }(x_{\lambda }).$ By the Leibniz Principle, if we set \begin{equation*} x=\lim_{\lambda \uparrow \Lambda }x_{\lambda }, \end{equation*} we have that \begin{equation*} \forall y\in K^{\ast },\ \ f(y)\leq f(x), \end{equation*} so $\xi =\lim_{\lambda \uparrow \Lambda }\xi _{\lambda }$ is the maximum of $f$ on $K^{\ast }$ and it is attained at $x.$ \subsection{Ultrafunction theory} Let $\Omega $ be a set in $\mathbb{R}^{N}$ and let $V(\Omega )\ $be a (real or complex) vector space such that $\mathcal{D}(\overline{\Omega })\subseteq V(\Omega )\subseteq L^{2}(\Omega )\cap \mathcal{C}(\overline{\Omega }).$ \begin{definition} Given the function space $V(\Omega )$ we set \begin{equation*} V_{\Lambda}(\Omega ):=\ \underset{\lambda \uparrow \Lambda }{\lim }V_{\lambda }(\Omega ), \end{equation*} where \begin{equation*} V_{\lambda }(\Omega )=Span(V(\Omega )\cap \lambda ).
\end{equation*} $V_{\Lambda}(\Omega )$ will be called the \textbf{space of ultrafunctions} generated by $V(\Omega ).$ \end{definition} Using the above definition, if $V(\Omega )$, $\Omega \subset \mathbb{R}^{N}$, is a real function space, then we can associate to it three spaces of hyperreal functions, namely $V(\Omega )^{\sigma },$ $V_{\Lambda}(\Omega )$ and $V(\Omega )^{\ast }$: \begin{equation} V(\Omega )^{\sigma }=\left\{ f^{\ast }\ |\ f\in V(\Omega )\right\}, \label{sigma1} \end{equation} \begin{equation} V_{\Lambda}(\Omega )=\left\{ \lim_{\lambda \uparrow \Lambda }\ f_{\lambda }\ |\ f_{\lambda }\in V_{\lambda }(\Omega )\right\}, \label{tilda} \end{equation} \begin{equation} V(\Omega )^{\ast }=\left\{ \lim_{\lambda \uparrow \Lambda }\ f_{\lambda }\ |\ f_{\lambda }\in V(\Omega )\right\}. \label{star} \end{equation} Clearly we have \begin{equation*} V(\Omega )^{\sigma }\subset V_{\Lambda}(\Omega )\subset V(\Omega )^{\ast }. \end{equation*} Let us see the relations of the space of ultrafunctions $V_{\Lambda}(\Omega ) $ with the space of ``standard functions'' $V(\Omega )^{\sigma }$ (see (\ref{sigma1})) and the space of internal functions $V(\Omega )^{\ast }$ (see (\ref{star})). Given any vector space of functions $V(\Omega )$, the space of ultrafunctions generated by $V(\Omega )$ is a vector space of hyperfinite dimension that includes $V(\Omega )^{\sigma }$, and the ultrafunctions are $\Lambda $-limits of functions in $V_{\lambda }$.
Hence the ultrafunctions are particular internal functions \begin{equation*} u:\left( \mathbb{R}^{\ast }\right) ^{N}\rightarrow {\mathbb{C}^{\ast }.} \end{equation*} Since $V_{\Lambda}(\Omega )\subset \left[ L^{2}(\mathbb{R})\right] ^{\ast },$ it can be equipped with the following scalar product \begin{equation*} \left( u,v\right) =\int^{\ast }u(x)\overline{v(x)}\ dx, \end{equation*} where $\int^{\ast }$ is the natural extension of the Lebesgue integral considered as a functional \begin{equation*} \int :L^{1}(\Omega )\rightarrow {\mathbb{C}}. \end{equation*} Notice that the Euclidean structure of $V_{\Lambda}(\Omega )$ is the $\Lambda $-limit of the Euclidean structure of every $V_{\lambda }$ given by the usual $L^{2}$ scalar product. The norm of an ultrafunction is given by \begin{equation*} \left\Vert u\right\Vert =\left( \int^{\ast }|u(x)|^{2}\ dx\right) ^{\frac{1}{2}}. \end{equation*} \bigskip \subsection{Morse theory} Let $\mathfrak{M}$ be a finite dimensional Riemannian manifold and let \begin{equation*} J:\mathfrak{M}\rightarrow \mathbb{R} \end{equation*} be a functional of class $C^{2}.$ A point $u\in \mathfrak{M}$ is called a critical point of $J$ if $dJ(u)=0.$ A number $c\in \mathbb{R}$ is called a critical value of $J$ if there is a critical point $u\in \mathfrak{M}$ such that $J(u)=c$.
A critical point $u$ is called nondegenerate if the Hessian $H_{J}(u)$ is nonsingular, namely if \begin{equation*} \left[ \forall \varphi \in T_{u}\mathfrak{M},\ H_{J}(u)\left[ \psi ,\varphi \right] =0\right] \Rightarrow \psi =0. \end{equation*} If $a,b\in \mathbb{R}$, we set \begin{eqnarray*} J^{b} &=&\left\{ u\in \mathfrak{M}\ |\ \ J(u)\leq b\right\}, \\ J_{a}^{b} &=&J^{b}\backslash J^{a}=\left\{ u\in \mathfrak{M}\ |\ \ a<J(u)\leq b\right\}, \\ K_{a}^{b} &=&\left\{ u\in J_{a}^{b}\ |\ \ dJ(u)=0\right\}. \end{eqnarray*} The Morse index of a quadratic form $a\left[ \varphi \right] $ is the number of negative eigenvalues of any matrix representation of $a\left[\varphi \right] .$ The Morse index of a critical point $u,$ denoted by $m(u),$ is the Morse index of the Hessian quadratic form $H_{J}(u)\left[ \varphi \right] .$ If $u$ is a nondegenerate critical point, we define the \textbf{polynomial Morse index} of $u$ as follows: \begin{equation*} i_{t}(u)=t^{m(u)}. \end{equation*} We have introduced the notion of polynomial Morse index because it allows one to define the index of any isolated critical point, even a degenerate one; the definition is the following: \begin{equation*} i_{t}(u)=\sum_{k=0}^{N}\dim \left[ H^{k}(J^{c},J^{c}\backslash \left\{ u\right\} )\right] \ t^{k},\ \ \ c=J(u), \end{equation*} where $N$ is the dimension of the manifold $\mathfrak{M},$ $H^{k}(A,B)$ is the $k$-th Alexander-Spanier cohomology group of the couple $(A,B)$ with real coefficients, and $\dim\left[H^{k}(A,B)\right]$ denotes the dimension of $H^{k}(A,B)$ regarded as a real vector space. It is a well-known fact of Morse theory that, if $u$ is a nondegenerate critical point, the two definitions of $i_{t}(u)$ agree. We define the Morse polynomial of $J_{a}^{b}$ as follows: \begin{equation*} M_{t}(J_{a}^{b})=\sum_{u\in K_{a}^{b}}i_{t}(u). \end{equation*} Thus $M(t)$ is a polynomial with coefficients in $\mathbb{N}\cup \left\{ +\infty \right\} $.
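For a nondegenerate critical point, the Morse index can be computed in coordinates as the number of negative eigenvalues of the Hessian matrix. A minimal finite-dimensional sketch (the toy functions below are our own choices, unrelated to the functional spaces of this paper):

```python
import numpy as np

def morse_index(hessian):
    """Number of negative eigenvalues of a symmetric matrix."""
    return int(np.sum(np.linalg.eigvalsh(hessian) < 0))

# J(x, y) = x**2 - y**2 has a single nondegenerate critical point at 0,
# a saddle, so i_t(u) = t
H_saddle = np.array([[2.0, 0.0], [0.0, -2.0]])
assert morse_index(H_saddle) == 1

# J(x, y) = x**2 + y**2 has a nondegenerate minimum, so i_t(u) = t**0 = 1
H_min = np.array([[2.0, 0.0], [0.0, 2.0]])
assert morse_index(H_min) == 0

print(morse_index(H_saddle), morse_index(H_min))  # prints 1 0
```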
If all the critical points in $K_{a}^{b}$ are nondegenerate, $M(1)$ is the cardinality of $K_{a}^{b}$, namely the number of critical points of $J$ in $J_{a}^{b}$. If some critical point is degenerate, then $M(1)$ is the number of critical points counted with their multiplicity, where the multiplicity of a critical point $u$ is given by $i_{1}(u).$ The Betti (or Poincar\'{e}) polynomial of $J_{a}^{b}$ is a topological invariant defined as follows: \begin{equation*} P_{t}(J_{a}^{b})=\sum_{k=0}^{N}\dim \left[ H^{k}(J^{b},J^{a})\right] \ t^{k}; \end{equation*} $\dim \left[ H^{k}(J^{b},J^{a})\right] $ is called the $k$-th Betti number of $J_{a}^{b}.$ In the rest of the paper, we shall use the following important result of Morse theory. \bigskip \begin{theorem} \label{mara}Let us assume that: \begin{itemize} \item $\overline{J_{a}^{b}}$ is compact (or, more generally, $J$ satisfies (PS) in $\left[ a,b\right] $); \item $K_{a}^{b}$ is a finite set. \end{itemize} Then both $M_{t}(J_{a}^{b})$ and $P_{t}(J_{a}^{b})$ are finite and there exists a polynomial $Q$ with coefficients in $\mathbb{N}$ such that \begin{equation*} M_{t}(J_{a}^{b})=P_{t}(J_{a}^{b})+(1+t)Q(t). \end{equation*} \end{theorem} \bigskip \section{Morse theory for ultrafunctions\label{mtu}} \subsection{Basic results} Let $V\subset C^{1}(\Omega )$ be a Banach space and let \begin{equation*} J:V\rightarrow \mathbb{R} \end{equation*} be a functional of class $C^{2}$. In the applications, we will assume that $J$ has the following structure: \begin{equation} J\left( u\right) =\int F(x,u,\nabla u)\ dx. \label{J} \end{equation} As we emphasized in the introduction, the main difficulty for the development of Morse Theory in Banach spaces is to define the right concept of nondegeneracy and of Morse index for a critical point.
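Theorem \ref{mara} can be illustrated on a toy example: a perturbed height function on $S^{2}$ with one minimum, one saddle and two maxima, all nondegenerate. The sympy sketch below (an illustration only; the Betti numbers of $S^{2}$ are taken as known) checks that $M_{t}-P_{t}$ is divisible by $1+t$ with quotient having coefficients in $\mathbb{N}$:

```python
import sympy as sp

t = sp.symbols('t')

# Nondegenerate critical points of the toy function on S^2:
# one minimum (index 0), one saddle (index 1), two maxima (index 2)
morse_indices = [0, 1, 2, 2]
M = sum(t**m for m in morse_indices)   # Morse polynomial: 1 + t + 2t^2

# Betti polynomial of S^2 (Betti numbers 1, 0, 1)
P = 1 + t**2

# Morse relation: M - P = (1 + t) Q with Q having coefficients in N
Q, rem = sp.div(sp.expand(M - P), 1 + t, t)
assert rem == 0
assert all(c.is_integer and c >= 0 for c in sp.Poly(Q, t).all_coeffs())
print(sp.expand(M), Q)   # here Q = t
```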
We will be interested in Morse theory for the functional \begin{equation*} J_{\Lambda }:V_{\Lambda }\rightarrow \mathbb{R}^{\ast } \end{equation*} where $V_{\Lambda }$ is a space of ultrafunctions and $J_{\Lambda }$ is the restriction of $J^{\ast }$ to $V_{\Lambda }$. For example, a suitable space for the functional (\ref{J}) is $V_{\Lambda }(\Omega ):=[C^{2}(\Omega )\cap C_{0}^{1}(\overline{\Omega })]_{\Lambda }$. Now let us describe the main objects of Morse theory in the ultrafunctions framework. \begin{definition} An ultrafunction $u\in V_{\Lambda }$ is called a critical point of $J_{\Lambda }:V_{\Lambda }\rightarrow \mathbb{R}^{\ast }$ if \begin{equation*} \forall \varphi \in V_{\Lambda },\ dJ_{\Lambda }(u)\left[ \varphi \right] =0, \end{equation*} where $dJ$ is the differential of $J$. \end{definition} In particular, if $J$ is the functional (\ref{J}), we have that $u\in V_{\Lambda }=[C^{2}(\Omega )\cap C_{0}^{1}(\overline{\Omega })]_{\Lambda }$ is a critical point if \begin{equation*} \forall \varphi \in V_{\Lambda }(\Omega ),\ \int \left[ \frac{\partial F}{\partial \left( \nabla u\right) }\cdot \nabla \varphi +\frac{\partial F}{\partial u}\varphi \right] \ dx=0. \end{equation*} Here $\frac{\partial F}{\partial \left( \nabla u\right) }$ denotes the vector $\left( \frac{\partial F}{\partial u_{x_{1}}},\ldots ,\frac{\partial F}{\partial u_{x_{N}}}\right)$. The Hessian quadratic form $H_{J^{\ast }}(u)$ of $J^{\ast }$ is defined on $V^{\ast }\times V^{\ast }$; we will denote by $H_{J_{\Lambda }}(u)$ its restriction to $V_{\Lambda }\times V_{\Lambda }$. A critical point of $J_{\Lambda }$ is called nondegenerate if \begin{equation*} \left[ \forall \varphi \in V_{\Lambda },\ H_{J_{\Lambda }}(u)\left[ \psi ,\varphi \right] =0\right] \Rightarrow \psi =0. \end{equation*} Since $H_{J_{\Lambda }}(u)$ is a quadratic form defined on a hyperfinite space $V_{\Lambda }$, its Morse index is well defined, and hence the Morse index $m_{\Lambda }(u)$ of $u$ is also well defined.
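As a concrete model case (a standard choice, added here for illustration), take $F(x,u,\nabla u)=\frac{1}{2}\left\vert \nabla u\right\vert ^{2}-G(x,u)$ and set $g=\partial _{u}G$. Then $\frac{\partial F}{\partial \left( \nabla u\right) }=\nabla u$ and $\frac{\partial F}{\partial u}=-g(x,u)$, so the critical-point equation becomes the weak form of $-\Delta u=g(x,u)$ with $u=0$ on $\partial \Omega $:
\begin{equation*}
\forall \varphi \in V_{\Lambda }(\Omega ),\qquad
\int \left[ \nabla u\cdot \nabla \varphi -g(x,u)\varphi \right] \ dx=0,
\end{equation*}
and the Hessian quadratic form at $u$ is explicit:
\begin{equation*}
H_{J_{\Lambda }}(u)\left[ \psi ,\varphi \right]
=\int \left[ \nabla \psi \cdot \nabla \varphi -\partial _{u}g(x,u)\,\psi \varphi \right] \ dx.
\end{equation*}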
Given two hyperreal numbers $a<b$, we set \begin{align*} J_{\Lambda }^{b} &=\left\{ u\in V_{\Lambda }\ |\ \ J_{\Lambda }(u)\leq b\right\} \\ \lbrack J_{a}^{b}]_{\Lambda } &=J_{\Lambda }^{b}\backslash J_{\Lambda }^{a}=\left\{ u\in V_{\Lambda }\ |\ \ a<J_{\Lambda }(u)\leq b\right\} \\ \lbrack K_{a}^{b}]_{\Lambda } &=\left\{ u\in \lbrack J_{a}^{b}]_{\Lambda }\ |\ \ dJ_{\Lambda }(u)=0\right\} \end{align*} Next we must define the Morse index, the Morse polynomial, and the Betti polynomial in the framework of ultrafunctions. We could define them intrinsically, as we have done for the above notions; however, it seems easier to define them by means of a $\Lambda $-limit. We set \begin{equation*} M_{t}([J_{a}^{b}]_{\Lambda })=\lim_{\lambda \uparrow \Lambda }\ M_{t}(J_{a_{\lambda }}^{b_{\lambda }}\cap V_{\lambda }) \end{equation*} where $a_{\lambda }$ and $b_{\lambda }$ are two real nets such that \begin{equation} \lim_{\lambda \uparrow \Lambda }\ a_{\lambda }=a;\ \ \lim_{\lambda \uparrow \Lambda }\ b_{\lambda }=b. \label{lin} \end{equation} Analogously, we define the ``generalized'' Betti polynomial as follows: \begin{equation*} P_{t}([J_{a}^{b}]_{\Lambda })=\lim_{\lambda \uparrow \Lambda }\ P_{t}(J_{a_{\lambda }}^{b_{\lambda }}\cap V_{\lambda }). \end{equation*} Now it is possible to state an abstract theorem for Morse theory in the framework of ultrafunctions: \begin{theorem} \label{cecilia}Let \begin{equation*} J:V\rightarrow \mathbb{R} \end{equation*} be a $C^{2}$-functional and let \begin{equation*} J_{\Lambda }:V_{\Lambda }\rightarrow \mathbb{R}^{\ast } \end{equation*} be the restriction of $J^{\ast }$ to $V_{\Lambda }$. Let $a,b\in \mathbb{R}^{\ast }$ satisfy (\ref{lin}) and assume that \begin{itemize} \item for almost every $\lambda \in \mathfrak{L}$, $\overline{J_{a_{\lambda }}^{b_{\lambda }}}$ is compact (or, more generally, $J$ satisfies (PS) in $\left[ a_{\lambda },b_{\lambda }\right]$), \item for almost every $\lambda \in \mathfrak{L}$, $K_{a_{\lambda }}^{b_{\lambda }}$ is finite.
\end{itemize} Then $M_{t}([J_{a}^{b}]_{\Lambda }),P_{t}([J_{a}^{b}]_{\Lambda })\in \mathfrak{pol}(\mathbb{N})^{\ast }$, where \begin{equation*} \mathfrak{pol}(\mathbb{N})=\left\{ \text{polynomials with coefficients in }\mathbb{N}\right\} , \end{equation*} and there exists a polynomial $Q\in \mathfrak{pol}(\mathbb{N})^{\ast }$ such that \begin{equation*} M_{t}([J_{a}^{b}]_{\Lambda })=P_{t}([J_{a}^{b}]_{\Lambda })+(1+t)Q(t). \end{equation*} \end{theorem} \textbf{Proof - }For almost every $\lambda \in \mathfrak{L}$, $\overline{J_{a_{\lambda }}^{b_{\lambda }}}$ is compact and $K_{a_{\lambda }}^{b_{\lambda }}$ is finite; then, by Th.\ \ref{mara}, $M_{t}(J_{a_{\lambda }}^{b_{\lambda }})$ and $P_{t}(J_{a_{\lambda }}^{b_{\lambda }})\in \mathfrak{pol}(\mathbb{N})$ and there exists a polynomial $Q_{\lambda }\in \mathfrak{pol}(\mathbb{N})$ such that \begin{equation*} M_{t}(J_{a_{\lambda }}^{b_{\lambda }})=P_{t}(J_{a_{\lambda }}^{b_{\lambda }})+(1+t)Q_{\lambda }(t). \end{equation*} The conclusion follows by taking the $\Lambda $-limit. $\square $ \bigskip \subsection{Ultrafunctions versus Sobolev spaces} \bigskip Usually, the critical points of functionals of type (\ref{J}) are studied in the Sobolev space $W_{0}^{1,p}(\Omega )$, provided that the functional $J$ can be extended to $W_{0}^{1,p}(\Omega )$ as a $C^{1}$ functional. In this section, we will investigate some relations between the ultrafunctions approach and the Sobolev space approach. So we will assume that $J$ can be extended to a $C^{1}$-functional on a Banach space $W\subset L^{1}(\Omega )$ (with some abuse of notation, we will denote this extension by the same letter $J$): \begin{equation*} J:W\rightarrow \mathbb{R}.
\end{equation*} So, we have that \begin{equation*} V^{\sigma }\subset W^{\sigma }\subset V_{\Lambda }. \end{equation*} In the following, to simplify the notation, we will identify $V^{\sigma }$ with $V$, as well as $W^{\sigma }$ with $W$. \bigskip The next theorems establish some relations between the critical points of $J_{\Lambda }$ in $V_{\Lambda }$ and the critical points of $J$ in $W$. The first result in this direction is (almost) trivial: \begin{theorem} Under the same framework and the same assumptions as Th.\ \ref{cecilia}, every critical point of $J$ in $W$ is a critical point of $J_{\Lambda }$ in $V_{\Lambda }$. \end{theorem} \textbf{Proof: }Let $u\in W$ be a critical point of $J$; we will use the fact that $V(\Omega )^{\sigma }\subset V_{\Lambda }(\Omega )$ to prove the thesis. Let $u_{\lambda }$ be the constant net $u_{\lambda }=u$; then \begin{equation*} \lim_{\lambda \uparrow \Lambda }u_{\lambda }=u^{\ast }\in V(\Omega )^{\sigma }\subset V_{\Lambda }(\Omega ), \end{equation*} and let $J_{\lambda }$ be the constant net $J_{\lambda }=J$; then \begin{equation*} dJ_{\lambda }(u_{\lambda })[\phi _{\lambda }]=0 \end{equation*} for every $\phi _{\lambda }\in V_{\lambda }(\Omega )$; therefore, taking the $\Lambda $-limit of a constant net, we have the thesis. $\square $ The above theorem cannot be inverted, in the sense that it is false that every critical point of $J_{\Lambda }$ is a critical point of $J$ in $W$. However, there are conditions which ensure the existence of critical points of $J$ in $W$. More precisely, the next theorem states that, under suitable conditions, ``infinitely close'' to any critical point of $J_{\Lambda }$ there is a critical point of $J$. This theorem exploits a compactness condition which is a variant of the usual Palais-Smale condition (PS). We recall that the Palais-Smale condition is a basic tool for Morse theory on infinite-dimensional manifolds (see, e.g., \cite{Chang93}).
Here it is used only to relate some critical points of $J_{\Lambda }$ to the critical points of $J$. \begin{definition} \label{PSU}\textbf{Palais-Smale condition for ultrafunctions (PSU).} We say that the functional \begin{equation*} J:W\rightarrow \mathbb{R} \end{equation*} satisfies (PSU) in the interval $\left[ a,b\right] \subset \mathbb{R}$ if, for every net $\left\{ u_{\lambda }\right\} _{\lambda \in \mathfrak{L}}$ such that \begin{itemize} \item (A) $\forall \lambda \in \mathfrak{L},\ J(u_{\lambda })\in \left[ a,b\right]$, \item (B) $\forall \lambda \in \mathfrak{L},\ \forall v\in V_{\lambda },\ dJ(u_{\lambda })\left[ v\right] =0$, \end{itemize} \noindent there is a subnet $\left\{ u_{\lambda }\right\} _{\lambda \in \mathfrak{D}}$ ($\mathfrak{D}\subset \mathfrak{L}$) converging in the topology of $W$, so that \begin{equation*} \lim_{\lambda \uparrow \Lambda }\ u_{\lambda }\in W. \end{equation*} \end{definition} \bigskip \begin{remark} Notice that, by Prop.\ \ref{nino}, the net $\left\{ u_{\lambda }\right\} _{\lambda \in \mathfrak{L}}$ itself is converging. \end{remark} \begin{theorem} \label{A}Let us assume that $W$ is a Banach space and that $V\subset W\subset V_{\Lambda }$. Let \begin{equation*} J:W\rightarrow \mathbb{R} \end{equation*} be a $C^{1}$-functional which satisfies (PSU) in the interval $\left[ a,b\right]$. Then, if $\bar{u}$ is a critical point of \begin{equation*} J_{\Lambda }:V_{\Lambda }\rightarrow \mathbb{R}^{\ast } \end{equation*} with $J_{\Lambda }\left( \bar{u}\right) \in \left[ a,b\right] ^{\ast }$, there exists $w\in K_{a}^{b}$ such that \begin{equation*} \left\Vert \bar{u}-w^{\ast }\right\Vert _{W^{\ast }}\sim 0. \end{equation*} \end{theorem} \begin{remark} Notice that, in the above theorem, it is possible that $\bar{u}=w^{\ast }$. Obviously, this always occurs if $W$ is a Hilbert space and all the critical points of $J$ with critical values in $\left[ a,b\right]$ are nondegenerate. \end{remark} \textbf{Proof of Th.\ \ref{A}.
} Let \begin{equation*} \bar{u}=\lim_{\lambda \uparrow \Lambda }\ u_{\lambda }. \end{equation*} Then, since (PSU) holds, there is a function $w\in W$ and a subnet of $u_{\lambda }$ such that \begin{equation*} \left\Vert u_{\lambda }-w\right\Vert _{W}\rightarrow 0. \end{equation*} By Proposition \ref{nino}, $\left\Vert u_{\lambda }-w\right\Vert _{W}$ is a converging net, and hence, for every $\varepsilon >0$, there exists $Q\in \mathcal{U}$ such that, $\forall \lambda \in Q$, \begin{equation*} \left\Vert u_{\lambda }-w\right\Vert _{W}\leq \varepsilon . \end{equation*} Taking the $\Lambda $-limit of the above inequality, we get that \begin{equation*} \left\Vert \bar{u}-w^{\ast }\right\Vert _{W^{\ast }}\leq \varepsilon . \end{equation*} By the arbitrariness of $\varepsilon $, we conclude that \begin{equation*} \left\Vert \bar{u}-w^{\ast }\right\Vert _{W^{\ast }}\sim 0. \end{equation*} $\square $
\section{Introduction} \label{sec:intro} Being relevant to a wide range of practical scenarios, the behavior of colloid suspensions near solid surfaces has been thoroughly studied over the years. This research effort consists of several bodies of work, for each of which we can give only a few representative references. The first category of papers concerns the disruption of the structural isotropy of a three-dimensional (3D) fluid suspension by the surface, e.g., the formation of a layered structure decaying away from the surface under equilibrium \cite{VanWinkle1988,GonzalezMozuelos1991} and nonequilibrium \cite{Zurita_Gotor-Blawzdziewicz-Wajnryb:2012} conditions. Another category addresses the effect of the anisotropic geometry on particle dynamics near a single planar surface\,---\,for isolated particles \cite{Happel,Perkins1992,Cichocki-Jones:1998,Walz-Suresh:1995,Prieve1999,Prieve2000,CarbajalTinoco2007,Blawzdziewicz2010}, particle pairs \cite{Happel,Dufresne2000,Cichocki2007,Zurita_Gotor-Blawzdziewicz-Wajnryb:2007b}, and a 3D suspension adjacent to a surface \cite{Anekal2006,Michailidou2009,Cichocki-Wajnryb-Blawzdziewicz-Dhont-Lang:2010,Dhont2012,Michailidou2013}. Regarding quasi-two-dimensional (quasi-2D) layers of particles, most studies have considered the confinement of suspensions between two rigid surfaces. This research addressed the structural properties of such confined suspensions \cite{CarbajalTinoco1996,Schmidt1997,Zangi2000,Frydel2003,Han2008}, and the dynamics of single particles \cite{Lin2000,Dufresne2001,Ekiel_Jezewska-Wajnryb-Blawzdziewicz-Feuillebois:2008}, particle pairs \cite{Cui2004,Bhattacharya2005b}, and concentrated quasi-2D suspensions \cite{Diamant2005A,Blawzdziewicz-Wajnryb:2012,Baron-Blawzdziewicz-Wajnryb:2008}. Another type of quasi-2D suspension has also been studied, in which a particle layer is confined to a fluid interface \cite{Lin1995,Cichocki2004,Peng2009,Zhang2014}.
In cases where the surface attracts the particles and the suspension is sufficiently dilute, the system can contain a single layer of surface-associated particles in contact with a practically particle-free solvent \cite{GonzalezMozuelos1991}. A single layer can also form as a result of gravitational settling of particles toward a horizontal wall. This scenario is studied in the present work. Sedimented colloidal particles undergo random Brownian displacements, which result in diffusive broadening of the fluctuating particle layer. The width of the particle height distribution above the bottom surface is characterized by the sedimentation length $l$, i.e., the height at which the gravitational energy of a particle equals its thermal energy. The dynamics and height distribution of individual sedimented particles above the bottom surface were studied in Refs.\ \citenum{Walz-Suresh:1995,Prieve1999,Prieve2000} using total internal reflection microscopy. Particle monolayers at higher densities were investigated experimentally for a system in which the sedimentation length is much smaller than the particle diameter \cite{Skinner2010}. It was shown that at high area fractions the suspension can assemble into quasi-2D colloidal crystals, but formation of a nonuniform vertical microstructure was not observed, because of the small sedimentation length. Here we are interested in the structure and dynamics of a surface-associated layer for which the sedimentation length is comparable to the particle diameter. We focus on the effects of the suspension concentration on the statistical height distribution of particles and their diffusion coefficient. Unlike quasi-2D suspensions confined between two surfaces or adsorbed at a fluid interface (which restrict particle configurations and motions in two directions), in the present system no constraints are imposed on the distance between the particles and the single wall. 
Thus, at sufficiently high area fractions, particles form a nontrivial stratified microstructure. This microstructure and its effect on particle dynamics are analyzed in our paper. The article is organized as follows. Section~\ref{sec:exp_met} describes the experimental methods used to prepare the system, image the particles, and analyze the extracted data. In Sec.\ \ref{sec:num_met} we describe the theoretical background and numerical methods used to perform the simulations. In Sec.\ \ref{sec:structure} we present the results concerning the equilibrium structure of the quasi-2D suspension observed in planes parallel to the bottom surface (the quasi-2D radial distribution) and in the direction perpendicular to it (the height distribution). Section~\ref{sec:dynamics} addresses the diffusion of particles parallel to the surface, as affected by the surface proximity. We discuss our findings in Sec.\ \ref{sec:discuss}. \section{Experimental methods} \label{sec:exp_met} \subsection{Quasi-2D system of sedimented Brownian spheres} \label{Quasi-2D system of sedimented Brownian spheres} Quasi-2D colloidal layers are created by placing a suspension of colloidal silica spheres in a glass sample cell $\sim 150\,\mu$m high. The particles are then allowed to sediment and equilibrate for 30 minutes at a temperature of approximately 24$^\circ$C before measurements start (Fig.~\ref{fig:setup}). We use green fluorescent monodisperse, negatively charged silica particles (Kisker Biotech, PSI-G1.5 Lot \#GK0090642T) with diameter $d=1.50\pm0.15\,\mu$m, and mass density $\massDensity=2.0$ g/cm$^3$. Monolayers of area fraction $0<\phi\le0.62$ are prepared by diluting the original suspension with double distilled water (DDW, 18~M$\Omega$), without and with the addition of salt at a concentration $\KCl=0.01\Mol$. The sample walls are cleaned and slightly charged by plasma etching to avoid particle attachment to the bottom wall of the cell. 
We observe that the aqueous medium above the colloidal monolayer is free of colloids. Since the particles are floating right above the bottom wall, we can treat the upper wall as a distant boundary. \begin{figure} \centering \includegraphics[clip=true,trim=0 0 0 0,scale=0.4]{setup1} \caption{(a) Images of fluorescent $1.5~\mu$m-diameter silica spheres suspended in water, taken after the particles sedimented to create a quasi-2D suspension at area fraction $\phi=0.49$. Large (small) image corresponds to a typical image of the first (second) layer. Scale bar = $5~\mu$m. (b) Schematic view of the system and its parameters. } \label{fig:setup} \end{figure} \subsection{Imaging techniques} Particle position and motion in the $x$--$y$ plane, perpendicular to the optical axis, are observed using epifluorescence microscopy (Olympus IX71). Images are captured at a rate of $70$~fps by a CMOS camera (Gazelle, Point Grey Research). We use in-line holographic microscopy to image the dynamics of particles in three dimensions in dilute samples \cite{Kapfenberger2013}. This imaging technique uses a collimated coherent light source (DPSS, Coherent, $\lambda=532$~nm) to illuminate a sample mounted on a microscope. The light scattered from the sample interferes with the light passing through it, to form a hologram in the image plane. We reconstruct the light field passing through the sample by Rayleigh--Sommerfeld back-propagation and extract from it the particle location in three dimensions \cite{Lee07a,Cheong2010b,Kapfenberger2013}. For holographic imaging measurements we use non-fluorescent silica particles with the same diameter ($d=1.50~\pm~0.08~\mu$m, Polysciences Inc.). Additional details of the setup and measurement methods can be found elsewhere \cite{Kapfenberger2013}. \begin{figure} \centering \includegraphics[clip=true,scale=0.5]{figure2ja} \caption{Holographic imaging. 
Height probability distribution of a single sphere (in salt-free water) shifted to the maximum value and fitted to the Boltzmann probability distribution \eqref{low-density probability distribution} with the particle--wall potential \eqref{Eq:boltzmann}.} \label{fig:boltzman} \end{figure} We use confocal imaging to monitor particle positions in a dense layer in three dimensions. Our spinning disc confocal imaging system (Andor, Revolution XD) includes a Yokogawa (CSU-X1) spinning disc, and an Andor (iXon 897) EM-CCD camera. An objective lens (Olympus, $\times$60, NA=1.1, water immersion) mounted on a piezoelectric scanner (Physik Instrumente, Pifoc P-721.LLQ) is used to scan the sample along the $z$ axis, with a step size of 100 nm. \subsection{Height calibration} \label{Height calibration} A suspended tracer particle is subject to electrostatic and gravitational forces in addition to thermal fluctuations, affecting its height distribution \cite{Prieve1999}. The particle potential energy can be described as \begin{equation} U = mgz+B\mathrm{e}^{-(z-d/2)/\lambda}, \label{Eq:boltzmann} \end{equation} where $z$ is the vertical position of the tracer, $g$ is the gravitational acceleration, \begin{equation} \label{m} m=\textstyle\frac{\pi}{6}\Delta\massDensity d^3 \end{equation} is the buoyant mass of the tracer ($\Delta \massDensity$ is the mass density difference between silica and water), $\lambda$ is the Debye screening length, and the amplitude $B$ depends on $\lambda$ and the surface charges of both particle and glass surfaces. The corresponding probability distribution of the particle height $z$ is \begin{equation} \label{low-density probability distribution} \particleDistribution(z)=\particleDistributionNorm\mathrm{e}^{-U(z)/k_{\rm B}T}, \end{equation} where $k_{\rm B}T$ is the thermal energy and $\particleDistributionNorm$ is the normalization constant. 
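As a numerical illustration of Eqs.\ \eqref{Eq:boltzmann} and \eqref{low-density probability distribution}, the normalized height distribution can be tabulated directly; a minimal sketch (the values of $B$ and $\lambda$ used here are illustrative assumptions, not the fitted parameters):

```python
import numpy as np

# Illustrative parameters in SI units; B and lam are assumptions,
# not the fitted experimental values
kT = 4.11e-21            # thermal energy at ~24 C [J]
d = 1.5e-6               # particle diameter [m]
drho = 1.0e3             # silica-water density difference [kg/m^3]
g = 9.81                 # gravitational acceleration [m/s^2]
m = np.pi / 6 * drho * d**3     # buoyant mass, Eq. (m)
B = 10 * kT              # electrostatic amplitude (assumed)
lam = 60e-9              # Debye screening length (assumed)

def U(z):
    """Particle potential energy, Eq. (Eq:boltzmann)."""
    return m * g * z + B * np.exp(-(z - d / 2) / lam)

def trapz(y, x):
    """Trapezoidal rule on a 1D grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Normalized Boltzmann height distribution rho(z) = A exp(-U/kT)
z = np.linspace(d / 2, d / 2 + 2e-6, 4000)
w = np.exp(-(U(z) - U(z).min()) / kT)   # shift U for numerical safety
rho = w / trapz(w, z)
```

With these assumed parameters the peak of $\rho(z)$ sits roughly $0.2\,\mu$m above contact, and the large-$z$ tail decays exponentially with the sedimentation length introduced next.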
The height distribution of a single particle above the sample's bottom was obtained from very dilute suspensions, using in-line holographic imaging \cite{Lee07a,Cheong2010b,Kapfenberger2013} (see Fig.~\ref{fig:boltzman}). Our holographic measurements provide values of relative particle positions, but not the absolute particle heights with respect to the bottom wall. We thus set the peak position to $z=0$ and focus on the height relative to this reference plane. The exponential decay on the right side of the probability-density peak is governed by a decay length, \begin{equation} l=\frac{k_{\rm B}T}{mg}\label{ldef} \end{equation} (the sedimentation length), resulting from the competition between gravity and thermal forces. The exponential-decay length determined from the holographic measurements agrees well with the calculated sedimentation length \eqref{ldef}, without any fitting parameters (see Fig.~\ref{fig:boltzman}). The electrostatic term of the probability density, which controls the steep rise of the probability, affects mostly the peak position rather than its shape. Since we shifted the peak position to $z=0$, the fitting of the entire probability density using Eqs.\ \eqref{Eq:boltzmann} and \eqref{low-density probability distribution} was insensitive to the value of $B$. Reasonable fits were obtained for $\lambda$ in the range $\lambda \sim 40$--$70$~nm. Better estimates of $B$ and $\lambda$ are given in Sec.~\ref{Mean particle height at low area fractions}, using mobility measurements. \begin{figure} \centering \includegraphics[clip=true,scale=0.5]{figure3ja} \caption{(a) Confocal imaging. Height probability distribution of silica particles at $\phi<0.003$ with diameters of $d_0=1.5~\mu$m (blue) and $d_0=1.0~\mu$m (black). Inset: Logarithm of the probability distributions scaled by $d_0^3$ in units of $10^2~\mu$m$^{-3}$; as expected, the two curves have approximately the same slope, which is used to calibrate the confocal height measurements. 
(b) Height probability distribution of silica particles with diameter $d_0=1.5~\mu$m extracted from holographic imaging (green circles, see Fig.~\ref{fig:boltzman}) and confocal imaging (blue line). The suspensions in both figures were with no added salt, $\KCl=0 \Mol$. } \label{fig:pvsz} \end{figure} The applicability of the holographic imaging is limited to low-density suspensions, whereas the confocal imaging can also be used at higher concentrations. On the other hand, confocal height measurements suffer from spherical aberrations due to multiple changes in refractive index in the imaging path. This leads to a systematic error in measuring $z$, which can be eliminated by proper calibration. We calibrate the confocal measurement of the relative vertical particle positions by requiring the exponential decay of the height distribution to agree with the known, and verified, value of $l$. In Fig.~\ref{fig:pvsz}\subfig{a} we show the particle-height distribution $\particleDistribution(z)$ at $\phi<0.003$ for two different particle sizes ($d_0=1.0,1.5~\mu$m). The distributions are shifted so that the highest probability is located at $z=0$. Scaling the logarithm of the distributions by $d_0^3$ [inset of Fig.~\ref{fig:pvsz}\subfig{a}] shows that the normalized decay constants for the two particle sizes have approximately the same value, from which we calibrate the confocal microscope's height measurements. In Fig.~\ref{fig:pvsz}\subfig{b}, the height distributions extracted by the two methods (holographic and confocal imaging) are overlaid. This figure emphasizes the higher accuracy of holographic imaging over confocal imaging, especially around $z=0$, where the rise of the distribution should be very steep \cite{Prieve1999, Kapfenberger2013}. 
The difference between the curves can also be attributed to polydispersity, since the holographic imaging is a single-particle measurement while the confocal imaging is a multiple-particle measurement, and its corresponding curve represents an average over $\sim$40 particles. \section{Numerical methods} \label{sec:num_met} \subsection{The system} \label{The system - numerical} \subsubsection{Particles and their interactions} \label{Particles and their interactions} Silica particles are modeled as Brownian hard spheres with or without electrostatic repulsion (depending on the salt concentration), immersed in a fluid of viscosity $\eta$. The bottom wall is treated as an infinite hard planar surface. Creeping-flow conditions and no-slip boundary conditions at the particle surfaces and at the wall are assumed. In a salt solution with $\KCl=0.01\Mol$, the Debye length is only about 5 nm, and therefore electrostatic interactions are screened out. The particles thus interact only via infinite hard-core particle--particle and particle--wall potentials and the gravity potential $mgz$, and no other potential forces are involved. The strength of the gravity force is described by the sedimentation length \eqref{ldef}. In addition to the hard-core repulsion, in DDW with no added salt ($\KCl=0\Mol$) particles are assumed to also interact via particle--wall and particle--particle Debye--H\"uckel potentials, \begin{equation} V(z)=Be^{-(z-d/2)/\lambda},\label{V} \end{equation} and \begin{equation} V'(r)=B'e^{-(r-d)/\lambda},\label{V'} \end{equation} where $\lambda$ is the Debye screening length, $B$ and $B'$ are the potential amplitudes, and $r$ is the distance between the particle centers. The consideration of Debye--H\"uckel potentials in the salt-free case is based on our experimental measurement $\lambda \sim 60$ nm. A finite Debye screening length in DDW stems from the presence of residual ions in the solution \cite{Behrens2001}. 
\subsubsection{Suspension polydispersity} \label{Suspension polydispersity} To determine the effects of the suspension polydispersity on the near-wall microstructure and dynamics, we have performed numerical simulations for a hard-sphere (HS) system with a Gaussian distribution of particle diameters, \begin{equation} \label{Particle size distribution} p(d)= \frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left[-\frac{(d-\daver)^2}{2\sigma^2}\right], \end{equation} where $d$ and $\daver$ are the actual and average particle diameters, and $\sigma$ is the standard deviation. All the particles have the same mass density $\massDensity$; hence, particles of different sizes have different buoyant masses and different sedimentation lengths \eqref{ldef}. The dimensionless sedimentation length based on the average particle diameter $\daver$ is defined as \begin{equation} \label{dimensionless sedimentation constant} \frac{l_0}{\daver}=\frac{k_{\rm B}T}{\maver g\daver}, \end{equation} where \begin{equation} \maver=\frac{\pi}{6} \daver^3\,\Delta\massDensity. \end{equation} The area fraction $\phi$ based on the average particle diameter $\daver$ is \begin{equation} \phi=\textstyle\frac{1}{4}\pi n\daver^2,\label{phi} \end{equation} where $n$ is the number of particles per unit area. Since the particles are free to move in the $z$ direction, the area fraction $\phi$ can exceed 1. \subsubsection{System parameters} The simulations were carried out for the following system parameters: For the dimensionless sedimentation length \eqref{dimensionless sedimentation constant} we use the value \begin{equation} \label{value of dimensionless sedimentation constant} \frac{l_0}{\daver}=0.158, \end{equation} calculated from the particle size and density. 
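The number in Eq.\ \eqref{value of dimensionless sedimentation constant} follows directly from the quoted particle size and density; a quick check (room temperature taken as the stated $\approx 24^\circ$C):

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant [J/K]
T = 297.15               # approximately 24 C [K]
g = 9.81                 # gravitational acceleration [m/s^2]
d0 = 1.5e-6              # average particle diameter [m]
drho = 1.0e3             # silica-water mass density difference [kg/m^3]

m0 = np.pi / 6 * drho * d0**3        # average buoyant mass
l0_over_d0 = kB * T / (m0 * g * d0)  # dimensionless sedimentation length
print(round(l0_over_d0, 3))          # -> 0.158
```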
Based on the comparison between the calculated and measured values of the equilibrium average of the lateral self-diffusion coefficient for isolated particles in DDW, we estimate that the Debye length and the amplitude of particle--wall electrostatic repulsion are \begin{equation} \label{Debye parameters} \lambda/d=0.03,\qquad \frac{B}{k_{\rm B}T}=10. \end{equation} These values are used for salt-free suspensions at all suspension concentrations. Assuming that the charge densities of the particle and wall surfaces are similar, we take \begin{equation} \label{Debye B particle} B'=B/2, \end{equation} for the interparticle-potential amplitude, as follows from the Derjaguin approximation \cite{Israelachvili}. The simulations were performed in the range of area fractions $\phi\le1.2$. For polydisperse HS systems the calculations were carried out for $\sigma/d_0=0.10$, 0.15, 0.20, and 0.25 (we estimate that $0.10 < \sigma/d_0 < 0.15$ for the silica particles used in the experiments). For particles interacting via the Debye--H\"uckel potentials \eqref{V} and \eqref{V'} only monodisperse suspensions were considered. \subsection{Evaluation of the equilibrium distribution} \label{Equilibrium distributions} \subsubsection{Low density limit} \label{Structure - low density limit} For monodisperse suspensions at low particle concentrations, the equilibrium particle distribution $\particleDistribution(z)$ is given by the normalized Boltzmann factor \eqref{low-density probability distribution}. 
To determine the particle distribution for a dilute polydisperse suspension, the particle-size-dependent Boltzmann factor for individual particles, $\particleDistribution_1(z;d)$, is convolved with the particle-size distribution \eqref{Particle size distribution}, \begin{equation} \label{dilute polydisperse equilibrium distribution} \particleDistribution(z)=\int_0^\infty \textrm{d}d\, p(d)\particleDistribution_1(z;d). \end{equation} For a HS system \begin{equation} \label{individual Boltzmann factor for hard spheres} \particleDistribution_1(z;d)=l^{-1} \textrm{e}^{-(z-d/2)/l} \theta(z-d/2), \end{equation} according to Eqs.\ \eqref{Eq:boltzmann}--\eqref{ldef}, where $\theta(x)$ is the Heaviside step function, and the sedimentation length $l$ is particle-size dependent due to the variation of particle mass. \subsubsection{Monte Carlo simulations} \label{Monte-Carlo simulations} To determine the equilibrium microstructure of a sedimented suspension at finite particle area fractions, equilibrium Monte Carlo (MC) simulations were performed for 2D-periodic arrays of spherical particles in 3D space (with periodicity in the horizontal directions $x$ and $y$ and the box size $L$). The particles interact via infinite hard-core repulsion and the pair-additive potential \begin{equation} U(\boldsymbol{X})=\sum_{i=1}^{N}m_{i}gz_{i}+ \sum_{i=1}^{N}V(z_{i})+\frac{1}{2}\sum_{i=1}^{N}\sum_{j\ne i}^{N}V^{\prime }(r_{ij}), \end{equation} which includes the gravity term and particle--wall and particle--particle screened electrostatic potentials \eqref{V} and \eqref{V'}. Here $\boldsymbol{X}=(\mathbf{r}_{1},\ldots ,\mathbf{r}_{N})$ is the particle configuration (with $\mathbf{r}_i$ denoting the position of particle $i$), $z_i$ is the vertical coordinate of particle $i$, and $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$ is the relative particle distance. 
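Returning to the dilute limit, the size average in Eq.\ \eqref{dilute polydisperse equilibrium distribution} with the hard-sphere Boltzmann factor \eqref{individual Boltzmann factor for hard spheres} can be evaluated by direct quadrature; a sketch (the polydispersity value is an illustrative assumption within the quoted range):

```python
import numpy as np

kB, T, g = 1.380649e-23, 297.15, 9.81    # SI units
drho = 1.0e3                              # silica-water density difference
d0 = 1.5e-6                               # average diameter [m]
sigma = 0.10 * d0                         # polydispersity (assumed)

def rho1(z, d):
    """Hard-sphere Boltzmann factor, Eq. (individual Boltzmann factor for
    hard spheres), with size-dependent sedimentation length l = kT/(m g)."""
    l = kB * T / (np.pi / 6 * drho * d**3 * g)
    return np.where(z > d / 2,
                    np.exp(-np.clip(z - d / 2, 0.0, None) / l) / l, 0.0)

# Gaussian size distribution p(d), truncated at +-4 sigma
dg = np.linspace(d0 - 4 * sigma, d0 + 4 * sigma, 401)
p = np.exp(-(dg - d0)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# rho(z) = integral dd p(d) rho1(z; d), evaluated as a Riemann sum
z = np.linspace(0.0, 5e-6, 1200)
rho = (p[:, None] * rho1(z[None, :], dg[:, None])).sum(axis=0) * (dg[1] - dg[0])
```

Because smaller particles have larger sedimentation lengths, the size average broadens the tail of $\rho(z)$ relative to the monodisperse case.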
A purely HS system with $V=V'=0$ was modeled for monodisperse particles and for polydisperse particles with the Gaussian size distribution \eqref{Particle size distribution}. For systems with nonzero electrostatic repulsion only monodisperse particles were considered. The initial configuration was prepared by placing $N=400$ particles randomly in a vertical cuboid box with a square base of side $L$ and height $10L$. The size $L$ of the 2D-periodic cell was determined to obtain the required area fraction $\phi$ of the sedimented particle layer. The suspension was allowed to sediment by following the MC random-walk dynamics in the configurational space $\boldsymbol{X}$ \cite{Frenkel-Smit:2002} (as described below). After the equilibrium state was reached, suspension properties were obtained by averaging the quantities of interest over at least 200 independent configurations. Our adaptive simulation procedure was performed by repeating the MC steps defined as follows: \begin{itemize} \item[\subfig{a}] A randomly selected particle $i$ is given a small random displacement, $\boldsymbol{r} _{i}\rightarrow \boldsymbol{r}_{i}^{\prime }=\boldsymbol{r}_{i}+\boldsymbol{ \Delta }$, where $\boldsymbol{\Delta }$ is chosen from a 3D Gaussian distribution with the standard deviation adaptively adjusted to the current mean gap between particles. This displacement results in the change of the configuration from $\boldsymbol{X}$ to $\boldsymbol{X}^{\prime }$. \item[\subfig{b}] According to the Metropolis detailed balance condition, the new configuration is accepted with the probability \begin{equation} \min \left( 1,\exp \left\{ -\left[ U(\boldsymbol{X}^{\prime }) -U(\boldsymbol{X})\right] /k_{\rm B}T\right\} \right), \end{equation} provided that there is no particle--particle or particle--wall overlap. \end{itemize} To let the system reach an equilibrium state $\boldsymbol{X}_{1}$, MC steps \subfig{a} and \subfig{b} are repeated $10^{5}N$ times. 
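Steps \subfig{a} and \subfig{b} can be illustrated by a stripped-down sketch (reduced units, gravity and hard cores only, a fixed rather than adaptive step size, and far fewer particles than the production runs; all parameter values here are illustrative, not those of the actual simulations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Reduced units: lengths in particle diameters, energies in kT.
# l/d = 0.158 as in the text; N and L are small illustrative values.
l_over_d = 0.158
N, L = 40, 10.0

def has_overlap(X, i):
    """Hard-core overlap of particle i with the wall or another particle."""
    if X[i, 2] < 0.5:                  # wall at z = 0, particle radius 1/2
        return True
    dxy = X[:, :2] - X[i, :2]
    dxy -= L * np.round(dxy / L)       # minimum image in the periodic x-y plane
    r2 = (dxy**2).sum(axis=1) + (X[:, 2] - X[i, 2])**2
    r2[i] = np.inf
    return bool((r2 < 1.0).any())

def mc_sweep(X, step=0.1):
    """N Metropolis trial moves; gravitational energy U/kT = sum_i z_i / l."""
    for _ in range(N):
        i = rng.integers(N)
        old = X[i].copy()
        X[i] += rng.normal(0.0, step, 3)   # step (a): random displacement
        X[i, :2] %= L
        dU = (X[i, 2] - old[2]) / l_over_d
        # step (b): Metropolis acceptance, rejecting overlapping moves
        if has_overlap(X, i) or rng.random() >= np.exp(-dU):
            X[i] = old
    return X

# Sediment a dilute column of particles and equilibrate
X = np.column_stack([rng.random((N, 2)) * L, 0.5 + 8.0 * rng.random(N)])
for _ in range(1000):
    X = mc_sweep(X)
```

After the sweeps the particles have collected into a layer within a few sedimentation lengths of the wall, mimicking the settling stage described above.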
The next independent equilibrium configuration $\boldsymbol{X}_{n+1}$ is obtained from the previous configuration $\boldsymbol{X}_{n}$ by performing $10^{4}N$ MC steps. The particle height distribution $\particleDistribution(z)$ and other equilibrium quantities are obtained by averaging over 200 independent configurations $\boldsymbol{X}_{i}$. \subsection{Hydrodynamics and self-diffusion} \label{Hydrodynamics and self-diffusion} \subsubsection{Low density limit} In the absence of a wall, the self-diffusion coefficient of an isolated solid sphere with diameter $d_0$ is given by the Stokes--Einstein expression \begin{equation} D_0=\frac{k_{\rm B}T}{3\pi \eta d_0}. \end{equation} The self-diffusion coefficient $D(z)$ of a sphere with diameter $d$ at a distance $z$ from the wall is smaller by a factor \begin{equation} \label{self diffusivity for single sphere in wall presence} \frac{D(z)}{D_0}=\frac{d_0}{d}\,\mu_{\parallel}(z/d), \end{equation} where the normalized mobility coefficient $\mu_{\parallel}$ depends on the dimensionless particle position $z/d$ and no other parameters. Relation \eqref{self diffusivity for single sphere in wall presence} refers to the lateral component of the self-diffusion coefficient (parallel to the wall), which was measured in our experiments. However, an analogous expression also holds for the normal component. For monodisperse particles in the dilute-suspension limit, the effective self-diffusion coefficient $\Dself$ averaged across the suspension layer is obtained by integrating \eqref{self diffusivity for single sphere in wall presence} with the Boltzmann distribution \eqref{low-density probability distribution}, \begin{equation} \label{monodisperse effective self-diffusivity} \frac{\Dself}{D_0}=\int_{d/2}^\infty\textrm{d}z\, \particleDistribution(z)\mu_{\parallel}(z/d). 
\end{equation} For a polydisperse suspension, an additional average over the particle-size distribution \eqref{Particle size distribution} is needed, \begin{equation} \label{polydisperse effective self-diffusivity} \frac{\Dself}{D_0}= \int_0^\infty\textrm{d}d\,p(d) \int_{d/2}^\infty\textrm{d}z\,\particleDistribution_1(z;d)\mu_{\parallel}(z/d). \end{equation} The mobility coefficient $\mu_{\parallel}(z/d)$ was evaluated with high accuracy using the \textsc{Hydromultipole} algorithm for a particle near a single wall \cite{Cichocki-Jones:1998}. The integrals in Eqs.\ \eqref{monodisperse effective self-diffusivity} and \eqref{polydisperse effective self-diffusivity} were performed numerically using Gaussian quadrature, with $\mu_{\parallel}(z/d)$ calculated by a series expansion. \subsubsection{Computations for larger densities} The effective self-diffusion coefficient for suspensions at higher concentrations was evaluated using a periodic version~\cite{Blawzdziewicz-Wajnryb:2008} of the Cartesian-representation algorithm for a system of particles in a parallel-wall channel \cite{Bhattacharya-Blawzdziewicz-Wajnryb:2005a,Bhattacharya-Blawzdziewicz-Wajnryb:2005}. In our approach, periodic boundary conditions in the lateral directions are incorporated by splitting the flow reflected by the particles into a short-range near-field contribution and a long-range asymptotic Hele--Shaw component. The near-field contribution is summed explicitly over neighboring periodic cells, and the Hele--Shaw component is evaluated using the Ewald summation method for a 2D harmonic potential \cite{Cichocki-Felderhof:1989,Blawzdziewicz-Wajnryb:2008}. The one-wall results were derived from the two-wall calculations using an asymptotic procedure based on the observation that in the particle-free part of the channel the velocity field tends to a combination of a plug flow and a shear flow. All other flow components decay exponentially with the distance from the particle layer. 
The one-wall results are obtained by eliminating the shear flow and retaining only the plug flow generated by hydrodynamic forces induced on the particles \cite{Sadlej-Wajnryb-Blawzdziewicz-Ekiel_Jezewska-Adamczyk:2009}. The calculations were performed for the distance to the upper virtual wall $H=10d_0$, which is sufficient to obtain highly accurate one-wall results. The self-diffusion coefficient is determined by averaging the trace of the lateral translational--translational $N$-particle mobility, evaluated using \textsc{Hydromultipole} codes based on the above algorithm, with the multipole truncation order $L=2$ \cite{Abade:2010}. The averaging was performed over equilibrium configurations of $N=400$ particles in a 2D-periodically replicated simulation cell. Independent equilibrium configurations were constructed using the MC technique described in Sec.\ \ref{Monte-Carlo simulations}. \section{Structure of the quasi-2D suspensions} \label{sec:structure} \subsection{Experimental results} \label{Structure - experimental results} A typical image of our quasi-2D colloidal suspension is shown in Fig.~\ref{fig:setup}. For each area fraction $\phi$ and salt concentration, the suspension can be characterized by the structure in the $x$--$y$ plane (parallel to the cell floor) and the density profile in the $z$ direction (perpendicular to the floor). In this section we discuss results of our measurements of the microstructure of a sedimented particle layer. \subsubsection{Mean particle height at low area fractions} \label{Mean particle height at low area fractions} As mentioned in Sec.\ \ref{Height calibration}, our imaging techniques do not yield absolute particle heights. To estimate the mean particle distance $z$ from the bottom wall (the mean height) in a dilute suspension layer, we observe particle dynamics in the horizontal directions, and compare measurement results with theoretical calculations of the effect of the wall on the lateral particle diffusion. 
Using fluorescence imaging, we determine the projection of particle trajectories onto the $x$--$y$ plane, $\mathbf{r}_\parallel(t)$, and extract the effective self-diffusion coefficient, \begin{equation} \label{self diffusivity from mean-square displacement} \Dself=\langle \Delta \textbf{r}_\parallel^2 (\tau)\rangle/(4 \tau), \end{equation} where $\tau$ is the time interval. The position-dependent diffusivity $D(z)$ in the $x$--$y$ plane of a single particle near a planar wall is given by the following expansion in the particle--wall distance \cite{Happel,Perkins1992}, \begin{eqnarray} \frac{D(z)}{D_0} = 1 &-& \frac{9}{32}\frac{d}{z} +\frac{1}{64}\left( \frac{d}{z}\right)^3 \label{Eq:ds_zero} \\ \nonumber &-& \frac{45}{4096}\left( \frac{d}{z}\right)^4 -\frac{1}{512}\left( \frac{d}{z}\right)^5, \end{eqnarray} where $z=0$ is the wall position. The expansion \eqref{Eq:ds_zero} is accurate to within 5\% to 1\% as $z$ increases from $0.51d$ to $d/2+2l$, the range in which sedimented particles in a low-density suspension under equilibrium conditions spend most of their time. Here $l\approx0.16d$ is the sedimentation length \eqref{ldef}. From expression \eqref{Eq:ds_zero} and $\Dself$ extracted according to \eqref{self diffusivity from mean-square displacement}, we can calculate the suspension's mean distance from the wall (where $z$ in \eqref{Eq:ds_zero} is replaced by a mean value $\langle z \rangle$). This calculation holds in the limit $\phi \to 0$, where there are no particle--particle interactions. We measure $\Dself$ from the particle trajectories, $\mathbf{r}_\parallel(t)$, in a suspension of extremely low area fraction, $\phi<0.003$ (in salt-free water), and obtain a mean distance from the wall $\langle z \rangle= 1.1 \pm 0.1\,\mu$m, corresponding to a mean gap $\epsilon=z-d/2$ of 0.3--0.4\,$\mu$m between the particle surface and the wall. 
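A sketch of this estimate: invert the far-field expansion \eqref{Eq:ds_zero} to find the height at which $D(z)/D_0$ matches a measured ratio (the ratio used below is an illustrative number chosen to be consistent with the quoted $\langle z \rangle$, not the measured data):

```python
import numpy as np

def D_ratio(z, d):
    """Far-field lateral diffusivity D(z)/D0 near a wall, Eq. (Eq:ds_zero);
    no lubrication correction, so only valid away from contact."""
    x = d / z
    return 1 - 9/32*x + x**3/64 - 45/4096*x**4 - x**5/512

def height_from_ratio(ratio, d, lo=0.51, hi=20.0):
    """Bisection for z such that D(z)/D0 equals the measured ratio;
    D_ratio is monotonically increasing in z, so the root is unique."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if D_ratio(mid * d, d) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * d

d = 1.5e-6
z_mean = height_from_ratio(0.61, d)   # 0.61: illustrative measured D_self/D0
# z_mean comes out close to 1.1e-6 m, i.e. a gap of roughly 0.35 um
```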
We also extract $\langle z\rangle$ for different salt concentrations by extrapolating $\Dself$ (measured at various area fractions) to $\phi=0$ (see Sec.~\ref{Dynamics - experimental results}), obtaining $\langle z\rangle =0.95 \pm 0.05\,\mu$m for $\KCl=0.01\Mol$ and $\langle z\rangle =1.11 \pm 0.05\,\mu$m for $\KCl=0\Mol$. The latter matches the average height extracted from the diffusion of tracers in the extremely low density suspension. For $\lambda=5$ nm (added salt), the mean height calculated from the Boltzmann distribution \eqref{low-density probability distribution} is dominated by the exponential decay due to gravity and is practically independent of $B$ in the particle--wall potential \eqref{Eq:boltzmann}. For $B=0$, using the particle mass as determined from Eq.\ \eqref{m} with no fitting parameters, we get $\langle z\rangle = 0.99~\mu$m, in agreement with the diffusivity-based measurements of $\langle z\rangle$. This result confirms that in the added-salt case we can neglect the electrostatic repulsion from the wall. For the salt-free case, taking $\lambda=50$ nm, we obtain $\langle z\rangle = 1.11~\mu$m for $B$ in the range 5--15 $k_{\rm B}T$. These values are consistent with those obtained by fitting the measured height distribution to the theoretical expression \eqref{Eq:boltzmann} (see Sec.~\ref{Height calibration}). Since Eq.\ \eqref{Eq:ds_zero} does not include the lubrication correction for small particle--wall gaps $\epsilon$, it overpredicts $D(z)$ for $z<0.51d$; however, the accuracy of the approximation is sufficient for the purpose of the present estimates. In our calculations discussed in Secs.\ \ref{sec:num_met} and \ref{Dynamics - numerical simulations}, highly accurate \textsc{Hydromultipole} results were used instead of the far-field approximation \eqref{Eq:ds_zero}. 
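The in-plane pair structure analyzed in the next subsection is quantified by the radial distribution function $g(r)$; a standard minimum-image histogram estimator reads (a sketch; bin width and cutoff are illustrative, lengths in particle diameters):

```python
import numpy as np

def radial_distribution(xy, L, dr=0.05, r_max=5.0):
    """In-plane g(r) from N x 2 particle coordinates in a periodic
    square box of side L."""
    N = len(xy)
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(N - 1):
        dxy = xy[i + 1:] - xy[i]
        dxy -= L * np.round(dxy / L)        # minimum-image convention
        r = np.hypot(dxy[:, 0], dxy[:, 1])
        counts += np.histogram(r, bins=edges)[0]
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    shell = 2.0 * np.pi * r_mid * dr        # ideal-gas shell area
    n = N / L**2                            # areal number density
    return r_mid, 2.0 * counts / (N * n * shell)  # factor 2: unordered pairs

# Sanity check on an ideal gas: g(r) -> 1 at all distances
rng = np.random.default_rng(2)
r_mid, g = radial_distribution(rng.random((2000, 2)) * 50.0, 50.0)
```

For an uncorrelated (ideal-gas) configuration the estimator returns $g(r)\approx 1$, which serves as a sanity check before it is applied to measured coordinates.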
\begin{figure} \centering \includegraphics[clip=true,scale=0.5]{figure4ja} \caption{Radial distribution function $g(r)$ in the $x$--$y$ plane for experiments in (a) salt-free and (b) salt-added ($\KCl=0.01\Mol$) water. The distribution $g(r)$ was calculated separately in the first and second layers (see Sec.~\ref{Occupation of particle layers} for the layer definition) and combined with appropriate weights. } \label{fig:gofr} \end{figure} \subsubsection{Radial distribution in the horizontal plane} To verify that no crystalline or hexatic structures are formed at higher values of the area fraction, we evaluate from the experiment the radial distribution function $g(r)$ and the full 2D pair distribution $g(r,\theta)$ in the $x$--$y$ plane, for both the salt-free and salt-added suspensions. No dependence on $\theta$ was found. The radial distribution $g(r)$ for several values of the area fraction $\phi$ is shown in Fig.\ \ref{fig:gofr}\subfig{a} for the salt-free system and in Fig.\ \ref{fig:gofr}\subfig{b} for the salt-added system. For monodisperse hard spheres the first peak of $g(r)$ should correspond to the diameter of the sphere. Our measurements show that the first peak is at $r = 1.68~\mu$m for suspensions without salt and at $r = 1.60~\mu$m for suspensions with $\KCl=0.01\Mol$. The difference between these two numbers implies that the effective shell around the particles in the salt-free samples is about 40--50~nm thick, which provides an estimate of the screening length in DDW without added salt. This estimate of $\lambda$ is consistent with the other two mentioned above. \subsubsection{Vertical density profile} The height distributions $\particleDistribution(z)$ of the silica particles at different area fractions of the sedimented particle layer were acquired using confocal imaging and conventional image analysis \cite{Crocker1996}. 
These distributions for salt-added suspensions with $\KCl=0.01\Mol$ are plotted in Fig.~\ref{fig:zhist}\subfig{a} for several values of the area fraction $\phi$. Since we cannot precisely measure the position of the wall, the distributions are shifted so that their first peak (close to the wall) is located at $z=0$. These distributions indicate the formation of a second layer of particles for area fractions $\phi \gtrsim 0.26$. The observed center of the second layer is located $\Delta z\approx 0.75\,\mu$m above the center of the first layer. The layer separation is thus significantly smaller than the expected separation $\Delta z\approx d=1.5\,\mu$m (which is similar to the peak separation for the radial distribution). See further discussion in Sec.\ \ref{sec:discuss} and Appendix \ref{appendix}. \begin{figure}[t] \centering \includegraphics[clip=true,scale=0.5]{figure5sja} \caption{(a) Height probability distribution of the silica colloids (in $\KCl=0.01\Mol$) for increasing area fraction reveals the formation of a second layer. Colors correspond to different area fractions (as labeled). (b) For the most dense suspensions [$\phi=0.54$ and $\phi=0.48$ (inset)], the height distribution (black solid line) around the two peaks can be fitted to two Gaussian functions (blue lines). The intersection of the two Gaussians defines an effective boundary (red broken line) between the first and second layers; occupation percentages are indicated. } \label{fig:zhist} \end{figure} To highlight the onset of the formation of the second layer, we subtract the height probability distribution at the lowest area fraction from the distributions at all area fractions, $\Delta \rho \equiv \rho-\rho_{\,\phi=0.054}$ [Fig.~\ref{fig:onset}\subfig{a}]. 
Two phenomena are expected when a second layer is formed: (i) negative values of $\Delta\rho$ at $z=0~\mu$m, corresponding to a reduction in the fraction of particles populating the first layer, and (ii) positive and increasing values at $z=0.75~\mu$m, corresponding to the formation and increasing population of the second layer. The values of $\Delta\rho$ at $z=0$ and $0.75~\mu$m are plotted in Fig.~\ref{fig:onset}\subfig{b}. The two expected phenomena are observed at $\phi \sim 0.3$, indicating the area fraction above which a second layer becomes occupied. At area fractions smaller than 0.3 we still obtain negative values of $\Delta \rho$ at $z=0~\mu$m and positive values at $z=0.75~\mu$m; however, these values are relatively small and may merely reflect the broadening of the exponential distribution with increasing $\phi$. \begin{figure} \centering \includegraphics[clip=true,scale=0.5]{figure5s_extra4} \caption{(a) The difference between the height probability distribution at increasing area fractions and the distribution at the lowest area fraction $\phi =0.054$, $\Delta \rho \equiv \rho-\rho_{\, \phi=0.054}$ (in $\KCl=0.01\Mol$). Colors are as in Fig.~\ref{fig:zhist}\subfig{a}. Gray (black) dashed line corresponds to $z=0.75~\mu$m ($z=0~\mu$m). (b) Values of $\Delta \rho$ at $z=0~\mu$m (red squares) and $z=0.75~\mu$m (blue circles) for all area fractions. Both plots exhibit a change in trend at area fraction $\phi \sim 0.3$.} \label{fig:onset} \end{figure} \subsubsection{Particle-layer occupation fractions} \label{Occupation of particle layers} For the area fractions at which a clear second peak in the particle distribution $\rho$ is seen in Fig.\ \ref{fig:zhist}\subfig{a} (i.e., for $\phi=0.48$ and $0.54$), we fit the area around each peak to a Gaussian function and define the point of intersection between the two Gaussians as the effective boundary between the two layers.
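The intersection of two Gaussians can be found in closed form: equating $A_1\exp[-(z-z_1)^2/(2s_1^2)]$ and $A_2\exp[-(z-z_2)^2/(2s_2^2)]$ and taking logarithms yields a quadratic equation in $z$. A minimal sketch of this step (the parameter values below are illustrative, not our fitted values):

```python
import math

def gaussian_boundary(A1, z1, s1, A2, z2, s2):
    # Intersection of two Gaussians A_i * exp(-(z - z_i)^2 / (2 s_i^2)),
    # used as the effective boundary between two particle layers.
    # Taking logs gives the quadratic a z^2 + b z + c = 0 with:
    a = 0.5 / s1**2 - 0.5 / s2**2
    b = z2 / s2**2 - z1 / s1**2
    c = 0.5 * z1**2 / s1**2 - 0.5 * z2**2 / s2**2 - math.log(A1 / A2)
    if abs(a) < 1e-14:          # equal widths: the equation is linear
        return -c / b
    disc = math.sqrt(b**2 - 4.0 * a * c)
    roots = [(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)]
    # keep the crossing that lies between the two peak centers
    return next(z for z in roots if min(z1, z2) <= z <= max(z1, z2))
```

For equal amplitudes and widths the boundary is simply the midpoint $(z_1+z_2)/2$; unequal amplitudes shift it toward the weaker peak.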
Figure~\ref{fig:zhist}\subfig{b} shows the two distributions with the Gaussian fits and our definition of that boundary, which turns out to be at a distance of $0.39 \pm 0.04~\mu$m above the peak of the first layer for both area fractions. Using this boundary, we evaluated the occupation fractions $\occupationFraction{i}=\phi_i/\phi$ of the bottom ($i=1$) and top layer ($i=2$), where $\phi_i$ is the area fraction of particles in layer~$i$. The results are shown in Fig.~\ref{fig:occ}\subfig{a} for a suspension in $\KCl=0.01\Mol$ solution as a function of the total area fraction $\phi$. As expected, the fraction of particles populating the second layer grows as the total area fraction of the suspension is increased. An additional independent measurement of the layer occupation fractions is performed using epifluorescence microscopy, which enables us to image the different layers separately [see Fig.~\ref{fig:setup}\subfig{a}]. The occupation fraction of each layer is determined by counting the number of particles observed therein. The occupation fractions measured using the epifluorescence imaging technique are plotted in Fig.\ \ref{fig:occ}\subfig{a} along with the results obtained from the confocal microscopy. The two methods yield similar results. \begin{figure}[t] \centering \includegraphics[clip=true,scale=0.5]{figure7sja} \caption{Occupation fractions $f_i$ and area fractions $\phi_i$ of suspension layers as a function of the total area fraction $\phi$; circles (squares) correspond to the first (second) layer. (a) Occupation fractions for experiments in $\KCl=0.01\Mol$ extracted from the confocal height distribution curves (yellow) and from the 2D images (green), showing good agreement between the two methods. (b) The same data replotted for the area fractions $\phi_1$ and $\phi_2$ of the first and second layers.
} \label{fig:occ} \end{figure} Alternatively, we can represent the layer-occupation results in terms of the area fractions $\phi_1$ and $\phi_2$ of the first and second layers [see Fig.~\ref{fig:occ}\subfig{b}]. Both $\phi_1$ and $\phi_2$ increase as $\phi$ is increased, and $\phi_1$ seems to saturate at $\phi>0.45$. \subsection{Numerical simulations} \label{Structure - numerical simulations} \begin{figure} \includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_B=0_00.eps} \includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_B=0_15.eps} \caption{Particle--wall distribution function for \subfig{a} monodisperse suspension; \subfig{b} polydisperse suspension with standard deviation of particle diameter $\sigma/d_0=0.15$. Simulation results for area fraction $\phi=0$ (solid line), 0.3 (dashed), 0.6 (dotted), 0.9 (dot--dashed), 1.2 (long-dashed). The insets show the deviation $\Delta\rho=\rho-\rho_{\phi=0}$ from the low-density distribution. \label{particle-wall distributions simulations}} \end{figure} \begin{figure} \includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_phi=0_054.eps} \includegraphics[width=\ffraction\textwidth]{equilibrium_distributions_phi=0_540.eps} \caption{Particle--wall distribution function for area fractions \subfig{a} $\phi=0.054$ and \subfig{b} $0.54$. Experimental results (solid circles); simulation results for standard deviation of particle diameter $\sigma/d_0=0$ (solid line), $0.1$ (dashed), $0.15$ (dotted), $0.20$ (dashed--dotted), and $0.25$ (long-dashed). 
\label{particle-wall distributions: comparison with experiments}} \end{figure} \begin{figure} \includegraphics[width=\ffraction\textwidth]{layer_occupancy.eps} \includegraphics[width=\ffraction\textwidth]{layer_diam.eps} \caption{\subfig{a} Occupation fraction $\occupationFraction{i}$ and \subfig{b} normalized average particle diameter $d_i$ in the first particle layer (black), second layer (blue), and third layer (green), vs the total area fraction $\phi$. Results are shown for a monodisperse system (solid lines) and polydisperse systems with standard deviation of particle diameter $\sigma/d_0=0.1$ (dashed), $0.15$ (dotted), $0.20$ (dash--dotted), and $0.25$ (long-dashed). The symbols represent experimental results from confocal imaging (circles) and 2D images (squares) for a suspension with salt concentration $\KCl=0.01\Mol$. Note that the experimental second layer corresponds to the sum of the second and third layers in the MC simulations. The layer boundaries in the numerical calculations are set at $z_1=0.9d_0$, and $z_2=1.8d_0$ and in the experiments are obtained from Gaussian fitting (see Fig.\ \ref{fig:zhist}). \label{occupation numbers}} \end{figure} \begin{figure} \includegraphics[width=\ffraction\textwidth]{rhoLowDensLog.eps} \caption{Low-density limit of the near-wall particle distribution for a monodisperse system (solid line) and polydisperse systems with standard deviation of particle diameter $\sigma/d_0=0.1$ (dashed), $0.15$ (dotted), $0.20$ (dash--dotted), and $0.25$ (long-dashed). \label{low-density distribution}} \end{figure} Here we present results of MC simulations of the equilibrium microstructure of a HS suspension in the near-wall region. The HS potential corresponds to the system with $\KCl=0.01\Mol$, for which the electrostatic repulsion is negligible. Since the suspension used in the experiments is polydisperse, we consider both monodisperse and polydisperse systems.
\subsubsection{Near-wall particle distribution} Figure \ref{particle-wall distributions simulations} shows MC results for the suspension density profile $\particleDistribution(z)$ for a monodisperse suspension and polydisperse suspensions at several area fractions. Similarly to the experimental results, the simulations show that there is a single layer of sedimented particles at low area fractions $\phi$, and a two-layer microstructure at higher area fractions. (Development of a third layer for $\phi\gtrsim0.9$ is also noticeable in the region $z/d_0 \gtrsim 2$.) Suspension polydispersity results in broadening of the peaks of the particle distribution. A direct comparison between the experimental and simulation results is presented in Fig.\ \ref{particle-wall distributions: comparison with experiments} for two values of the area fraction $\phi$. At low area fractions [Fig.\ \ref{particle-wall distributions: comparison with experiments}\subfig{a}] the agreement between the experiments and simulations is good. (The standard deviation of the particle-size distribution for which the simulations match the experimental data, $\sigma/d_0\approx 0.25$, is larger than the estimated standard deviation $0.1<\sigma/d_0<0.15$ based on the manufacturer's specifications; the additional spread of the experimentally observed peak can be attributed to random errors of the particle height evaluation from the confocal-microscopy images.) A comparison of the numerical and experimental results at a higher area fraction, as shown in Fig.\ \ref{particle-wall distributions: comparison with experiments}\subfig{b} [also see Figs.\ \ref{fig:zhist} and \ref{particle-wall distributions simulations}], reveals that (\textit{i}) the experimentally observed second maximum of the density distribution develops at lower area fractions than the corresponding maximum in the numerical simulations; (\textit{ii}) the experimental second peak is narrower, and its position is shifted towards the wall. 
In contrast, the plots of the excess distribution $\Delta\rho$ with respect to the low-density limit, shown in Fig.\ \ref{fig:onset}\subfig{a} and the insets of Fig.\ \ref{particle-wall distributions simulations}, indicate that the onset of the formation of the second layer occurs at approximately the same area fraction according to the simulations and experiments. Moreover, the measured and calculated occupation fractions of the layers are similar for all area fractions, as depicted in Fig.\ \ref{occupation numbers}\subfig{a}. A possible source of the observed discrepancies between the experimental and numerical results for the particle distribution $\rho(z)$ is described in Appendix \ref{appendix}. It also provides a plausible explanation of the fact that the agreement between the experiments and MC simulations for the layer occupation fractions $f_i$ is quite good in spite of the discrepancies for $\particleDistribution(z)$. \subsubsection{Polydispersity effects} The results in Fig.\ \ref{occupation numbers}\subfig{a} show that the occupation fractions of the first two layers are relatively insensitive to the suspension polydispersity; in contrast, the occupation fraction of the third layer strongly increases with the standard deviation of particle diameter. This increase stems from the presence of smaller (lighter) particles in polydisperse systems: smaller particles tend to migrate into the top layer, as evident from Fig.\ \ref{occupation numbers}\subfig{b}. For dilute suspensions, the particle-size segregation results in variation of the slope of $\log \particleDistribution(z)$ with the distance from the wall, as illustrated in Fig.\ \ref{low-density distribution}. We estimate that this variation causes an approximately 20\,\% uncertainty in the calibration of the confocal height measurements described in Sec.\ \ref{Height calibration}.
\subsection{A quasi-2D model of the equilibrium layered microstructure} \label{A quasi-2D model of equilibrium layered microstructure} Here we present a semi-quantitative theoretical model for evaluating the occupation fractions $\occupationFraction{i}$ of the particle layers in a sedimented colloidal suspension. Our theory is based on the assumption that the suspension microstructure can be approximated as a collection of weakly coupled quasi-2D layers in thermodynamic equilibrium with respect to particle exchange. The equilibrium condition for layers $i$ and $i+1$ is \begin{equation} \label{equilibrium between layers} \mu_i+mgz_i=\mu_{i+1}+mgz_{i+1}, \end{equation} where $\mu_i$ is the chemical potential of layer $i$, and $z_i$ is its position. In our model, $\mu_i$ is approximated as the chemical potential of a 2D hard-disk fluid of area fraction $\phi_i$. All disk diameters are equal to the sphere diameter $d$, which corresponds to a layer of spheres with the same vertical position $z$. In the low area-fraction limit, the chemical potential of a hard-disk fluid is \begin{equation} \label{chemical potential at low densities} \mu_i=k_{\rm B}T\ln\phi_i+C(T), \end{equation} where $C(T)$ depends only on the temperature $T$. According to the equilibrium condition \eqref{equilibrium between layers} and equation of state \eqref{chemical potential at low densities}, we thus have \begin{equation} \label{area fraction in the next layer} \phi_{i+1}=r\phi_i,\qquad i=1,2,\ldots \end{equation} with the ratio $r$ given by the Boltzmann factor \begin{equation} \label{occupancy ratio} r=\textrm{e}^{-\Deltaz/l}, \end{equation} where $l$ is defined by Eq.\ \eqref{ldef} and $\Deltaz=z_{i+1}-z_i$. We assume that the layer separation $\Deltaz$ is independent of $i$. 
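Explicitly, substituting Eq.~\eqref{chemical potential at low densities} into the equilibrium condition \eqref{equilibrium between layers} gives
\begin{equation}
k_{\rm B}T\ln\frac{\phi_{i+1}}{\phi_i}=-mg\,\Deltaz,
\end{equation}
i.e., $\phi_{i+1}/\phi_i=\textrm{e}^{-mg\Deltaz/k_{\rm B}T}=\textrm{e}^{-\Deltaz/l}$, consistent with the sedimentation length $l=k_{\rm B}T/(mg)$ of Eq.\ \eqref{ldef}, $m$ being the buoyant particle mass.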
For finite area fractions, relation \eqref{area fraction in the next layer} is replaced with \begin{equation} \label{iteration for phi_i} \phi_{i+1}=r(\phi_i)\phi_i,\qquad i=1,2,\ldots, \end{equation} where the layer occupation ratio $r$ depends on the area fraction in the adjacent layers. The factor $r(\phi)$ is determined from the equilibrium condition \eqref{equilibrium between layers} with the help of the Gibbs--Duhem relation \begin{equation} \label{Gibbs-Duhem} \textrm{d}\mu=\frac{\pi d^2}{4}\phi^{-1}\textrm{d} p, \end{equation} where $p$ is the lateral 2D pressure within the layer. Combining \eqref{equilibrium between layers}, \eqref{iteration for phi_i}, and \eqref{Gibbs-Duhem} yields \begin{equation} \label{differential equation for r} \phi\frac{\textrm{d}r}{\textrm{d}\phi}= r\left[\frac{p'(\phi)}{p'(r\phi)}-1\right], \end{equation} where \begin{equation} \label{p'} p'=\left(\frac{\partial p}{\partial\phi}\right)_T. \end{equation} The differential equation \eqref{differential equation for r} is solved for $r=r(\phi)$ with the boundary condition \eqref{occupancy ratio} at $\phi=0$. Occupation fractions $\occupationFraction{i}=\phi_i/\phi$ are then determined by iteration, applying Eq.\ \eqref{iteration for phi_i} and the relation \begin{equation} \label{total area fraction on terms of partial fractions} \phi=\sum_{i=1}^\infty\phi_i. \end{equation} \begin{figure} \includegraphics[width=\ffraction\textwidth]{layer_occupancy_theory.eps} \caption{Occupation fraction $f_i$ for the first (black), second (blue) and third (green) layer vs the total area fraction $\phi$ for a monodisperse HS suspension. Theoretical results (solid lines); MC simulations (symbols).
\label{quasi 2D model}} \end{figure} We have solved Eq.\ \eqref{differential equation for r} and determined the occupation fractions $\occupationFraction{i}$ using the scaled-particle-theory equation of state for hard disks \cite{Helfand-Frisch-Lebowitz:1961}, \begin{equation} \label{SPT pressure} \frac{\pi d^2 p}{4k_{\rm B}T}=\frac{\phi}{(1-\phi)^2}. \end{equation} The results of our calculations are presented in Fig.\ \ref{quasi 2D model} for a HS system with the same value of the sedimentation length \eqref{value of dimensionless sedimentation constant} as in our MC simulations. Based on the separation between the first and second peak of the suspension density profile shown in Fig.\ \ref{particle-wall distributions simulations}\subfig{a}, the calculations were performed for $\Deltaz/d=1$. The theoretical results in Fig.\ \ref{quasi 2D model} are compared with the MC simulations of a monodisperse HS suspension with the boundaries between the layers set to $z_1=d$ and $z_2=2d$, consistent with the peak positions. The agreement between our simple theory and simulations is quite good. A similar agreement was obtained for other values of the dimensionless parameter $l/d$ (results not shown). The layer boundaries used in Secs.\ \ref{Structure - numerical simulations} and \ref{Dynamics - numerical simulations} to compare the MC results with experiments differ from the boundaries used in the above model by approximately 10\,\%. Due to the observed deviation between the measured and simulated particle distributions (see Fig.\ \ref{particle-wall distributions: comparison with experiments}), it is not possible to define the layer boundaries in a unique, equivalent way for the experimental and simulated systems.
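The model can also be iterated without forming the differential equation explicitly, by enforcing the equilibrium condition \eqref{equilibrium between layers} directly between successive layers. The sketch below assumes the scaled-particle-theory chemical potential obtained by integrating the Gibbs--Duhem relation \eqref{Gibbs-Duhem} with the equation of state \eqref{SPT pressure}, $\mu/k_{\rm B}T=\ln[\phi/(1-\phi)]+\phi(3-2\phi)/(1-\phi)^2+{\rm const}$; the value $\Deltaz/l=2$ used in the example is illustrative, not the value used in our calculations:

```python
import math

def mu_spt(phi):
    # SPT chemical potential of a 2D hard-disk fluid in units of kT
    # (up to an additive constant), from integrating the Gibbs-Duhem
    # relation with the scaled-particle-theory pressure.
    return math.log(phi / (1.0 - phi)) + phi * (3.0 - 2.0 * phi) / (1.0 - phi) ** 2

def next_phi(phi, dz_over_l):
    # Area fraction of layer i+1 from mu(phi_{i+1}) = mu(phi_i) - dz/l;
    # mu is monotonically increasing, so simple bisection suffices.
    target = mu_spt(phi) - dz_over_l
    lo, hi = 1e-15, phi  # the upper layer is always more dilute
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mu_spt(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def occupation_fractions(phi_total, dz_over_l, nlayers=6):
    # Find phi_1 such that sum_i phi_i = phi_total, then f_i = phi_i / phi.
    lo, hi = 1e-15, phi_total
    for _ in range(200):
        p1 = 0.5 * (lo + hi)
        phis = [p1]
        for _ in range(nlayers - 1):
            phis.append(next_phi(phis[-1], dz_over_l))
        if sum(phis) < phi_total:
            lo = p1
        else:
            hi = p1
    return [p / phi_total for p in phis]
```

In the dilute limit the ratio $\phi_{i+1}/\phi_i$ returned by this procedure reduces to the Boltzmann factor $\textrm{e}^{-\Deltaz/l}$ of Eq.\ \eqref{occupancy ratio}, as it should.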
Therefore, the layer boundaries $z_1=0.9d$ and $z_2=1.8d$ used in Secs.\ \ref{Structure - numerical simulations} and \ref{Dynamics - numerical simulations} were chosen based on the comparison between the experimental and numerical results for the occupation fractions and self-diffusivities in particle layers. \section{Particle dynamics} \label{sec:dynamics} \begin{figure} \centering \includegraphics[clip=true,scale=0.5]{figure8ja} \caption{Short-time self-diffusion coefficient $\Dself$, normalized by the Stokes-Einstein diffusion coefficient $D_0$, as a function of the total area fraction $\phi$; circles (squares) correspond to the first (second) layer, and triangles to the effective $\Dself$ calculated by the weighted average of the self-diffusion coefficients for the two individual layers. (a) Suspension with no added salt. (b) Suspension with salt concentration $\KCl=0.01\Mol$. } \label{fig:def_gaus} \end{figure} \subsection{Experimental results} \label{Dynamics - experimental results} The short-time self-diffusion coefficient in the $x$--$y$ plane, $\Dself$, is determined for different total area fractions of the sedimented particles by extracting the mean square displacement \eqref{self diffusivity from mean-square displacement} from 2D epifluorescence images of the first and second particle layer. The mean-square displacement is measured over a time interval $\tau$ that is small compared to the structural relaxation time of the suspension, to ensure that the measurements yield the short-time self-diffusion coefficient. The results are shown in Fig.~\ref{fig:def_gaus} for suspensions with salt concentration $\KCl=0.01\Mol$ and salt-free suspensions with $\KCl=0\Mol$. The self-diffusion coefficient is expected to decrease as the particle concentration increases; indeed, we observe this decrease for both salt concentrations and in both layers, for $\phi<0.4$.
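In two dimensions the estimator is $\langle|\mathbf{r}(t+\tau)-\mathbf{r}(t)|^2\rangle=4\Dself\tau$. The following sketch illustrates it on a synthetic Brownian trajectory (the trajectory, time step, and input diffusivity are illustrative, not our data):

```python
import random

def short_time_diffusivity(traj, dt, lag=1):
    # Estimate D_s from the 2D mean-square displacement at a single
    # short lag time tau = lag * dt, using <|dr|^2> = 4 D_s tau.
    disp2 = [(traj[i + lag][0] - traj[i][0]) ** 2 +
             (traj[i + lag][1] - traj[i][1]) ** 2
             for i in range(len(traj) - lag)]
    return sum(disp2) / len(disp2) / (4.0 * lag * dt)

# Illustration on a synthetic free Brownian walk with D = 1:
random.seed(1)
dt, D = 0.01, 1.0
step = (2.0 * D * dt) ** 0.5   # per-coordinate step standard deviation
x = y = 0.0
traj = [(0.0, 0.0)]
for _ in range(100_000):
    x += random.gauss(0.0, step)
    y += random.gauss(0.0, step)
    traj.append((x, y))
```

For the free walk the estimator recovers the input $D$ to within statistical error; for real data the lag must stay below the structural relaxation time, as stated above.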
In the case of $\KCl=0.01\Mol$, corresponding to $\lambda = 5$ nm, the particles can get much closer to the cell floor, which in turn results in lower values of the self-diffusion coefficient compared to suspensions with $\KCl=0\Mol$. Using a linear fit to the values of $\Dself/D_0$ for the low area fractions, where there is no observable second layer, we can extrapolate to $\phi=0$ and extract the self-diffusivity of a single particle. The extrapolated results agree well with the measurements at very low concentrations $\phi<0.003$, as discussed in Sec.\ \ref{Mean particle height at low area fractions}. From the known occupation fractions $\occupationFraction{1}$ and $\occupationFraction{2}$ for each $\phi$ we can weight the contribution of each layer to the total self-diffusivity, and construct an effective $\Dself$ of the whole suspension (Fig.~\ref{fig:def_gaus}). As expected, for $\phi<0.4$ the effective self-diffusion coefficient $\Dself$ decreases as $\phi$ is increased for both salt concentrations. For larger $\phi$ we observe a flattening of $\Dself$, which clearly indicates that the second layer becomes dominant at those area fractions. This observation is supported also by the saturation of $\phi_1$ at $\phi>0.45$ [Fig.~\ref{fig:occ}\subfig{b}]. \subsection{Numerical simulations} \label{Dynamics - numerical simulations} \begin{figure}[b] \includegraphics[width=\ffraction\textwidth]{mobilities_with_inset.eps} \caption{Normalized short-time self-diffusion coefficient $\Dself/D_0$, as a function of the area fraction $\phi$ for a suspension with salt concentration $\KCl=0.01\Mol$. The main panel shows $\Dself$ averaged over the whole system, and the inset shows $\Dself$ for the first (bottom, red) and second (top, blue) particle layer.
Experimental results (solid circles); simulation results (open symbols) for a monodisperse system (circles) and polydisperse systems with the standard deviation of the particle diameter $\sigma=0.1d_0$ (triangles) and $0.15d_0$ (squares). Note that at low area fractions the triangles overlap with the solid circles. The lines are a guide for the eye. }\label{self-diffusion with salt} \end{figure} \begin{figure} \includegraphics[width=\ffraction\textwidth]{no_salt_mobilities_with_inset.eps} \caption{Normalized short-time self-diffusion coefficient $\Dself/D_0$, as a function of the area fraction $\phi$ for a suspension with no salt. Symbols are the same as in Fig.\ \ref{self-diffusion with salt}. Results are shown only for monodisperse suspensions. \label{self-diffusion with no salt}} \end{figure} \begin{figure} \includegraphics[width=\ffraction\textwidth]{zero_dens_mob_poly_hs.eps} \caption{Normalized short-time self-diffusion coefficient $\Dself/D_0$ in the low-area-fraction limit $\phi=0$ (for the system with salt) as a function of the suspension polydispersity. \label{self-diffusion low area fraction}} \end{figure} The results of our numerical simulations for the short-time lateral self-diffusion coefficient $\Dself$ in a HS system are presented in Fig.\ \ref{self-diffusion with salt} for a monodisperse suspension and for polydisperse suspensions with $\sigma/d_0=0.1$ and $0.15$. Figure \ref{self-diffusion with no salt} shows the corresponding results for a system of monodisperse hard spheres with particle--wall and particle--particle electrostatic repulsion \eqref{V} and \eqref{V'}. The results depicted in Fig.\ \ref{self-diffusion with salt} indicate that for moderately polydisperse suspensions (in the range corresponding to the polydispersity of silica particles used in the experiments), the self-diffusion coefficient is only weakly dependent on $\sigma/d_0$.
For larger values of the standard deviation of particle diameters, the normalized self-diffusion coefficient $\Dself/D_0$ significantly increases with the degree of polydispersity, because the mobility is dominated by small particles. This increase is illustrated in Fig.\ \ref{self-diffusion low area fraction} for a suspension in the low-area-fraction limit $\phi=0$. The results of our hydrodynamic calculations for a HS suspension and for a suspension with screened electrostatic repulsion are compared with experimental results for suspensions with $\KCl=0.01\Mol$ (Fig.\ \ref{self-diffusion with salt}) and $\KCl=0\Mol$ (Fig.\ \ref{self-diffusion with no salt}). For the system with salt, the measured values are slightly closer to their numerical analogs than in the absence of salt. Summarizing, the experimental and numerical results agree well both for the overall self-diffusivity and for the self-diffusivity in individual particle layers. \section{Discussion} \label{sec:discuss} In this paper we have studied in detail the structure and dynamics of quasi-2D colloidal suspensions near a wall, comparing experiment and theory. Our central result is a rather sharp formation of a distinct second layer at an area fraction of $\phi\sim0.3$. This value is much lower than the area fraction required for close-packing or other 2D structural changes such as the formation of hexatic or crystalline order. One important consequence of this result concerns the apparent self-diffusion of the particles in the suspension and its dependence on particle density. Due to the higher mobility of the particles in the elevated layer, the effective diffusivity is higher and levels off as particle density increases. The experimentally observed behavior could be interpreted incorrectly if one is unaware of the layering (or stratifying) effect.
We find good agreement between experimental and simulation results for the occupation fractions of the first and second layers and for the lateral self-diffusivity (both for the entire suspension and in the individual layers). However, we also find an unexpected discrepancy in the position and the height of the second peak in the near-wall particle distribution. While the source of this discrepancy is unknown, one possibility, related to optical aberrations, is suggested in Appendix \ref{appendix}. On the other hand, the difference between theory and experiment might also be a result of an actual physical effect, such as more complicated electrostatic interactions setting in at higher layer densities. Another new insight put forth in this study is the significant effect that polydispersity has on the occupation and composition of layers close to the bottom wall, even in the case of a relatively small dispersion of particle sizes. The effect of polydispersity is evident already at low densities, since the smaller and larger particles segregate into the upper and lower layers, respectively. We expect the phenomena described here to be quite general and to be manifested in any such system where the sedimentation length $l$ is of the order of the particle diameter. This conclusion is supported by the appearance of the phenomena both in experiments and in Monte Carlo simulations. An important outcome of this paper is the construction of a very simple theoretical quasi-2D model of the layered microstructure in thermodynamic equilibrium. Such systems have been analyzed earlier using density-functional theory \cite{Chen2006}, but our theoretical model is much simpler and easier to apply. We have demonstrated that the model approximates well the experimental and numerical results for the system studied in this work. We conclude with three open issues.
Layering phenomena near a wall are well documented in 3D suspensions as well \cite{VanWinkle1988,GonzalezMozuelos1991,Zurita_Gotor-Blawzdziewicz-Wajnryb:2012}. An interesting question is whether this perturbation to the 3D pair correlation function could be fundamentally related to the sequential layering reported here. The structural features near the wall should also affect two- and many-particle dynamics in the quasi-2D suspensions, which can be characterized by two-point microrheology. Finally, taking a more detailed account of interparticle forces such as strong electrostatic interactions may provide a deeper understanding of the effects observed in this work. \acknowledgments H.D. wishes to thank the Polish Academy of Sciences for its hospitality. This research has been supported by the Israel Science Foundation (Grants No.\ 8/10 and No.\ 164/14) and by the Marie Curie Reintegration Grant (PIRG04-GA-2008-239378). A.S.--S acknowledges funding from the Tel-Aviv University Center for Nanoscience and Nanotechnology. M.L.E.--J. and E.W. were supported in part by Narodowe Centrum Nauki (National Science Centre) under grant No. 2012/05/B/ST8/03010. J.B. would like to acknowledge the financial support from National Science Foundation (NSF) Grant No. CBET 1059745.
\section{Introduction} The origin of high energy cosmic rays (CRs) has been a long-standing problem in astrophysics. In particular, CRs with energy above $\gtrsim 10^{19}~{\rm eV}$ are considered to come from extra-Galactic sources such as active galactic nuclei (AGNs) and gamma-ray bursts (GRBs). In these sources we expect the production of high energy neutrinos ($\gtrsim 0.1~{\rm TeV}$) through interactions of accelerated protons with the ambient photons ($p\gamma$ interactions) or gas ($pp$ or $pn$ interactions) \cite{waxmanbahcall97, murase+06, muraseioka13}. Detection of these neutrinos can provide new information about high energy cosmic-ray sources as well as about the acceleration processes. In cosmic-ray accelerators, high energy neutrinos are mainly produced from the decay of charged pions: $\pi^+ \rightarrow \mu^+ + \nu_{\mu} \rightarrow e^+ +\nu_{\mu} +\nu_e +\bar{\nu}_{\mu}$ and $\pi^- \rightarrow \mu^- + \bar{\nu}_{\mu} \rightarrow e^- + \nu_{\mu} +\bar{\nu}_e + \bar{\nu}_{\mu}$. Therefore, the flavor ratios of these neutrinos are expected to be \begin{eqnarray} \Phi_{\nu_e}^0:\Phi_{\nu_{\mu}}^0:\Phi_{\nu_{\tau}}^0=1:2:0, \end{eqnarray} at the sources, where $\Phi_{\nu_i}^0$ denotes the flux of $\nu_i$ and $\bar{\nu}_i$ ($i=e$, $\mu$ or $\tau$). The observed flavor ratios become $\Phi_{\nu_e}:\Phi_{\nu_{\mu}}:\Phi_{\nu_{\tau}}=1:1:1$ after the neutrino oscillations during the propagation to the Earth \cite{learnedpakvasa95}. However, this argument may be too naive, because it neglects the finiteness of the decay timescales of pions $\pi^{\pm}$ and muons $\mu^{\pm}$. For example, if the cooling timescale \citep{kashtiwaxman05, lipari+07, tamborraando15} or acceleration timescale \citep{murase+12, klein+13, winter+14, reynoso14} of a pion or a muon is shorter than the decay timescale, the spectral shape of neutrinos produced from the decay of those particles would be significantly modified.
In particular, because the decay times of pions and muons are different, the energy dependence of the neutrino fluxes would differ from flavor to flavor. The observed neutrino flavor ratio may also be modified by neutron decay \cite{anchordoqui+04} and by new physics such as neutrino decay \cite{beacom+03, baerwald+12}, sterile neutrinos \cite{athar+00}, pseudo-Dirac neutrinos \cite{beacom+04, esmaili10}, Lorentz or {\it CPT} violation \cite{hooper+05}, quantum gravity \cite{anchordoqui+05} and secret interactions of neutrinos \cite{iokamurase14}. Recently the high energy neutrino detector IceCube has discovered $30~{\rm TeV}-2~{\rm PeV}$ neutrinos \cite{aartsen+14}, which are confirmed to be non-atmospheric at the level of $5.7\sigma$. The IceCube team has also analyzed the flavor composition of astrophysical neutrinos in the energy range of $35~{\rm TeV}-1.9~{\rm PeV}$ and demonstrated consistency with $\Phi_{\nu_e}:\Phi_{\nu_{\mu}}:\Phi_{\nu_{\tau}}=1:1:1$ (although the best-fit composition is $0:0.2:0.8$). In the near future, the next-generation IceCube-Gen2 and KM3NeT experiments will enable the precise study of the energy spectrum of high energy neutrinos and their flavor composition. There has been no study of how the flavor ratio of observed neutrinos, as well as its energy dependence, is modified by the acceleration of pions and muons, although several authors have investigated the neutrino energy spectrum under secondary acceleration \cite{murase+12, klein+13, winter+14, reynoso14}. In addition, their approaches are based on one-zone \cite{klein+13, winter+14} or two-zone \cite{reynoso14} approximations; that is, they do not consider the spatial distribution of secondary particles (pions and muons) and their transport across the shock.
In this work, we investigate the acceleration of pions and muons produced by protons, solving their convection-diffusion equations around the shock front and taking into account their decay into other particles (i.e., pions into muons and muon neutrinos, and muons into electrons, muon neutrinos and electron neutrinos). The shock acceleration of secondary particles has been discussed in the context of the positron excess \cite{blasi09, blasiserpico09, mertschsarkar09, ahlers+09} observed by PAMELA/Fermi LAT/AMS-02 \citep{adriani+11, ackermann+12, aguilar+13} (see also \citep{ioka10, kawanaka+10, kawanaka12}). We extend their formalism by including the decay of secondary particles during their acceleration, and evaluate the energy spectrum of neutrinos produced from those particles. This paper is organized as follows. In Section 2 we formulate a general model for the shock acceleration of pions and muons, which are produced from shock-accelerated protons via photomeson interactions, and give versatile expressions of the neutrino spectra for later application. In Section 3, we show that the acceleration of pions/muons would be possible in low-power GRBs occurring inside their progenitors, and apply our model to that case to compute the neutrino spectra and flavor ratios. Our results and discussions are summarized in Section 4. In Appendix A we summarize general solutions of the convection-diffusion equations for pions (decaying secondary particles), and in Appendix B for muons (decaying tertiary particles). \section{Model} In this section, we describe the shock acceleration of secondary pions and muons that are generated from the protons accelerated at the shock, and investigate the energy spectrum of neutrinos produced by those pions and muons without specifying a particular source. Hereafter we neglect the energy loss of particles due to synchrotron emission and inverse Compton scattering for simplicity (see discussions in Sec. 4).
In the shock rest frame, the transport and shock acceleration of particles decaying into other kinds of particles with a timescale $\tau_i$ can be described by the convection-diffusion equation, \begin{eqnarray} u\frac{\partial f_i}{\partial x}&=&\frac{\partial}{\partial x}\left[ D(p)\frac{\partial f_i}{\partial x} \right] +\frac{p}{3}\frac{du}{dx}\frac{\partial f_i}{\partial p}-\frac{f_i}{\tau_i}+Q_i(x,p), \label{transport} \end{eqnarray} where $f_i(x,p)$ ($i=\pi, \mu$) is the equilibrium distribution function of accelerated particles per unit spatial volume and per unit volume in momentum space, $D(p)$ is the diffusion coefficient, and $u$ is the velocity of the background fluid. For simplicity, we assume that the shock is non-relativistic and that the distribution functions are stationary and isotropic except at the shock front. When the shock is relativistic, we should solve the relativistic version of the convection-diffusion equation taking into account the anisotropy of the particle momentum distribution. The anisotropy is of the order of $\beta_-$, so for a mildly relativistic shock the correction would be less than a factor of two. The shock front is set at $x=0$, and the upstream (downstream) region corresponds to $x<0$ ($x>0$). If we ignore the third term on the right-hand side, which describes the loss of particles due to their decay, this is the well-known equation for the usual diffusive shock acceleration of particles \cite{blandfordeichler87, malkovdrury01}. 
The decay timescales of a pion and a muon are functions of their energy, \begin{eqnarray} \tau_{\pi}&=&\tau_{\pi,0}\frac{\varepsilon_{\pi}}{m_{\pi}c^2} \nonumber \\ &\simeq& 1.9\times 10^{-2}~{\rm s}~\varepsilon_{\pi, 100{\rm TeV}}, \label{pionlife} \\ \tau_{\mu}&=&\tau_{\mu,0}\frac{\varepsilon_{\mu}}{m_{\mu}c^2} \nonumber \\ &\simeq& 2.1~{\rm s}~\varepsilon_{\mu, 100{\rm TeV}}, \label{muonlife} \end{eqnarray} where $\varepsilon_i=100~{\rm TeV}~\varepsilon_{i,100{\rm TeV}}$ is the energy of a particle in the shock rest frame, and $\tau_{\pi,0}\simeq 2.6\times 10^{-8}~{\rm s}$ and $\tau_{\mu,0}\simeq 2.2\times 10^{-6}~{\rm s}$ are the decay timescales of a pion and a muon in their rest frames, respectively. The fourth term on the right-hand side of Eq.~(\ref{transport}), $Q_i(x,p)$, is the distribution function of $i$ particles injected per unit time. Here we consider charged pions produced in $p\gamma$ interactions, $p\gamma \rightarrow \Delta^+ \rightarrow \pi^+ n$. In this case, $Q_{\pi}(x,p)$ should be given from the distribution function of the primary protons. On the other hand, since muons are produced by the decay of pions, $Q_{\mu}(x,p)$ should be given from the distribution of pions (see below). Hereafter, we assume Bohm-type diffusion, \begin{eqnarray} D(p)=\frac{\eta c^2 p}{3eB}, \end{eqnarray} where $e$ is the charge of a particle, $B$ is the magnetic field strength and $\eta$ is the gyrofactor, which is equal to unity in the Bohm limit \cite{drury83}. The fluid velocity is given by \begin{eqnarray} u(x)=\left \{ \begin{array}{ll} u_- & (x \leq 0), \\ u_+ & (x>0), \\ \end{array} \right. \end{eqnarray} where $u_-$ and $u_+$ are constants and the compression ratio is $\sigma=u_-/u_+>1$. 
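As a quick consistency check, the boosted lifetimes in Eqs.~(\ref{pionlife}) and (\ref{muonlife}) can be reproduced directly from the rest-frame values. A minimal sketch (the rest masses are standard values, not quoted in the text):

```python
import math

# Check of Eqs. (pionlife), (muonlife): tau_i = tau_{i,0} * eps_i / (m_i c^2)
TAU_PI0 = 2.6e-8   # s, charged-pion rest-frame lifetime
TAU_MU0 = 2.2e-6   # s, muon rest-frame lifetime
M_PI = 139.57e6    # eV, charged-pion rest energy (standard value)
M_MU = 105.66e6    # eV, muon rest energy (standard value)

def decay_time(tau0, mc2_eV, eps_eV):
    """Shock-frame decay timescale of a particle with energy eps_eV."""
    return tau0 * eps_eV / mc2_eV

tau_pi = decay_time(TAU_PI0, M_PI, 100e12)   # at 100 TeV
tau_mu = decay_time(TAU_MU0, M_MU, 100e12)
print(f"tau_pi(100 TeV) = {tau_pi:.2e} s")   # ~1.9e-2 s
print(f"tau_mu(100 TeV) = {tau_mu:.2e} s")   # ~2.1 s
```

Both numbers reproduce the quoted coefficients to within a few percent.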
One should solve Eq.(\ref{transport}) taking the following boundary conditions into account: \begin{eqnarray} &{\rm (i)}& \lim_{x \to -0}f_i=\lim_{x \to +0}f_i, \label{cd1} \\ &{\rm (ii)}& \lim_{x \to -\infty}f_i=0,~\lim_{x \to +\infty}f_i<\infty, \label{cd2} \\ &{\rm (iii)}& \left[ D(p)\frac{\partial f_i}{\partial x} \right]_{x=+0}^{x=-0}=\frac{1}{3}(u_+-u_-)p \left. \frac{\partial f_i}{\partial p} \right| _{x=0}, \label{cd3} \end{eqnarray} where (iii) comes from the integration of Eq.(\ref{transport}) across the shock front. This condition yields the differential equation for the distribution function at the shock front $f_{i,0}(p)\equiv f_i(x=0,p)$ with respect to $p$ (see Appendix A). The details of the general solution of Eq.~(\ref{transport}) are presented in Appendix A. In the following subsections we briefly describe the properties of the derived distribution functions of pions and muons. \subsection{pion acceleration} In this subsection we show the pion distribution function evaluated from Eq.(\ref{transport}). Pions are produced through the interactions between the protons accelerated in the shock and the ambient photons. The distribution of protons is given by \begin{eqnarray} f_p(x,p)=\left \{ \begin{array}{ll} f_{p,0}(p)\exp [ x u_-/D(p) ] & (x\leq 0), \\ f_{p,0}(p) & (x>0), \label{protondis} \\ \end{array} \right. \end{eqnarray} where $f_{p,0}(p)$ is the proton distribution function at the shock front, which is proportional to $p^{-\gamma}$. This expression (\ref{protondis}) is a well-known solution of the convection-diffusion equation (\ref{transport}) \cite{blandfordeichler87, malkovdrury01, drury83}. For simplicity we assume that the ambient photon field is uniform. 
Since pions are produced from shock-accelerated protons, the production spectrum at the source of pions $Q_{\pi}$ is proportional to that of primary protons, which can be described as \begin{eqnarray} Q_{\pi}(x,p)=\left \{ \begin{array}{ll} Q_{\pi,0}(p)\exp \left[ xu_-/D(p_{\rm p}) \right] & (x \leq 0),\\ Q_{\pi,0}(p) & (x>0),\\ \end{array} \right. \label{pioninj} \end{eqnarray} where $p_{\rm p}$ is the momentum of a primary proton generating a secondary pion with a momentum $p$, and these momenta are approximately related in a linear way: \begin{eqnarray} p\approx \xi_{\pi}p_{\rm p}, \end{eqnarray} where $\xi_{\pi}\approx 0.2$ is the ratio of the energy of a pion to that of a primary proton \cite{waxmanbahcall97}. We can solve Eq. (\ref{transport}) for pions by using Eqs. (\ref{solution}), (\ref{gipm}), and (\ref{hipm}) with $Q_{\pi}(x,p)$ in Eq. (\ref{pioninj}), and obtain the pion distribution functions in the upstream $f_{\pi,-}(x,p)$ and downstream $f_{\pi,+}(x,p)$ as \begin{eqnarray} f_{\pi,-}&=&\left[f_{\pi,0}-\frac{DQ_{\pi,0}}{D/\tau_{\pi}+(\xi_{\pi}-\xi_{\pi}^2)u_-^2}\right] \exp\left( \frac{\sqrt{u_-^2+4D/\tau_{\pi}}+u_-}{2D}x \right)+\frac{DQ_{\pi,0}}{D/\tau_{\pi}+(\xi_{\pi}-\xi_{\pi}^2)u_-^2}\exp \left( \frac{\xi_{\pi}u_-}{D}x\right), \label{pionus} \\ f_{\pi,+}&=&\left( f_{\pi,0}-Q_{\pi,0}\tau_{\pi}\right) \exp \left(-\frac{\sqrt{u_+^2+4D/\tau_{\pi}}-u_+}{2D}x \right) +Q_{\pi,0}\tau_{\pi}. \label{pionds} \end{eqnarray} The pion distribution functions at the shock front $f_{\pi,0}\equiv f_{\pi}(x=0,p)$ can be evaluated from Eqs. 
(\ref{ai}), (\ref{gip}) and (\ref{fizeroint}) as \begin{eqnarray} f_{\pi,0}(p)=\gamma B_{\pi}\int_0^p \frac{dp^{\prime}}{p^{\prime}}\left( \frac{p^{\prime}}{p} \right)^{\gamma A_{\pi}}\frac{D(p^{\prime})Q_{\pi,0}(p^{\prime})}{u_-^2}, \label{fpizero} \end{eqnarray} where $A_{\pi}$ and $B_{\pi}$ are numerical factors, independent of $p$ (since both $D$ and $\tau_{\pi}$ are proportional to $p$): \begin{eqnarray} A_{\pi}&=&\frac{1}{2}\left[ \left( \sqrt{1+\frac{4D}{\tau_{\pi} u_-^2}}+1\right)+\left( \sqrt{\frac{1}{\sigma^2}+\frac{4D}{\tau_{\pi} u_-^2}}-\frac{1}{\sigma} \right) \right], \\ B_{\pi}&=&\frac{2}{\sqrt{1+4D/\tau_{\pi}u_-^2}-(1-2\xi_{\pi})} +\frac{2\sigma}{\sqrt{1+4D/\tau_{\pi}u_+^2}+1}, \end{eqnarray} where $\sigma=u_-/u_+$ is the compression ratio. We can see that the distribution function of pions at the shock front, Eq.~(\ref{fpizero}), becomes harder than their production spectrum $Q_{\pi,0}$ by $p^1$ [$\propto D(p)$], similar to Eq.~(6) of \cite{blasi09}, where the acceleration of secondary positrons produced at supernova remnant shocks is discussed. The difference is that we take into account the decay of particles, the third term on the right-hand side of Eq.~(\ref{transport}), in the convection-diffusion equation of the secondary particles, which is reflected in the numerical factors $A_{\pi}$ and $B_{\pi}$. To see the effects of the transport and acceleration of pions in the downstream region from Eq.~(\ref{pionds}), we divide $f_{\pi,+}(x,p)$ into two components: $f_{\pi,{\rm acc}}(x,p)$ and $f_{\pi,{\rm nonacc}}(x,p)$. The former component $f_{\pi,{\rm acc}}$ represents the pions that are reaccelerated at the shock, being proportional to $D(p)Q_{\pi,0}(p)$. 
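The downstream solution (\ref{pionds}) can also be checked directly: for constant $u_+$ and uniform downstream injection, it must satisfy $u_+ \partial_x f = D\,\partial_x^2 f - f/\tau_{\pi} + Q_{\pi,0}$ identically. A minimal finite-difference sketch (illustrative parameter values in arbitrary units, not taken from the text):

```python
import math

# Illustrative downstream parameters in arbitrary consistent units
u_plus, D, tau, Q0, f0 = 0.25, 1.0, 2.0, 3.0, 10.0

# Downstream decay length scale appearing in Eq. (pionds)
lam = (math.sqrt(u_plus**2 + 4 * D / tau) - u_plus) / (2 * D)

def f(x):
    # Downstream solution, Eq. (pionds): relaxes to Q0*tau far downstream
    return (f0 - Q0 * tau) * math.exp(-lam * x) + Q0 * tau

def residual(x, h=1e-3):
    # u f' - D f'' + f/tau - Q0 should vanish for the true solution
    fp = (f(x + h) - f(x - h)) / (2 * h)
    fpp = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return u_plus * fp - D * fpp + f(x) / tau - Q0

print([residual(x) for x in (0.1, 1.0, 5.0)])   # all ~0 (finite-difference error only)
```

The residual vanishes because $\lambda$ is the decaying root of $D\lambda^2 + u_+\lambda - 1/\tau_{\pi} = 0$, and the constant term $Q_{\pi,0}\tau_{\pi}$ balances decay against injection.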
On the other hand, the latter component $f_{\pi,{\rm nonacc}}$ represents the pions that are produced from the protons and advected in the downstream region, being proportional to $Q_{\pi,0}(p)\tau_{\pi}$: \begin{eqnarray} f_{\pi,{\rm acc}}(x,p)&=&f_{\pi,0}(p)\exp \left( -\frac{\sqrt{u_+^2+4D/\tau_{\pi}}-u_+}{2D}x \right), \label{piacc} \\ f_{\pi,{\rm nonacc}}(x,p)&=&Q_{\pi,0}(p)\tau_{\pi}\left[ 1-\exp \left( -\frac{\sqrt{u_+^2+4D/\tau_{\pi}}-u_+}{2D}x \right) \right]. \label{piadv} \end{eqnarray} Hereafter, since the number of upstream pions, $\int_{x<0}dx^3 f_{\pi,-}(x,p)$, is subdominant compared to that of downstream pions, $\int_{x>0}dx^3 f_{\pi,+}(x,p)$, we only discuss the contribution of the downstream pions to the neutrino spectra. In the limit of $D/\tau_{\pi} u_-^2\rightarrow 0$ (i.e., the lifetime of a pion is much longer than the acceleration timescale, $t_{\rm acc}\equiv D/u_-^2$), Eq.~(\ref{fpizero}) has the same form as Eq.~(6) of \cite{blasi09}. In this limit, we have $A_{\pi}\approx 1$, $B_{\pi}\approx \xi_{\pi}^{-1}+\sigma$. Therefore, the distribution function at the shock front $f_{\pi,0}$ in Eq.~(\ref{fpizero}) can be approximated as \begin{eqnarray} f_{\pi,0}\simeq \frac{\gamma}{\gamma -\alpha +1}\left( \frac{1}{\xi_{\pi}}+ \sigma \right) \frac{D(p)Q_{\pi,0}(p)}{u_-^2} \label{fpizeroapp}, \end{eqnarray} where $\alpha$ is the power-law index of the production spectrum, $Q_{\pi,0}(p)\propto p^{-\alpha}$, and $\alpha\approx 4$ in the strong shock limit with an adiabatic index $5/3$. We can see that, since we assume $D(p)\propto p$, the resulting spectrum is proportional to $p^{-\alpha +1}$, harder than the production spectrum by one power of $p$. This can be interpreted as the result of the secondary acceleration: since pions produced from shock-accelerated protons can cross the shock front before their decay, they gain energy and their spectrum becomes harder. We should also note that $f_{\pi,0}$ is proportional to $t_{\rm acc}Q_{\pi,0}$. 
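The limiting values $A_{\pi}\to 1$ and $B_{\pi}\to \xi_{\pi}^{-1}+\sigma$, and the one-power hardening of Eq.~(\ref{fpizero}), can be verified numerically. A minimal sketch, assuming a strong shock ($\sigma=4$, $\xi_{\pi}=0.2$, $\gamma=\alpha=4$) and setting the constant normalization $D_1 Q_0/u_-^2$ to unity:

```python
import math

sigma, xi_pi = 4.0, 0.2          # compression ratio and pion energy fraction
gamma_idx, alpha = 4.0, 4.0      # proton index and pion injection index

def A_pi(x):
    # A_pi with x = D/(tau_pi u_-^2) = t_acc/tau_pi (p-independent)
    return 0.5 * ((math.sqrt(1 + 4 * x) + 1)
                  + (math.sqrt(1 / sigma**2 + 4 * x) - 1 / sigma))

def B_pi(x):
    # second term uses u_+ = u_-/sigma, i.e. 4D/(tau_pi u_+^2) = 4 x sigma^2
    return (2 / (math.sqrt(1 + 4 * x) - (1 - 2 * xi_pi))
            + 2 * sigma / (math.sqrt(1 + 4 * x * sigma**2) + 1))

A, B = A_pi(1e-10), B_pi(1e-10)  # limit t_acc << tau_pi
print(A, B)                      # ~1.0 and ~1/xi_pi + sigma = 9.0

def f_pi0(p):
    # Trapezoidal quadrature of Eq. (fpizero) with D ~ p', Q_pi0 ~ p'^-alpha,
    # substituting p' = e^t; overall constants set to 1
    t0, t1, n = math.log(p) - 25.0, math.log(p), 4000
    h = (t1 - t0) / n
    s = sum((0.5 if k in (0, n) else 1.0)
            * math.exp((gamma_idx * A - alpha + 1) * (t0 + k * h))
            for k in range(n + 1))
    return gamma_idx * B * p**(-gamma_idx * A) * s * h

slope = (math.log(f_pi0(10.0)) - math.log(f_pi0(1.0))) / math.log(10.0)
print(slope)                     # ~ -(alpha - 1) = -3: one power harder than Q ~ p^-4
```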
In this limit of $D/\tau_{\pi} u_-^2\rightarrow 0$, the pion distribution function in the downstream region (\ref{pionds}) can be approximated as \begin{eqnarray} f_{\pi,+}\simeq f_{\pi,0}(p)\exp \left( -\frac{x}{u_+ \tau_{\pi}} \right) +Q_{\pi,0}(p)\tau_{\pi}\left[ 1-\exp \left( -\frac{x}{u_+ \tau_{\pi}} \right) \right], \end{eqnarray} where we use $\sqrt{u_+^2+4D/\tau_{\pi}}-u_+\simeq 2D/(u_+ \tau_{\pi})$. Here the first term corresponds to the pions reaccelerated at the shock, and its damping length scale $u_+ \tau_{\pi}$ is the distance over which a pion is advected with the fluid during its lifetime. The second term represents the pions that are produced from protons in the downstream region and simply advected further downstream. On the other hand, in the upstream region, Eq.~(\ref{pionus}) can be approximated as \begin{eqnarray} f_{\pi,-}\simeq \left( f_{\pi,0}(p)-\frac{1}{\xi_{\pi}-\xi_{\pi}^2}\frac{D(p)Q_{\pi,0}(p)}{u_-^2} \right) \exp \left( \frac{u_-}{D}x \right) +\frac{1}{\xi_{\pi}-\xi_{\pi}^2}\frac{D(p)Q_{\pi,0}(p)}{u_-^2} \exp \left( \frac{\xi_{\pi}u_-}{D}x \right), \end{eqnarray} where we use $\sqrt{u_-^2+4D/\tau_{\pi}}+u_-\simeq 2u_-$. \subsection{muon acceleration} Using the results of the previous subsection, we can evaluate the distribution function of muons produced by the decay of pions. The muon injection spectrum $Q_{\mu}$ follows from the pion distribution function as \begin{eqnarray} Q_{\mu,\pm}=\frac{f_{\pi,\pm}(p/\xi_{\mu})}{\tau_{\pi}(p/\xi_{\mu})}\frac{dp_{\pi}}{dp}=\frac{1}{\xi_{\mu}}\frac{f_{\pi,\pm}(p/\xi_{\mu})}{\tau_{\pi}(p/\xi_{\mu})}, \label{muoninj} \end{eqnarray} where $\xi_{\mu}\approx 0.75$ is the ratio of the momentum of a muon $p$ to that of its parent pion $p_{\pi}$. Using the method shown in Appendix A, we can solve the muon convection-diffusion equation and derive $f_{\mu,\pm}(x,p)$. The details of the solutions are given in Appendix B. 
In the limit of $D/u_-^2 \ll \tau_{\pi},~\tau_{\mu}$ (i.e., the lifetimes of pions and muons are much longer than the acceleration timescale), the muon distribution function at the shock front $f_{\mu,0}(p)$ can be approximated as \begin{eqnarray} f_{\mu,0}\simeq \frac{\gamma \xi_{\mu}^{\alpha-1}}{\gamma-\alpha+1}\left[ \left( \frac{1}{\xi_{\mu}}+\sigma \right) \left( \frac{1}{\xi_{\pi}}+\sigma \right) \frac{\gamma}{\gamma-\alpha+1}+\frac{1}{\xi_{\mu} \xi_{\pi}^2} \right] \left( \frac{D(p)}{u_-^2} \right)^2 \frac{1}{\tau_{\pi}} Q_{\pi,0}(p) \label{fmuzeroapp}, \end{eqnarray} where we assume a power-law spectrum of pion production, $Q_{\pi,0}(p)\propto p^{-\alpha}$. Since $D(p)\propto p$ and $\tau_{\pi}\propto p$, we can see that this spectrum, Eq.~(\ref{fmuzeroapp}), is proportional to $p^{-\alpha+1}$, similar to the pion distribution function at the shock $f_{\pi,0}$ shown in Eq.~(\ref{fpizeroapp}). This can be interpreted as follows. As stated in Eq.~(\ref{muoninj}), the production spectrum of muons is proportional to $f_{\pi}/\tau_{\pi}$. Since the lifetime of a pion $\tau_{\pi}$ is proportional to $p$, the production spectrum at the shock is proportional to $Q_{\mu,0}\simeq f_{\pi,0}/\tau_{\pi}\sim p^{-\alpha+1}/p=p^{-\alpha}$. Injected muons are reaccelerated at the shock and, by an argument similar to that for pions, the muon spectrum at the shock becomes harder than the injected spectrum by $p^1$, which comes from the dependence of $D(p)$ on $p$. We should also note that $f_{\mu,0}$ is proportional to $(t_{\rm acc}/\tau_{\pi})t_{\rm acc}Q_{\pi,0}$. 
In a similar way to the pion distribution function, the muon distribution function in the downstream region $f_{\mu,+}(x,p)$ can be divided into two components, $f_{\mu,{\rm acc}}(x,p)$ and $f_{\mu,{\rm nonacc}}(x,p)$, as follows: \begin{eqnarray} f_{\mu,{\rm acc}}(x,p)&=&f_{\mu,0}\exp \left( -\frac{\sqrt{u_+^2+4D/\tau_{\mu}}-u_+}{2D}x \right), \label{muacc} \\ f_{\mu,{\rm nonacc}}(x,p)&=&f_{\mu,+}(x,p)-f_{\mu,0}\exp \left( -\frac{\sqrt{u_+^2+4D/\tau_{\mu}}-u_+}{2D}x \right), \label{muadv} \end{eqnarray} and, as in the case of pions, the number of upstream muons, $\int_{x<0}dx^3f_{\mu,-}(x,p)$, is subdominant compared to that of downstream muons, $\int_{x>0}dx^3f_{\mu,+}(x,p)$. \subsection{neutrino spectra} From the pion and muon distribution functions calculated above, the neutrino spectra can be obtained as follows: \begin{eqnarray} \Phi_{\nu_{\mu}}^0 (p)&=& \int dx^3 \frac{4\pi p^2}{\xi_{\nu_{\mu}}}\frac{f_{\pi}(x, p/\xi_{\nu_{\mu}})}{\tau_{\pi}(p/\xi_{\nu_{\mu}})}, \label{numu} \\ \Phi_{\bar{\nu}_{\mu}}^0 (p)&=& \int dx^3 \frac{4\pi p^2}{\xi_{\bar{\nu}_{\mu}}} \frac{f_{\mu}(x,p/\xi_{\bar{\nu}_{\mu}})}{\tau_{\mu}(p/\xi_{\bar{\nu}_{\mu}})}, \label{numubar} \\ \Phi_{\nu_e}^0 (p)&=&\int dx^3 \frac{4\pi p^2}{\xi_{\nu_e}}\frac{f_{\mu}(x,p/{\xi_{\nu_e}})}{\tau_{\mu}(p/\xi_{\nu_e})}, \label{nue} \end{eqnarray} where $\xi_{\nu_{\mu}}$, $\xi_{\bar{\nu}_{\mu}}$ and $\xi_{\nu_e}$ are the ratios of the energy of a muon neutrino, an anti-muon neutrino, and an electron neutrino to that of their parent particles, respectively. Since each lepton produced in the decay chain of a pion ($e$, $\nu_{\mu}$, $\bar{\nu}_{\mu}$ and $\nu_e$) carries approximately equal energy (i.e., 1/4 of that of the primary pion), we set $\xi_{\nu_{\mu}}\approx 0.25$ (relative to the parent pion) and $\xi_{\bar{\nu}_{\mu}}\approx \xi_{\nu_e}\approx 0.33$ (relative to the parent muon). The volume integral consists of a surface integral over the shocked region and an integral along the shock normal. 
In particular, defining the dynamical timescale $t_{\rm dyn}$ as the time for the shock to cross the system, the latter integral should run from $x\approx -\beta_-ct_{\rm dyn}$ to $x\approx \beta_+ct_{\rm dyn}$. We divide the neutrino energy spectrum into two components according to the decomposition of the pion/muon distribution functions shown in Eqs. (\ref{piacc}), (\ref{piadv}), (\ref{muacc}) and (\ref{muadv}): \begin{eqnarray} \Phi_{\nu_{\mu},{\rm acc}}^0 (p)&=& \int dx^3 \frac{4\pi p^2}{\xi_{\nu_{\mu}}}\frac{f_{\pi,{\rm acc}}(x,p/\xi_{\nu_{\mu}})}{\tau_{\pi}(p/\xi_{\nu_{\mu}})}, \\ \Phi_{\nu_{\mu},{\rm nonacc}}^0 (p)&=&\int dx^3 \frac{4\pi p^2}{\xi_{\nu_{\mu}}}\frac{f_{\pi,{\rm nonacc}}(x,p/\xi_{\nu_{\mu}})}{\tau_{\pi}(p/\xi_{\nu_{\mu}})}, \end{eqnarray} and $\Phi_{\bar{\nu}_{\mu},{\rm acc/nonacc}}^0$ and $\Phi_{\nu_e,{\rm acc/nonacc}}^0$ are defined in similar ways. We should also consider neutrino oscillations during the propagation from the source to the Earth. When neutrinos propagate over distances much longer than $\sim \hbar c \epsilon_{\nu}/\Delta m^2 c^4$ ($\Delta m^2$ is the squared mass difference: $\Delta m_{12}^2\simeq 8.0\times 10^{-5}~{\rm eV}^2$, $|\Delta m_{23}^2|\simeq 2.5\times 10^{-3}~{\rm eV}^2$), the observed fluxes of neutrinos $\Phi_{\nu_x}$ ($x=e, \mu, \tau$) are described as \begin{eqnarray} \Phi_{\nu_x}=\sum_y P_{xy}\Phi_{\nu_y}^0=\sum_y \sum_i \left| U_{xi} \right| ^2 \left| U_{yi} \right| ^2 \Phi_{\nu_y}^0, \label{mixing} \end{eqnarray} where $U_{xi}$ is the neutrino mixing matrix and the subscript $i$ labels the mass eigenstates of neutrinos. The matrix elements of $U_{xi}$ can be described by the mixing angles $\theta_{12}$, $\theta_{23}$, and $\theta_{31}$, and the Dirac phase $\delta$. Based on \cite{fogli+12}, we adopt $\sin ^2\theta_{12}\simeq 0.31$, $\sin ^2\theta_{23}\simeq 0.39$, $\sin ^2\theta_{31}\simeq 0.024$, and $\delta\simeq 1.1\pi$. 
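Eq.~(\ref{mixing}) is straightforward to evaluate with the quoted mixing parameters. A minimal sketch, assuming the standard PMNS parameterization (not spelled out in the text), applied to the canonical $1:2:0$ pion-decay source composition:

```python
import cmath, math

# Mixing parameters quoted in the text (fogli+12)
s12, s23, s13 = (math.sqrt(v) for v in (0.31, 0.39, 0.024))
c12, c23, c13 = (math.sqrt(1 - v) for v in (0.31, 0.39, 0.024))
e = cmath.exp(1j * 1.1 * math.pi)   # Dirac phase delta = 1.1 pi

# Standard PMNS matrix (rows: e, mu, tau; columns: mass eigenstates 1, 2, 3)
U = [
    [c12 * c13, s12 * c13, s13 / e],
    [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
]

# Averaged conversion matrix of Eq. (mixing): P_xy = sum_i |U_xi|^2 |U_yi|^2
P = [[sum(abs(U[x][i])**2 * abs(U[y][i])**2 for i in range(3)) for y in range(3)]
     for x in range(3)]

def observed(src):
    """Flavor fluxes at the Earth for a given source composition [e, mu, tau]."""
    return [sum(P[x][y] * src[y] for y in range(3)) for x in range(3)]

print(observed([1.0, 2.0, 0.0]))   # close to [1, 1, 1]
```

With these parameters the $1:2:0$ source composition averages to roughly $1:1:1$ at the Earth, as expected for the photomeson process.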
\section{Applications to Low-Power GRBs} Now we consider long GRBs as neutrino sources. GRBs are thought to produce high-energy neutrinos \cite{waxmanbahcall97}. In the standard model, the emission of long GRBs is believed to be produced by relativistic jets launched when a massive star collapses and a stellar-mass black hole is formed. In order for the jet to be observed as a GRB, it should penetrate the stellar envelope; otherwise the jet would stall inside the star and the gamma-ray emission would not be observed \cite{reesmeszaros94}. The prompt emission is often interpreted as synchrotron emission from non-thermal electrons accelerated at internal shocks. It is natural to consider proton acceleration and the associated production of high energy neutrinos via $pp$/$p\gamma$ interactions \cite{waxmanbahcall97}. However, IceCube has given stringent upper limits on GRBs \cite{abbasi+11, abbasi+12} and has ruled out typical long GRBs as the main source of the observed diffuse neutrino events \cite{hummer+12, he+12, gao+13}. Instead of ordinary GRBs, we investigate high-energy neutrino production by low-power GRBs such as low-luminosity GRBs (LLGRBs) and ultralong GRBs (ULGRBs), which are still not strongly constrained by IceCube. In these low-power GRBs the high energy neutrinos may be produced inside the progenitor star \cite{meszaroswaxman01, razzaque+04, horiuchiando08}. While a jet is penetrating the stellar envelope, it is decelerated and becomes cylindrical by passing through the collimation shock. Internal shocks would also occur when there is spatial inhomogeneity in the jet. Murase \& Ioka \cite{muraseioka13} recently investigated such high energy neutrino production expected from LLGRBs \cite{soderberg+06} and ULGRBs \cite{gendre+13, levan+14}, which have longer durations ($\sim 10^3-10^4~{\rm s}$) and lower luminosities ($L_{\gamma}\sim 10^{46}-10^{50}~{\rm erg}~{\rm s}^{-1}$) than typical long GRBs. 
It has been suggested that ultralong GRBs have larger progenitors such as blue supergiants (BSGs), with radii of $\sim 10^{12}-10^{13}~{\rm cm}$ \cite{suwaioka11, kashiyama+13, nakauchi+13}. We apply our model of neutrino production to such GRB jets inside stars, taking into account the secondary acceleration and decay of pions/muons that are produced by shock-accelerated protons via $p\gamma$ interactions. The internal shocks of GRBs are considered to be mildly relativistic in the shock rest frame. Let us evaluate the important timescales in our model by considering the internal shock scenario of GRBs. When two shells are ejected with comparable Lorentz factors of order $\Gamma$ from the central engine with a time separation $\Delta t$, these shells collide and make an internal shock at the radius $r \sim r_{\rm is} \sim \Gamma ^2 c\Delta t$. Here the magnetic field energy density can be estimated as $U_B\equiv L_B/(4\pi r_{\rm is}^2 c \Gamma^2)$, where $L_B$ is the magnetic luminosity. 
Then we can estimate the acceleration timescale $t_{\rm acc}$, the synchrotron cooling timescale $t_{i,{\rm syn}}$ as functions of the energy of a particle, and the dynamical timescale $t_{\rm dyn}$ in the shock rest frame as follows: \begin{eqnarray} t_{\rm acc}&=&\frac{D(p)}{u_-^2}=\frac{\eta \varepsilon_i}{3ceB \beta_-^2} \nonumber \\ &\simeq &4.4\times 10^{-5}~{\rm s}~\frac{\eta \varepsilon_{i, 100{\rm TeV}} \Gamma_2^3 \Delta t_{\rm ms}}{L_{B,47}^{1/2} \beta_-^2}, \\ t_{\pi, {\rm syn}}&=&\frac{9m_{\pi}^4 c^7}{4e^4 B^2 \varepsilon_{\pi}} \nonumber \\ &\simeq&3.0~{\rm s}~\frac{\Gamma_2^6 \Delta t_{\rm ms}^2}{L_{B,47}\varepsilon_{\pi,100{\rm TeV}}}, \\ t_{\mu, {\rm syn}}&=&\frac{9m_{\mu}^4 c^7}{4e^4 B^2 \varepsilon_{\mu}} \nonumber \\ &\simeq& 0.99~{\rm s}~\frac{\Gamma_2^6 \Delta t_{\rm ms}^2}{L_{B,47}\varepsilon_{\mu,100{\rm TeV}}}, \\ t_{\rm dyn}&=&\frac{r_{\rm is}}{\beta_- c\Gamma} \nonumber \\ &=&0.10~{\rm s}~\Gamma_2\Delta t_{\rm ms}\beta_-^{-1}, \end{eqnarray} where $\varepsilon_i=100~{\rm TeV}~\varepsilon_{i, 100~{\rm TeV}}$ is the energy of a particle $i$ ($i=\pi$ or $\mu$) in the shock rest frame, $\Gamma_2=\Gamma/10^2$, $\Delta t_{\rm ms}=\Delta t/(10^{-3}~{\rm s})$, $L_{B,47}=L_B/(10^{47}~{\rm erg}~{\rm s}^{-1})$, and $m_{\pi}\approx 140~{\rm MeV}$ and $m_{\mu}\approx 106~{\rm MeV}$ are the masses of a charged pion and a muon, respectively. From Eqs. (\ref{pionlife}) and (\ref{muonlife}), a pion can be accelerated at the source before its decay when $t_{\rm acc}<\tau_{\pi}$, i.e., \begin{eqnarray} \frac{\eta\Gamma_2^3 \Delta t_{\rm ms}}{L_{B,47}^{1/2}\beta_-^2}\lesssim 4.4\times 10^2, \end{eqnarray} while a muon can be accelerated before its decay when \begin{eqnarray} \frac{\eta\Gamma_2^3 \Delta t_{\rm ms}}{L_{B,47}^{1/2}\beta_-^2}\lesssim 4.7\times 10^4. \end{eqnarray} Note that, since both $t_{\rm acc}$ and $\tau_{\pi}$ ($\tau_{\mu}$) are proportional to the energy of a pion (a muon), these conditions are independent of the energy of particles. 
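The quoted prefactor of $t_{\rm acc}$ can be reproduced from $t_{\rm acc}=D(p)/u_-^2$ with $B$ derived from $U_B=L_B/(4\pi r_{\rm is}^2 c \Gamma^2)=B^2/8\pi$. A minimal sketch at the fiducial parameters (cgs units; $\beta_-=1$, $\eta=1$ so that the prefactor itself is recovered):

```python
import math

# Fiducial internal-shock parameters used in the prefactors above
L_B, Gamma, dt, beta, eta = 1e47, 100.0, 1e-3, 1.0, 1.0
c, e_charge = 2.998e10, 4.803e-10       # cm/s, esu
eps = 100e12 * 1.602e-12                # 100 TeV in erg

r_is = Gamma**2 * c * dt                          # internal-shock radius
B = math.sqrt(2 * L_B / (r_is**2 * c * Gamma**2))  # from U_B = B^2 / 8 pi
t_acc = eta * eps / (3 * c * e_charge * B * beta**2)

print(f"B = {B:.2e} G, t_acc = {t_acc:.2e} s")    # t_acc ~ 4.4e-5 s
# Pion acceleration condition t_acc < tau_pi at 100 TeV (tau_pi ~ 1.9e-2 s):
print(1.9e-2 / t_acc)                             # ~4e2, matching the quoted threshold
```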
Under these conditions, pions (muons) can be accelerated at the shock before they decay, and therefore their spectra would become harder. On the other hand, we can see that the synchrotron cooling timescale would be shorter than the acceleration timescale when the energy $\varepsilon_i$ in the shock rest frame is higher than $\varepsilon_{i, 0}$, where \begin{eqnarray} \varepsilon_{\pi,0}\simeq 2.7\times 10^{16}~{\rm eV}\frac{\Gamma_2^{3/2} \Delta t_{\rm ms}^{1/2}\beta_-}{L_{B,47}^{1/4}\eta^{1/2}}, \label{epizero} \\ \varepsilon_{\mu,0}\simeq 1.9\times 10^{16}~{\rm eV}\frac{\Gamma_2^{3/2} \Delta t_{\rm ms}^{1/2}\beta_-}{L_{B,47}^{1/4}\eta^{1/2}}. \label{emuzero} \end{eqnarray} In order to evaluate the timescales of inverse Compton cooling and $p\gamma$ interactions, we should specify the target photon spectrum in the local rest frame. In the case of an internal shock occurring inside a star, the accelerated particles mainly interact with photons that are produced in the jet head and escape back from there. Here we estimate the spectrum of the target photon field according to the procedure adopted in \cite{muraseioka13}. At the head of the collimated jet, the photon temperature $T_{\rm cj}$ is given as \begin{eqnarray} k_{\rm B}T_{\rm cj}&\approx&k_{\rm B}\left( \frac{L}{4\pi r_{\rm cs}^2 \Gamma_{\rm cj}^2 \cdot 4\sigma_{\rm SB}} \right)^{1/4} \nonumber \\ &\approx &0.52~{\rm keV} \epsilon_{B,-2}^{-1/4}L_{B,47}^{1/4}r_{{\rm cs},11.5}^{-1/2}(\Gamma_{\rm cj}/5)^{-1/2}, \end{eqnarray} where $L=L_B/\epsilon_B$ is the total jet luminosity ($\epsilon_B=0.01\epsilon_{B,-2}$ is the fraction of the magnetic energy), $k_{\rm B}$ is the Boltzmann constant, $\sigma_{\rm SB}$ is the Stefan-Boltzmann constant, $r_{\rm cs}$ is the radius where the jet becomes cylindrical through the collimation shock, and $\Gamma_{\rm cj}$ is the Lorentz factor of the collimated jet (note that this is different from the Lorentz factor of the precollimated jet $\Gamma$). 
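The quoted photon temperature can be checked directly from the expression above. A minimal sketch at the fiducial parameters ($L_B=10^{47}~{\rm erg~s^{-1}}$, $\epsilon_B=0.01$, $r_{\rm cs}=10^{11.5}~{\rm cm}$, $\Gamma_{\rm cj}=5$):

```python
import math

L_B, eps_B = 1e47, 0.01
L = L_B / eps_B               # total jet luminosity, erg/s
r_cs = 10**11.5               # cm
Gamma_cj = 5.0
sigma_SB = 5.6704e-5          # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
k_B = 8.617e-5                # Boltzmann constant, eV/K

# T_cj = [L / (4 pi r_cs^2 Gamma_cj^2 * 4 sigma_SB)]^(1/4)
T_cj = (L / (4 * math.pi * r_cs**2 * Gamma_cj**2 * 4 * sigma_SB))**0.25
print(f"k_B T_cj = {k_B * T_cj / 1e3:.2f} keV")   # ~0.52 keV
```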
The fraction of photons escaping the collimated jet is $f_{\rm esc}\approx (n_{\rm cj}\sigma_T r_{\rm cs}/\Gamma_{\rm cj})^{-1}$ where $n_{\rm cj}\approx L/(4\pi r_{\rm cs}^2\Gamma_{\rm cj}\Gamma m_p c^3)$ is the comoving proton number density in the collimated jet, and $\sigma_T$ is the Thomson cross section. Therefore, the number density of the target photons is given as \begin{eqnarray} n_{\gamma}^j&\approx &\frac{\Gamma}{2\Gamma_{\rm cj}}f_{\rm esc}n_{\gamma}^{\rm cj} \nonumber \\ &\approx & 9.8\times 10^{21}~{\rm cm}^{-3}~\epsilon_{B,-2}^{-1}L_{B,47}^{-1/4}r_{{\rm cs},11.5}^{-1/2}\Gamma_2^2 (\Gamma_{\rm cj}/5)^{-1/2}, \end{eqnarray} where $n_{\gamma}^{\rm cj}=16\pi \zeta(3)(k_{\rm B}T_{\rm cj})^3/(ch)^3$ is the comoving photon number density in the collimated jet, and $\zeta(n)$ is the Riemann zeta function. We assume that the escaping photon field has a thermal spectrum, \begin{eqnarray} \frac{dn}{d\varepsilon}=\frac{8\pi \varepsilon^2}{c^3 h^3}\frac{1}{e^{\varepsilon/k_{\rm B}T_{\rm eff}}-1}, \end{eqnarray} with the effective temperature of $k_{\rm B}T_{\rm eff}\approx [(\Gamma/2\Gamma_{\rm cj})f_{\rm esc}]^{1/3}k_{\rm B}T_{\rm cj}$. The photomeson production ($p\gamma$ interaction) timescale can be evaluated as \begin{eqnarray} t_{p\gamma}^{-1}=\frac{c}{2\gamma_p^2}\int_{\varepsilon_0}^{\infty}d\varepsilon \sigma_{\pi}(\varepsilon)\xi(\varepsilon)\varepsilon \int_{\varepsilon/2\gamma_p}^{\infty}dx x^{-2} \frac{dn}{dx}, \end{eqnarray} where $\gamma_p=\varepsilon_p/(m_p c^2)$, $\sigma_{\pi}(\varepsilon)$ is the cross section of pion production as a function of photon energy $\varepsilon$ in the proton rest frame, $\xi(\varepsilon)$ is the average fraction of energy lost from a proton to a pion, and $\varepsilon_0=0.15~{\rm GeV}$ is the threshold energy \citep{waxmanbahcall97}. 
In the following discussion, we use the $\Delta$ resonance approximation: $\sigma_{\pi}(\varepsilon)$ is approximated as a function with a peak at $\varepsilon =\varepsilon_{\rm peak}\sim 0.3~{\rm GeV}$, where $\sigma(\varepsilon_{\rm peak})\simeq 5\times 10^{-28}~{\rm cm}^2$ with a width of $\Delta \varepsilon\simeq 0.2~{\rm GeV}$, and $\xi(\varepsilon_{\rm peak})\equiv \xi_{\pi}\simeq 0.2$. Figures 1 and 2 depict the acceleration timescales, the cooling timescales via synchrotron emission and inverse Compton scattering, the decay timescales of a pion and a muon, the timescale of $p\gamma$ interactions, and the dynamical timescale ($\equiv r_d/\beta_- c\Gamma$) in the internal shock occurring inside a star, expected for an ultralong GRB ($L=10^{49}~{\rm erg}~{\rm s}^{-1}$, $\epsilon_B=0.01$, $\Gamma=80$, $\Delta t=10^{-3}~{\rm s}$, $\beta_-=0.5$). We can see that, with the current choice of parameters, the acceleration timescales of a pion and a muon are shorter than their lifetimes over the entire energy range, and that the decay timescale becomes longer than the dynamical timescale above an energy of $\sim {\rm PeV}$ for pions and $\sim 10~{\rm TeV}$ for muons (in the shock rest frame). Note that, since the synchrotron cooling timescale for pions and muons becomes shorter than the acceleration timescale when the energy of particles is larger than $\sim 10~{\rm PeV}$, our formalism is not applicable in the energy range above $\sim 10~{\rm PeV}$. Note also that the efficiencies of pion/muon production would be suppressed in the energy range where the timescale of $p\gamma$ interactions is comparable to ($10^{14}~{\rm eV}\lesssim \varepsilon_i \lesssim 10^{15}~{\rm eV}$) or shorter than the acceleration timescale. In the current work, this effect is not taken into account. Figure 3 depicts the energy spectra of muon neutrinos and electron neutrinos expected from the internal shock inside the progenitor of an ultralong GRB. The flux from reaccelerated pions and muons, Eqs. 
(\ref{piacc}) and (\ref{muacc}), and that from advected pions and muons, Eqs. (\ref{piadv}) and (\ref{muadv}), are also shown (with dotted lines and dashed lines, respectively). We can see that the electron neutrino flux from advected muons drops above an energy of $\sim 100~{\rm TeV}$ in the observer frame ($\sim {\rm TeV}$ in the shock rest frame). This corresponds to the energy at which the muon decay timescale equals the dynamical timescale. Above this energy, only a fraction of the muons can decay into $\nu_e$ within the dynamical timescale \cite{muonadcool}. As for the muon neutrino flux from advected particles, it drops slightly around the energy where the electron neutrino flux drops, because anti-muon neutrinos $\bar{\nu}_{\mu}$ are generated from the decay of muons $\mu^+$, and drops again at the energy where the pion decay timescale equals the dynamical timescale ($\varepsilon_{\pi}\sim 0.1~{\rm PeV}$ for the current parameter set), because muon neutrinos $\nu_{\mu}$ are generated from the decay of pions $\pi^+$. We can interpret this behavior as follows. From Eqs. 
(\ref{piadv}), (\ref{muadv}), (\ref{numu}), (\ref{numubar}) and (\ref{nue}), in the limit of $t_{\rm acc} \ll \tau_{\pi},\tau_{\mu}$ and $t_{\rm dyn} \gg \tau_{\pi},\tau_{\mu}$, the neutrino fluxes from advected particles can be approximated as \begin{eqnarray} \Phi_{\nu_e,{\rm nonacc}}^0(p)&\simeq &V\cdot 4\pi p^2 \xi_{\mu}^{-1}Q_{\pi,0}(p/\xi_{\mu}\xi_{\nu_e}), \label{nuenonacclow}\\ \Phi_{\nu_{\mu},{\rm nonacc}}^0(p)+\Phi_{\bar{\nu}_{\mu},{\rm nonacc}}^0(p)&\simeq& V\cdot 4\pi p^2 \left[ Q_{\pi,0}(p/\xi_{\nu_{\mu}}) + \xi_{\mu}^{-1}Q_{\pi,0}(p/\xi_{\mu}\xi_{\bar{\nu}_{\mu}}) \right], \label{numunonacclow} \end{eqnarray} while in the limit of $t_{\rm acc} \ll \tau_{\pi},\tau_{\mu}$ and $t_{\rm dyn} \ll \tau_{\pi},\tau_{\mu}$ they can be approximated as \begin{eqnarray} \Phi_{\nu_e,{\rm nonacc}}^0(p)&\simeq&V\cdot 4\pi p^2 \xi_{\mu}^{-1}(t_{\rm dyn}/\tau_{\mu})Q_{\pi,0}(p/\xi_{\mu}\xi_{\nu_e}), \label{nuenonacchigh}\\ \Phi_{\nu_{\mu},{\rm nonacc}}^0(p)+\Phi_{{\bar{\nu}_{\mu}},{\rm nonacc}}^0(p)&\simeq & V\cdot 4\pi p^2 t_{\rm dyn}\left[ \tau_{\pi}^{-1}Q_{\pi,0}(p/\xi_{\nu_{\mu}})+(\xi_{\mu}\tau_{\mu})^{-1}Q_{\pi,0}(p/\xi_{\mu}\xi_{\bar{\nu}_{\mu}}) \right], \label{numunonacchigh} \end{eqnarray} where $V$ is the volume of the merged shell making the internal shock. Here we neglect the contribution from pions/muons in the upstream region ($f_{\pi / \mu,-}(x,p)$) because it is subdominant compared to that from the downstream pions/muons. We can easily see that in the latter limit, $t_{\rm dyn} \ll \tau_{\pi}, \tau_{\mu}$, the energy spectra of the neutrino fluxes are softer than $Q_{\pi,0}$ by $p^1$ because the decay timescale $\tau_i$ is proportional to $p$. On the other hand, the neutrino fluxes from reaccelerated pions/muons increase as the acceleration timescale becomes longer. Under the condition $t_{\rm acc} \ll \tau_{\pi}, \tau_{\mu}$, from Eqs. 
(\ref{piacc}), (\ref{muacc}), (\ref{numu}), (\ref{numubar}) and (\ref{nue}), we can approximate the neutrino fluxes from reaccelerated pions/muons as \begin{eqnarray} \Phi_{\nu_e,{\rm acc}}^0(p)&\simeq& S u_+ 4\pi p^2 \xi_{\nu_e}^{-1}f_{\mu,0}(p/\xi_{\nu_e}) \left\{ 1- \exp \left( -\frac{\xi_{\nu_e}r_d/\Gamma}{u_+\tau_{\mu}} \right) \right\}, \\ \Phi_{\nu_{\mu},{\rm acc}}^0(p)+\Phi_{\bar{\nu}_{\mu},{\rm acc}}^0(p)&\simeq& S u_+ 4\pi p^2 \left[ \xi_{\nu_{\mu}}^{-1}f_{\pi,0}(p/\xi_{\nu_{\mu}}) \left\{ 1 - \exp \left( -\frac{\xi_{\nu_{\mu}}r_d/\Gamma}{u_+\tau_{\pi}} \right) \right\} \right. \nonumber \\ &&\left. + \xi_{\bar{\nu}_{\mu}}^{-1} f_{\mu,0}(p/\xi_{\bar{\nu}_{\mu}}) \left\{ 1 -\exp \left( -\frac{\xi_{\bar{\nu}_{\mu}}r_d/\Gamma}{u_+\tau_{\mu}} \right) \right\} \right]. \end{eqnarray} In the high energy limit, where the decay timescales of pions/muons are much longer than the dynamical timescale, each of these neutrino fluxes behaves asymptotically as \begin{eqnarray} \Phi_{\nu_e}^0 &\sim &V\cdot \frac{4\pi p^2 f_{\mu,0}(p/\xi_{\nu_e})}{\tau_{\mu}} \propto p^2 Q_{\pi,0} \frac{t_{\rm acc}^2}{\tau_{\pi}\tau_{\mu}}, \label{nuehelimit}\\ \Phi_{\nu_{\mu}}^0 &\sim &V\cdot \frac{4\pi p^2 f_{\pi,0}(p/\xi_{\nu_{\mu}})}{\tau_{\pi}} \propto p^2 Q_{\pi,0} \frac{t_{\rm acc}}{\tau_{\pi}} \label{numuhelimit}, \\ \Phi_{\bar{\nu}_{\mu}}^0 &\sim &V\cdot \frac{4\pi p^2 f_{\mu,0}(p/\xi_{\bar{ \nu}_{\mu}})}{\tau_{\mu}} \propto p^2 Q_{\pi,0} \frac{t_{\rm acc}^2}{\tau_{\pi}\tau_{\mu}} \label{numubarhelimit}, \end{eqnarray} where we use the definition $t_{\rm acc}=D(p)/u_-^2$ and the approximate expressions, Eqs. (\ref{fpizeroapp}) and (\ref{fmuzeroapp}). Figure 4 depicts the neutrino flavor ratios as functions of energy expected from the internal shock of ultralong GRBs occurring inside progenitors. In addition to the plot for the parameter set used in the previous figures (solid line), we show the ratio in the case with longer acceleration timescale for comparison (dashed line). 
In the usual case, the flavor ratio expected from the photomeson process is $\Phi_{\nu_e}^0:\Phi_{\nu_{\mu}}^0:\Phi_{\nu_{\tau}}^0=1:2:0$, being independent of energy. However, when the decay timescale of a muon becomes longer than the dynamical timescale, the flavor ratio is modified because the decay timescale of a muon is $\sim 100$ times longer than that of a pion and only the $\nu_e$ flux is reduced. On the other hand, the acceleration of pions and muons also modifies the flavor ratio, and dominates the neutrino fluxes when the acceleration timescale becomes comparable to the dynamical timescale. The flavor ratio becomes constant in the high energy limit. We can explain this behavior from Eqs. (\ref{fpizeroapp}), (\ref{fmuzeroapp}), (\ref{nuehelimit}), (\ref{numuhelimit}) and (\ref{numubarhelimit}): the ratio $\Phi_{\nu_{\mu}}^0/\Phi_{\nu_e}^0$ is determined only by the ratio between the acceleration timescale $t_{\rm acc}$ and the decay timescale of a muon $\tau_{\mu}$, which is independent of momentum $p$. More explicitly, when assuming a strong shock ($\sigma=4$, $\gamma=4$) with $Q_{\pi,0}(p)$ being proportional to $p^{-4}$ (i.e., $\alpha=4$), we can describe the flavor ratio at the source in the high energy limit as \begin{eqnarray} \frac{\Phi_{\nu_{\mu}}^0+\Phi_{\bar{\nu}_{\mu}}^0}{\Phi_{\nu_e}^0}&\simeq&\frac{\xi_{\nu_e}^{\alpha-2}\xi_{\mu}^{\alpha-1}\left[ \left( \frac{1}{\xi_{\mu}}+\sigma \right) \left( \frac{1}{\xi_{\pi}}+\sigma \right) \frac{\gamma}{\gamma-\alpha+1}+\frac{1}{\xi_{\mu}\xi_{\pi}^2} \right] }{\xi_{\nu_{\mu}}^{\alpha-1}\left( \frac{1}{\xi_{\pi}}+\sigma \right) }\frac{\tau_{\mu}}{t_{\rm acc}}+1 \nonumber \\ &\simeq& 0.022\frac{\tau_{\mu}}{t_{\rm acc}}+1. \label{ratiosource} \end{eqnarray} This ratio diverges in the limit of $\tau_{\mu}/t_{\rm acc} \rightarrow \infty$, which means that the flavor ratio at the source, $\Phi_{\nu_e}^0:\Phi_{\nu_{\mu}}^0:\Phi_{\nu_{\tau}}^0$ approaches $0:1:0$. 
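The momentum independence of this asymptotic ratio is easy to check numerically. In the sketch below (Python; not from the paper), the acceleration and decay timescales are all taken proportional to $p$, as in the text, the injection spectrum is $Q_{\pi,0}\propto p^{-\alpha}$, and the $\xi$ factors are set to unity for simplicity; the normalizations are placeholder values, not the paper's parameters:

```python
ALPHA = 4.0                            # injection index: Q_pi,0(p) ∝ p^-ALPHA
A_ACC, B_PI, B_MU = 2.0, 5.0, 500.0    # placeholder normalizations (xi factors set to 1)

def t_acc(p):  return A_ACC * p        # acceleration timescale, t_acc = D(p)/u_-^2 with D ∝ p
def tau_pi(p): return B_PI * p         # pion decay timescale (time dilation: tau ∝ p)
def tau_mu(p): return B_MU * p         # muon decay timescale

def Q(p): return p ** -ALPHA

# High-energy asymptotic fluxes from reaccelerated pions/muons:
def phi_nue(p):      # Phi_nue ∝ p^2 Q t_acc^2 / (tau_pi tau_mu)
    return p**2 * Q(p) * t_acc(p)**2 / (tau_pi(p) * tau_mu(p))

def phi_numu(p):     # Phi_numu ∝ p^2 Q t_acc / tau_pi
    return p**2 * Q(p) * t_acc(p) / tau_pi(p)

def phi_numubar(p):  # Phi_numubar ∝ p^2 Q t_acc^2 / (tau_pi tau_mu)
    return p**2 * Q(p) * t_acc(p)**2 / (tau_pi(p) * tau_mu(p))

def ratio(p):
    """(nu_mu + nubar_mu)/nu_e at the source; reduces to tau_mu/t_acc + 1."""
    return (phi_numu(p) + phi_numubar(p)) / phi_nue(p)
```

With every timescale linear in $p$, \texttt{ratio(p)} collapses to $\tau_{\mu}/t_{\rm acc}+1$ (here $250+1$) at any momentum, reproducing the structure of the expression above with the $\xi$-dependent coefficient replaced by unity.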
Interestingly, we may be able to infer the particle-acceleration timescale from the neutrino flavor ratio. By using Eq. (\ref{mixing}), we can evaluate the neutrino flavor ratio that would be observed at the Earth, as shown in Figure 5. Similar to the flavor ratio at the source, the observed ratio is modified above the energy where the decay timescale of a muon becomes longer than the dynamical timescale, and is nearly constant in the high energy range. The flavor transition occurs over $\sim 2$ decades in energy. We can easily show that, in the limit of $\tau_{\mu}/t_{\rm acc} \rightarrow \infty$, the observed flavor ratio $\Phi_{\nu_e}:\Phi_{\nu_{\mu}}:\Phi_{\nu_{\tau}}$ in the high energy range converges to $\simeq 1:1.8:1.8$. This ratio is identical to that shown in \cite{kashtiwaxman05}, in which the effects of synchrotron cooling of pions/muons before their decay on the neutrino flavor ratio were investigated. In their work, as in our current study, the neutrino flavor ratio at the source is $0:1:0$ at high energy, but the reason is different. In the model of \cite{kashtiwaxman05}, since the lifetime of a muon is longer than that of a pion, muons would suffer from synchrotron cooling more than pions. As a result, the flux of electron neutrinos, which are produced from muons, would be suppressed compared to the flux of muon neutrinos, which are produced from pions. Therefore, in the high energy range where the synchrotron cooling timescale is much shorter than the lifetime of a muon, the flavor ratio at the source can be approximated as $\simeq 0:1:0$. In our study, we show that the flavor ratio would also be modified by the secondary-acceleration, because, during the secondary-acceleration, pions decay more than muons, if the acceleration timescale of a pion/muon is shorter than their lifetimes.
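The asymptotic observed ratio $\simeq 1:1.8:1.8$ follows from averaging vacuum oscillations over the propagation distance. A minimal check in Python: here we assume tribimaximal mixing as a stand-in for Eq. (\ref{mixing}) (an assumption of this sketch; the measured mixing angles shift the numbers only slightly):

```python
# Flavor-averaged oscillations: P[a][b] = sum_i |U_ai|^2 |U_bi|^2.
# The |U|^2 values below assume tribimaximal mixing (theta_13 = 0).
U2 = [
    [2/3, 1/3, 0.0],   # nu_e row
    [1/6, 1/3, 1/2],   # nu_mu row
    [1/6, 1/3, 1/2],   # nu_tau row
]

# Averaged transfer matrix between source and Earth flavors
P = [[sum(U2[a][i] * U2[b][i] for i in range(3)) for b in range(3)]
     for a in range(3)]

def observed(source):
    """Flavor content at Earth for a given source ratio (e, mu, tau)."""
    return [sum(P[a][b] * source[b] for b in range(3)) for a in range(3)]

high = observed([0.0, 1.0, 0.0])   # muon-damped / reaccelerated limit, 0:1:0
low  = observed([1.0, 2.0, 0.0])   # standard pion-decay source, 1:2:0
```

Here \texttt{high} evaluates to $2/9:7/18:7/18$, i.e. $1:1.75:1.75\approx 1:1.8:1.8$, while \texttt{low} gives $1:1:1$, reproducing the low- and high-energy limits quoted in the text.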
In contrast to the cooling case, in which the flavor modification is accompanied by spectral softening, the neutrino spectra are flat in the high energy range when pions and muons are reaccelerated. This is because the secondary-acceleration makes the spectra of primary pions/muons harder by $p^1$ [$\propto D(p)$ in Eqs.~(\ref{fpizeroapp}) and (\ref{fmuzeroapp})], while the neutrino spectra are softer than the primary spectra by $p^1$, since the decay timescales of the primary particles are proportional to $p$. As a result, the neutrino spectra are flat, having the same spectral indices as those of the injected primary mesons. Therefore, even when the observed flavor ratio of neutrinos in the high energy range converges to $1:1.8:1.8$, we can discriminate which process modifies the flavor ratio, cooling or secondary-acceleration, by observing their energy spectra. We should note that the flat part of the neutrino spectra would have a cutoff at the energy where the acceleration timescale is equal to the dynamical timescale, because above that energy pions and muons would suffer from adiabatic cooling, which is not included in our formulation (see discussion in Sec. 4). \section{Discussion and Conclusion} We investigate the shock acceleration of pions/muons produced by primary protons that are accelerated at the shock, and its effects on the observed neutrino flavor ratios. We solve the convection-diffusion equation of pions/muons around a shock, taking secondary-acceleration and decay into account, and compute the high energy neutrino spectra from their decay as well as the energy dependence of the neutrino flavor ratio $\Phi_{\nu_e}:\Phi_{\nu_{\mu}}:\Phi_{\nu_{\tau}}$. We find the following: 1. When the acceleration timescale is shorter than the decay timescales of a pion and a muon, pions and muons are accelerated at the shock before they decay.
The resulting distribution function of pions/muons would be divided into two components: the component accelerated at the shock and the component advected to the downstream region after production from protons. The neutrino spectrum of the former component is flat in the high energy range, where the acceleration timescale becomes comparable to the dynamical timescale of the system. 2. The flavor ratio of neutrinos at the source, $\Phi_{\nu_e}^0:\Phi_{\nu_{\mu}}^0:\Phi_{\nu_{\tau}}^0$, would deviate from $1:2:0$, which is expected from photomeson interactions, and approach $0:1:0$ above the energy at which the decay timescale of a muon becomes longer than the dynamical timescale of the shock, because only the $\nu_e$ flux is reduced. The transition width of the observed flavor ratio is $\sim 2$ decades in energy (Fig. 5), which is wider than that in the case of flavor ratio modification by radiative cooling. Although such a flavor ratio modification by adiabatic cooling has been suggested in \cite{kashtiwaxman05}, we investigate it using the convection-diffusion equation for the first time. 3. When the secondary-acceleration is efficient, the neutrino fluxes from shock-reaccelerated pions/muons are dominant over the fluxes from non-reaccelerated pions/muons in the high energy range. In this case the flavor ratio would be asymptotically constant (Fig. 4). This ratio is determined by the ratio of the lifetime of a muon to its acceleration timescale [see Eq.~(\ref{ratiosource})]. Therefore, from the observed flavor ratio, one can constrain the acceleration timescale of cosmic ray particles. 4. The maximum energy of accelerated particles is determined by the condition $t_{\rm acc}=t_{\rm dyn}$, where the energy spectra of neutrinos become flat. As a result, the secondary-accelerated component appears as a flat excess above the non-reaccelerated component at the highest energies. 5.
When the acceleration timescale is shorter than the lifetime of a muon, the flavor ratio at the source approaches $\Phi_{\nu_e}^0:\Phi_{\nu_{\mu}}^0:\Phi_{\nu_{\tau}}^0 \rightarrow 0:1:0$ in the high energy range, and the observed flavor ratio approaches $1:1.8:1.8$. This asymptotic ratio is similar to the case where pions/muons are efficiently cooled via synchrotron emission and/or inverse Compton scattering, but the energy spectra of neutrinos are different: the spectra become flat in the high energy range when the secondary-acceleration is efficient, while they become soft in the high energy range when the synchrotron cooling is efficient. 6. As for the ratio of the $\bar{\nu}_e$ flux to the total $\nu$ flux, when $t_{\rm acc} \ll \tau_{\pi}, \tau_{\mu}$, it is $\sim 1/14$ ($\sim 1/6$) in the low energy range and approaches $\sim 0$ ($\sim 1/9$) in the case of $p\gamma$ ($pp$) interactions. The ratio of the $\bar{\nu}_e$ flux to the total $\nu$ flux in the high energy range can be measured through $\bar{\nu}_e$ interactions at the $6.3~{\rm PeV}$ Glashow resonance. Our formalism presented in Section II can be applied only when the flow speed in the shock rest frame is non-relativistic. When the shock is relativistic, we should use relativistic formulae for shock-acceleration, in which the anisotropies in the angular distribution of accelerated particles are taken into account \cite{achterberg+01}. Recent particle-in-cell (PIC) simulations of relativistic shocks have shown that the efficiency of particle acceleration is controlled by the magnetization, flow velocity, and field direction \cite{sironispitkovsky09, sironispitkovsky11, sironi+13}, and one should take these properties into account when discussing secondary-acceleration in relativistic shocks. These issues are left for future work. In the calculations above, we neglect the radiative cooling of pions and muons during the shock acceleration.
If the energy of pions or muons is higher than $\varepsilon_{i,0}$ in Eqs. (\ref{epizero}) and (\ref{emuzero}), we should consider the synchrotron cooling in deriving the distribution functions of pions and muons. As shown in \cite{kashtiwaxman05}, due to the synchrotron cooling of pions and muons, the observed neutrino flavor ratio, $\Phi_{\nu_e}:\Phi_{\nu_{\mu}}:\Phi_{\nu_{\tau}}$, is modified from $1:1:1$ at low energy to $1:1.8:1.8$ at high energy. In this energy range, the energy spectra of neutrinos are softened. This expectation can be tested by solving the pion/muon transport equations with the energy loss term (e.g. \cite{blasi10}). We also neglected the effect of matter oscillations (the Mikheyev-Smirnov-Wolfenstein effect), which would be important in the case of neutrino emission from the GRB jet inside a star because of the high density \cite{waxmanbahcall97, fraija15, xiaodai15}. These are interesting directions for future work. \ \\ We thank K. Kohri, K. Asano, R. Yamazaki, H. Takami, K. Murase and K. Kashiyama for useful comments. This work is supported by the Grants-in-Aid for Scientific Research No. 26287051, 26287051, 24103006, 24000004 and 26247042 (K.I.). \begin{figure}[htbp] \includegraphics[scale=1.3]{f1.eps} \caption{The acceleration timescale, cooling timescales via synchrotron emission and inverse Compton scattering, and decay timescale of charged pions $\pi^+$ in the internal shock occurring inside a progenitor of an ultralong GRB (measured in the shock rest frame). The photomeson timescale and dynamical timescale are also shown. The parameters used are $L_B=10^{47}~{\rm erg}~{\rm s}^{-1}$, $\Gamma=80$, $\Delta t=10^{-3}~{\rm s}$, $\beta_-=0.5$, and $\epsilon_{B}=0.01$. } \label{f1pi} \end{figure} \begin{figure}[htbp] \includegraphics[scale=1.3]{f2.eps} \caption{The same as in Fig.
1, but for muons $\mu^{\pm}.$ } \label{f1mu} \end{figure} \begin{figure}[htbp] \includegraphics[scale=1.3]{f3.eps} \caption{The energy flux of $\nu_{\mu}+\bar{\nu}_{\mu}$ (red lines) and $\nu_e$ (blue lines) expected from a low-power GRB (where the flavor oscillation during propagation is not taken into account), normalized to the flux of electron neutrinos $E_{\nu}^2\Phi_{\nu_e}$ at low energy. The parameters used are the same as in Fig. 1, and the pion production spectrum is assumed to be $Q_{\pi,0}(p) \propto p^{-\alpha}$ with $\alpha=4$. The muon neutrino flux and the electron neutrino flux are divided into two components: those coming from reaccelerated pions and/or muons (dashed lines) and those coming from the pions and/or muons advected to the downstream region (dotted lines). In the low energy range, the $\nu_e$ flux and $\nu_{\mu}+\bar{\nu}_{\mu}$ flux are dominated by the latter component [$\propto Q_{\pi,0}$, see Eqs. (\ref{nuenonacclow}) and (\ref{numunonacclow})]. The $\nu_e$ and $\nu_{\mu}+\bar{\nu}_{\mu}$ fluxes drop above the energy where the decay timescales of a muon and a pion are equal to the dynamical timescale, being proportional to $\simeq Q_{\pi,0}t_{\rm dyn}/\tau_{\mu}$ and $\simeq Q_{\pi,0}t_{\rm dyn}[1/\tau_{\mu}+c_2/\tau_{\pi}]$, respectively, where $c_2\simeq \xi_{\mu}^{-\alpha+1}(\xi_{\nu_{\mu}}\xi_{\bar{\nu}_{\mu}})^{\alpha}\simeq 10^{-4}$ is a constant [see Eqs. (\ref{nuenonacchigh}) and (\ref{numunonacchigh})]. In the high energy range, the fluxes are dominated by the neutrinos from reaccelerated pions and/or muons: the $\nu_e$ and $\nu_{\mu}+\bar{\nu}_{\mu}$ fluxes are proportional to $\simeq Q_{\pi,0}t_{\rm acc}^2/\tau_{\pi}\tau_{\mu}$ and $\simeq Q_{\pi,0}[t_{\rm acc}^2/\tau_{\pi}\tau_{\mu}+c_1t_{\rm acc}/\tau_{\pi}]$, respectively, where $c_1$ is a constant, the coefficient in front of $\tau_{\mu}/t_{\rm acc}$ in Eq. (\ref{ratiosource}) [see Eqs. (\ref{nuehelimit}), (\ref{numuhelimit}) and (\ref{numubarhelimit})].
The sum of the two components is shown by solid lines. Note that, if the energy is higher than $\sim 2\times 10^{16}~{\rm eV}$ in the observer frame ($\sim 3\times 10^{15}~{\rm eV}$ in the shock rest frame), the spectra would have a cutoff due to the synchrotron cooling of pions and muons (see Figs. 1 and 2), which is not taken into account in the current calculation, and therefore the plots above this energy (shown with thin grey lines) would be suppressed. Note also that, if the energy is higher than a few times $10^{17}~{\rm eV}$ in the observer frame (a few times $10^{16}~{\rm eV}$ in the shock rest frame), where the acceleration timescale is longer than the dynamical timescale, the neutrino spectra would have a cutoff because there would be no accelerated particles generating neutrinos at such energies.} \label{f3} \end{figure} \begin{figure}[htbp] \begin{tabular}{cc} \resizebox{80mm}{!}{\includegraphics{f4.eps}} & \resizebox{80mm}{!}{\includegraphics{f4p.eps}} \\ \end{tabular} \caption{Energy dependence of the flavor ratio of $\nu_{\mu}+\bar{\nu}_{\mu}$ to $\nu_e$ ({\it left}) and of the ratio of $\nu_e$ to the total neutrino flux ({\it right}) at the source for low-power GRBs. The parameter sets $(L_B, \Gamma, \Delta t, \beta_-)$ used are $(10^{47}~{\rm erg}~{\rm s}^{-1}, 80, 10^{-3}~{\rm s}, 0.5)$ (solid line) and $(10^{46}~{\rm erg}~{\rm s}^{-1}, 10^2, 5\times10^{-3}~{\rm s}, 0.5)$ (dashed line). As stated in the caption of Fig.
3, in the higher energy range, where the acceleration timescale is longer than the cooling timescale and/or the dynamical timescale (shown with thin grey lines), the ratios would be modified from those shown in these figures.} \label{f4} \end{figure} \begin{figure}[htbp] \begin{tabular}{cc} \resizebox{80mm}{!}{\includegraphics{f5.eps}} & \resizebox{80mm}{!}{\includegraphics{f5p.eps}} \\ \end{tabular} \caption{Energy dependence of the flavor ratio of $\nu_{\mu}+\bar{\nu}_{\mu}$ to $\nu_e+\bar{\nu}_e$ ({\it left}) and of the ratio of $\nu_e+\bar{\nu}_e$ to the total neutrino flux ({\it right}) observed at the Earth for low-power GRBs. The parameter sets $(L_B, \Gamma, \Delta t, \beta_-)$ used are $(10^{47}~{\rm erg}~{\rm s}^{-1}, 80, 10^{-3}~{\rm s}, 0.5)$ (solid line) and $(10^{46}~{\rm erg}~{\rm s}^{-1}, 10^2, 5\times10^{-3}~{\rm s}, 0.5)$ (dashed line). Flavor oscillation during propagation is taken into account. As in Figs. 3 and 4, in the high energy range (thin grey lines) the ratios would be modified.} \label{f5} \end{figure}
Extensive theoretical \cite{paper1,paper2} and experimental \cite{paper3,paper4,paper5} research has been devoted to one-dimensional (1D) InAs and InSb semiconductor nanowires (NWs) over the past few years. Such NWs, when put in proximity to a superconductor, are expected to host Majorana topological states due to strong spin-orbit coupling. The lack of a Schottky barrier with the metallic contact and the relatively large Land\'{e} factor make the experimental conditions for observing these exotic topological states quite feasible. Indeed, a zero bias peak as a signature of Majorana states was reported in InSb \cite{paper4,paper5} and InAs \cite{paper3} NWs in proximity to superconducting Nb and Al films, respectively. One of the main features in such 1D wires is the existence of ballistic helical states, which are theoretically predicted~\cite{paper6} to exhibit nonmonotonic (up and down) conductance steps of size $G_{0}=e^{2}/h$ as the electron density is varied by the gate voltage. Although many attempts have been made to measure these conductance steps, they have not been observed yet in either InAs or InSb NWs. This indicates that disorder plays an essential role, preventing the motion of the electrons between the contacts from being ballistic. It is well known that in 1D, electron-electron interactions, described by the Luttinger-liquid (LL) model, amplify the role of disorder significantly, causing the conductance to vanish at zero temperature even for very weak disorder \cite{paper8}. Experimentally, however, the effects of the interactions in NWs with strong spin-orbit coupling have not yet been reported. In this letter we report on experimental studies of electronic transport in a disordered InAs NW at low temperatures over a wide range of electron densities.
At very low densities we find the transport to be governed by Coulomb blockade, and at relatively high electron density by sequential tunneling through a series of barriers present in the disordered NW. We demonstrate that in both regimes the conductance is strongly affected by electron-electron interactions. The analysis of the temperature dependence of the conductance and of the line shape of the resonant tunneling in the Coulomb blockade regime within the framework of the existing theories \cite{paper8} allows us to deduce the corresponding LL parameter $g$. We show that in our NWs the effective LL parameter reaches a value less than $1/2$, leading to a \emph{decrease} in the Coulomb blockade peak to valley difference as the temperature is reduced. To the best of our knowledge, this phenomenon, predicted by the LL model, has never been experimentally observed before: While there were a number of experimental papers \cite{Meirav,Moser} in which the Coulomb peaks decreased with decreasing temperature, this behavior was sporadic, namely, it did not occur for consecutive peaks. Thus these previous results do not follow the predictions of the LL theory, but are rather consistent with stochastic Coulomb blockade \cite{Glazman}, while the opposite is true for our results, as we discuss below. InAs NWs approximately 2~$\mu${m} long and 50~nm in diameter were grown by Au-assisted vapor-liquid-solid molecular beam epitaxy on a 2{'}{'} SiO$_2$/Si substrate. A $\sim$1~nm gold layer was evaporated in situ in a chamber attached to the MBE growth chamber after degassing of the substrate at 600~$^{\circ}$C. The substrate was heated to 550~$^{\circ}$C after being transferred to the growth chamber to form gold droplets, then cooled down to the growth temperature of 450~$^{\circ}$C. Indium and As$_{4}$ were evaporated at a V/III ratio of 100.
The NWs were studied by SEM and TEM and were found to have a uniform morphology with no tapering and a pure wurtzite structure with a negligible number of stacking faults \cite{paper9}. The NWs were deposited randomly from an ethanol suspension onto 300 nm thick SiO$_2$\ thermally grown on a p$^{+}$-Si substrate, to be used as a back gate. The NWs were then mapped with respect to alignment marks using optical microscopy. Ti/Al (5~nm/100~nm) contact leads, separated by 650~nm, were deposited on the NWs using electron beam lithography and electron beam evaporation, see Fig.~\ref{fig1}. A short dip in an ammonium polysulfide solution \cite{Ammoniumsolfide} was used for removing the oxide from the InAs NW surface prior to contact deposition. InAs NWs are highly sensitive to surface impurities and other imperfections (such as surface steps and dangling bonds) since their conduction electrons are near the surface. Hence, impurities resulting from sample fabrication and the external environment, as well as the substrate on which the sample is placed, induce disorder potential barriers. \begin{figure}{} \centering \includegraphics[width=0.4\textwidth]{fig112.jpg} \caption{(a) SEM image of a typical sample. The conductance was measured by the four-terminal method, in which the current was passed between two probes (denoted as I$+$ and I$-$), while the voltage was measured between two other probes (V$+$ and V$-$). (b) SEM image of InAs NWs as grown on a SiO$_2$/Si substrate. (c) TEM image of an InAs NW. }\label{fig1} \end{figure} Conductance was measured by a four-terminal method using a low-noise analog lock-in amplifier (EG\&G PR-124A). The current was passed between two probes (I$+$ and I$-$ in Fig.~\ref{fig1}), while the voltage was measured by two different probes (V$+$ and V$-$ in Fig.~\ref{fig1}). It should be noted that I and V are connected to the NW at the same point, so that the contact resistance is always included in the measured conductance.
The measurements were done in a $^{4}$He cryogenic system in a temperature range of 1.7~K--4.2~K. Fig.~\ref{fig2} shows the measured conductance $G$ as a function of gate voltage, $V_{g}$, of an InAs NW over a wide range of gate bias. The as-grown NWs are conducting and the gate voltage bias required to pinch off the conductance is $V_{g}=-0.35$~V. At low values of the gate voltage (left panel) a series of distinct conductance peaks is clearly observed, with a typical spacing of $\delta{V_{g}}\sim$25~mV. At high gate voltage values (exceeding 0.5~V, right panel), the variation of the conductance is much smoother, with an overall tendency to increase with increasing gate voltage. At lower temperatures the conductance peaks become sharper, but the value of the conductance at the peaks is reduced. \begin{figure}{} \centering \includegraphics[width=0.5\textwidth]{fig2_4-eps-converted-to.pdf} \caption{(Color online) Conductance of an InAs NW (diameter: 50~nm; length: 650~nm) as a function of the gate voltage, at $4.2$~K (black curve) and $1.7$~K (blue curve). The first conductance peak is marked by \#1, and the tenth by \#10.}\label{fig2} \end{figure} The behavior at low values of the gate voltage (left panel of Fig.~\ref{fig2}) indicates the occurrence of a Coulomb blockade, similar to quantum dots. The peak spacing in our device is shown in Fig.~\ref{fig3}. It is well known that the conductance peaks in quantum dots are equally spaced if the energy level spacing between single electron states in the dot, $\Delta$, is negligible relative to the charging energy, and could become irregularly spaced in the opposite limit \cite{QDreview}. Since here the distance between peaks varies by over 50\%, the latter case is realized in our sample.
In this regime every peak corresponding to an odd number of electrons in the dot should be separated from the previous one by a roughly constant charging energy $E_{c}$, whereas the next peak should be separated by $E_{c}+\Delta$, and thus vary from level to level. Indeed, we see that every second peak of the first 10 peaks has a gate voltage spacing of 25~mV (with the exception of the distance between the 5th and 6th peaks, which is slightly lower, $\approx$21~mV). This value of $\delta{V_{g}} = 25$~mV should thus correspond to the charging energy, and is related to the gate capacitance $C_{g}$ via $\delta{V_{g}}=e/C_{g}$, yielding a value of $C_{g}=6.4\cdot10^{-18}$~F. \begin{figure}{} \centering \includegraphics[width=0.4\textwidth]{fig3_2-eps-converted-to.pdf} \caption{The peak distance in mV. Its value at peak number n indicates the distance between peak \#n and peak \#n+1, taken at $T=1.7$~K.}\label{fig3} \end{figure} Since the geometry of the sample and its dimensions are known, we can estimate the size $L_{QD}$ of the quantum dot from the expression for the capacitance of a cylinder in the vicinity of a conducting plate, $C_g=2\pi\varepsilon{L_{QD}}/\ln(4d/D_\text{wire})$, where $\varepsilon$ is the dielectric constant, $D_\text{wire}$ is the wire diameter, and $d$ is the SiO$_2$~thickness. Substituting the values of the sample dimensions, the average dielectric constant of $^{4}$He and SiO$_2$~($\varepsilon \approx 2.5\varepsilon_{0}$), and the estimated value of the capacitance gives $L_{QD} \approx 200$~nm. We see that $L_{QD}$ is smaller than the NW length ($L= 2$~$\mu$m) by an order of magnitude, and smaller by more than a factor of 3 than the separation between the voltage leads ($L_{vp} = 650$~nm). Thus it is legitimate to assume that the QD is formed as a puddle of 1D electrons separated by two barriers on both its sides, somewhere in the segment of the NW between the voltage leads.
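The numbers in this estimate can be reproduced in a few lines. The sketch below (Python; the parameter values are the ones quoted in the text) recovers $C_g\simeq6.4\cdot10^{-18}$~F from the peak spacing, and the cylinder-over-plane formula then gives $L_{QD}$ of order $150$--$200$~nm, consistent with the estimate above given the roughness of the capacitance model:

```python
import math

E_CHARGE = 1.602e-19        # elementary charge (C)
EPS0     = 8.854e-12        # vacuum permittivity (F/m)

delta_Vg = 25e-3            # Coulomb-blockade peak spacing (V)
eps      = 2.5 * EPS0       # average dielectric constant of 4He and SiO2
d        = 300e-9           # SiO2 thickness (m)
D_wire   = 50e-9            # NW diameter (m)

# Gate capacitance from the peak spacing, delta_Vg = e/Cg
Cg = E_CHARGE / delta_Vg

# Dot length from the cylinder-over-plane capacitance,
# Cg = 2*pi*eps*L_QD / ln(4*d/D_wire)
L_QD = Cg * math.log(4 * d / D_wire) / (2 * math.pi * eps)
```

The two-orders-of-magnitude hierarchy $L_{QD} < L_{vp} < L$ is what justifies treating the dot as a short puddle between long 1D leads.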
In addition, we see that the segments of the NW to the right and left of the dot are long enough so that their single particle level spacings and charging energies are well below the temperatures reached in our experiment, allowing us to describe them as infinite 1D leads. In such a case it is reasonable to carry out our data analysis in the framework of the theory of tunneling between two 1D NWs through a quantum dot. Resonant and sequential tunneling were well studied theoretically and experimentally in the past for both interacting and non-interacting 1D electrons, see \emph{e.g.}, Ref.~\cite{paper8} and references therein. In our system the peak widths are found to scale linearly with the temperature $T$. We thus try to explain our results using Furusaki's expression \cite{paper10} for the conductance due to sequential electron tunneling in a QD connected to LL leads. The line shape of a single conductance peak as a function of the energy $E$ (the distance from the peak) is then: \begin{equation} G(E,T)=A G_{0}\frac{\gamma(T)}{T \cosh \left( \frac{E}{2k_{B}T} \right)} \left| \Gamma\left(\frac{1}{2g}+i\frac{E}{2\pi{k_{B}}T} \right) \right |^{2}, \label{eq1} \end{equation} where $A$ is a constant related to the asymmetry and height of the barriers defining the dot, the factor $\gamma(T) = T^{1/g-1}$ accounts for the renormalization of the tunneling rates by the LL effects, and $\Gamma(z)$ is the gamma function. Note that the temperature variation of $G$ at the peak is then \begin{equation} G_\text{max}(T) \propto T^{1/g-2}. \label{eqn:gmax} \end{equation} In all the above expressions $g$ is the effective LL interaction parameter; $g=1$ for a non-interacting NW and decreases ($g<1$) with increasing repulsive interactions. It is a combination of the charge and spin interaction parameters, as we discuss below. The experimental data in Fig.~\ref{fig2} show that both the height and the width of the conductance peaks decrease as the temperature is reduced.
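Equation (\ref{eq1}) is straightforward to evaluate numerically. The sketch below (Python, in units with $k_B=1$ and arbitrary $A$, $G_0$; the Lanczos routine for the complex gamma function is our own, not part of the analysis code) makes Eq.~(\ref{eqn:gmax}) explicit: at $E=0$ the $\cosh$ and $|\Gamma|^2$ factors are temperature-independent constants, so $G_\text{max}\propto T^{1/g-2}$:

```python
import cmath, math

# Lanczos approximation to the gamma function for complex argument (g = 7)
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma(z) for complex z via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:                       # reflection formula for the left half-plane
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _LANCZOS[0]
    for i, c in enumerate(_LANCZOS[1:], start=1):
        x += c / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def peak_line_shape(E, T, g, A=1.0, G0=1.0):
    """Eq. (1): sequential-tunneling line shape for a dot with LL leads (k_B = 1)."""
    gamma_T = T ** (1.0 / g - 1.0)         # LL renormalization of the tunneling rates
    z = 0.5 / g + 1j * E / (2.0 * math.pi * T)
    return A * G0 * gamma_T / (T * math.cosh(E / (2.0 * T))) * abs(cgamma(z)) ** 2
```

For $g=0.38$ the exponent $1/g-2\approx0.63$ is positive, so the $E=0$ peak height shrinks as $T$ is lowered, whereas for a non-interacting wire ($g=1$) it would grow as $T^{-1}$.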
Thus the interaction parameter $g$ should be smaller than $1/2$. Our experimental data (Fig.~\ref{fig5}) show that indeed the temperature dependence of the height of each peak can be well described by the power law, Eq.~(\ref{eqn:gmax}), from which we can deduce the value of $g$ for each peak. For the first two peaks we find $g = 0.38\pm0.03$. \begin{figure}{} \centering \includegraphics[width=0.4\textwidth,height=0.25\textheight]{fig5_4-eps-converted-to.pdf} \caption{(Color online) The experimental conductance peak heights as a function of temperature, and fits to Eq.~(\ref{eqn:gmax}), for the first three peaks of Fig.~\ref{fig2}.}\label{fig5} \end{figure} \begin{figure}{} \centering \vspace{1pt} \includegraphics[width=0.4\textwidth]{fig4_7-eps-converted-to.pdf} \caption{(Color online) Fit of the first three peaks in Fig.~\ref{fig2} to Eq.~(\ref{eq1}) at 4.2~K, 3.3~K, and 1.7~K.}\label{fig4} \end{figure} In order to verify that the line shape of the Coulomb blockade peaks as a function of the gate voltage can be described by expression~(\ref{eq1}), we fit the gate voltage dependence of our data to a sum of terms (one for each peak) of the form of Eq.~(\ref{eq1}), with $E=\alpha(V_g-V_0)$, where $V_{0}$ is the gate voltage value at the peak and $\alpha$ is the ratio between the gate capacitance and the total capacitance of the dot. $\alpha$ and the amplitude $A$ are used as fitting parameters, and for $g$ we plug in the value extracted from the data analysis presented in Fig.~\ref{fig5}. The results for the first three peaks are shown in Fig.~\ref{fig4}. We find that, as expected, $\alpha = 0.1\pm0.005$ does not vary between the peaks and/or as a function of temperature, indicating that Eq.~(\ref{eq1}) indeed gives a consistent description of our dataset. We have performed a similar fitting procedure for other peaks, and in Figs.~S1--S2 of the supplementary material \cite{SMArchive} we show the results for peaks \#16--\#18.
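The exponent of Eq.~(\ref{eqn:gmax}) can be inverted for $g$ from peak heights at just two temperatures, which is in essence what the fits in Fig.~\ref{fig5} do with the full data set. A small sketch (Python; the function name and the synthetic numbers are ours, not the actual fitting code), written so that the same routine also covers the $G\propto T^{1/g-1}$ law used below for the high-density regime:

```python
import math

def ll_g_from_two_points(T1, G1, T2, G2, offset):
    """Invert G(T) ∝ T^(1/g - offset) for the LL parameter g.
    offset = 2: Coulomb-blockade peak height, G_max ∝ T^(1/g - 2).
    offset = 1: disordered-LL conductance, G ∝ T^(1/g - 1)."""
    exponent = math.log(G1 / G2) / math.log(T1 / T2)
    return 1.0 / (exponent + offset)

# Synthetic check: data generated with g = 0.38 (peak heights) and g = 0.74 (power law)
g_peak = ll_g_from_two_points(4.2, 4.2 ** (1 / 0.38 - 2),
                              1.7, 1.7 ** (1 / 0.38 - 2), offset=2)
g_wire = ll_g_from_two_points(4.2, 4.2 ** (1 / 0.74 - 1),
                              1.7, 1.7 ** (1 / 0.74 - 1), offset=1)
```

In practice a fit over all measured temperatures, as in Fig.~\ref{fig5}, is of course more robust than this two-point inversion.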
Since the Fermi energy is larger, the value of $g$ is expected to be higher for these peaks, and we indeed find $g\approx 0.5$. We now turn to the analysis of the data at high gate voltages (right panel of Fig.~\ref{fig2}), where the Coulomb blockade oscillations are no longer observed. The conductance in this range of gate voltages also exhibits a power-law decrease as the temperature is decreased. Fig.~\ref{fig6} shows the experimental data for the temperature variation of the conductance at $V_{g}=3.5$~V, together with a power-law fit. One can argue that in this regime the electronic conductance can be described by a model of a LL with strong disorder, in which the conductance is given by \cite{paper8}: \begin{equation} G(T) \propto T^{1/g-1}. \label{eq3} \end{equation} From the fit we find that at high gate voltages $g \approx 0.74$, higher than the values extracted within the Coulomb blockade regime at a much lower gate voltage. It is indeed expected that the interaction constant should increase as the Fermi energy is increased, since $g$ depends on the ratio between the Coulomb energy $U$ and the Fermi energy $E_{F}$ in the NW. Moreover, we might have more than one conducting channel in the NW at such high gate voltage, which could also lead to the increase in the value of $g$. \begin{figure}{} \centering \includegraphics[width=0.4\textwidth,height=0.25\textheight]{fig7-eps-converted-to.pdf} \caption{(Color online) Fit of Eq.~(\ref{eq3}) (solid blue curve) to the conductance vs.\ temperature at $V_{g}=3.5$~V.}\label{fig6} \end{figure} Another interesting feature of the conductance variation at low gate voltage is the non-monotonic variation of the average conductance values: As one can see in Fig.~\ref{fig2}, while the average conductance is generally increasing along almost the entire range of the gate bias, a local minimum around $V_{g}\approx0.2$~V is clearly observed.
The monotonic increase of the average conductance is expected, since the transmission of the potential barriers should increase with energy, and, moreover, electrons start to populate additional bands at higher gate bias. The well-pronounced minimum of the average conductance, however, is surprising. One could speculate that it might be related to the opening of an energy gap in the higher bands due to disorder and strong spin-orbit coupling, as was recently predicted theoretically~\cite{Meng}. However, we cannot verify this scenario quantitatively. As we pointed out earlier, the reduction of the conductance peaks at low temperatures has been observed in previous experiments \cite{Meirav,Moser}, but their results are markedly different from ours. In those experiments the decrease was sporadic, occurring only for non-consecutive peaks, and thus cannot be accounted for by the LL picture, but rather indicates a stochastic Coulomb blockade \cite{Glazman}. In contrast, in our system the peak reduction occurs in a similar way for several consecutive peaks \footnote{At relatively high temperatures the conductance of a few consecutive peaks in \cite{Meirav} does decrease as the temperature is decreased, but the peak to valley difference increases, as opposed to our results, cf. Fig.~S3 of the supplementary material \cite{SMArchive}}. Now we address the question of why in our InAs NWs the effective LL parameter $g$ is smaller than $1/2$ at low filling (so that $G_\text{max}$ \emph{decreases} with decreasing temperature), while other experimental studies of 1D quantum wires, \textit{e.g.}, carbon nanotubes \cite{nanotubeLL}, GaAs wires formed at the cleaved edge overgrowth of a GaAs/AlGaAs heterostructure \cite{Yacobi}, or at the bottom of a V-grooved GaAs growth \cite{Vgroove}, all exhibit effective LL parameters higher than $1/2$ (so that $G_\text{max}$ \emph{increases} with decreasing temperature).
We believe that there are two main reasons which contribute to the lower value of the LL parameter in our InAs NWs. The first is related to the environment of the quantum wires. Both types of GaAs wires reported in the literature \cite{Yacobi,Vgroove} were created within 2DEG structures embedded well inside a semiconductor material with a large dielectric constant (AlGaAs and GaAs). In contrast, while the InAs NWs have a similar dielectric constant to GaAs, they are placed on a SiO$_2$ surface, so the surrounding materials, namely, $^4$He and SiO$_2$, possess much smaller dielectric constants. These reduced dielectric constants enhance the effect of the Coulomb interaction in our system as compared to the GaAs wires reported before. Carbon nanotubes could have an even lower dielectric constant (especially when suspended), but, as we now discuss, their high channel symmetry is responsible for enhancing the effective $g$. The second reason for observing a smaller LL parameter in our system is related to an inherent property of InAs, which possesses strong spin-orbit coupling that breaks the spin rotation symmetry. It should be noted that the effective LL parameter $g$ is related to the interaction parameters in the charge and spin channels ($g_c$ and $g_s$, respectively) by \cite{paper10} \begin{equation} \frac{1}{g}=\frac{1}{2g_{c}}+\frac{1}{2g_{s}}. \label{eq5} \end{equation} In GaAs spin-orbit coupling is very small, therefore spin-rotation symmetry dictates that $g_s=1$. Thus, $g<1/2$ can only be obtained if the interaction in the charge sector is extremely strong, $g_c<1/4$. In carbon nanotubes the additional valley degeneracy results in $1/g = 1/(4g_c)+3/4$, so $g<1/2$ results in an even stricter condition, $g_c<1/5$. On the other hand, in InAs spin rotation symmetry is broken, allowing for $g_s<1$ and making it easier to reach $g<1/2$.
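As a quick numerical illustration of Eq.~(\ref{eq5}) and the nanotube relation above, the short sketch below (with illustrative values of $g_c$ and $g_s$ that are not fitted to our data) shows how breaking spin-rotation symmetry lowers the effective $g$ at fixed charge-sector interaction:

```python
# Channel decomposition of the effective LL parameter:
#   1/g = 1/(2*g_c) + 1/(2*g_s)          (Eq. 5)
#   1/g = 1/(4*g_c) + 3/4                (carbon-nanotube analogue)
# The numeric values of g_c and g_s below are assumptions for
# illustration only, not parameters extracted from the measurements.

def g_effective(g_c, g_s):
    """Effective LL parameter from the charge and spin channels."""
    return 1.0 / (0.5 / g_c + 0.5 / g_s)

def g_nanotube(g_c):
    """Nanotube analogue with three symmetry-fixed neutral channels."""
    return 1.0 / (0.25 / g_c + 0.75)

# At a fixed charge-sector interaction g_c = 0.4:
g_symmetric = g_effective(0.4, 1.0)  # spin-rotation symmetry, g_s = 1
g_broken = g_effective(0.4, 0.6)     # broken symmetry, g_s < 1
print(g_symmetric, g_broken)         # ~0.571 vs ~0.480
```

With the same $g_c$, the symmetric case stays above $1/2$ while the broken-symmetry case drops below it, mirroring the argument in the text.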
Therefore we believe that the combination of a lack of orbital degeneracy due to strong spin-orbit coupling and of the low effective dielectric constant makes our InAs NW a unique system where strong effective interactions, $g<1/2$, can be achieved, and thus a decrease in Coulomb blockade peak heights with decreasing temperature can be observed. Finally, our experiment concentrated on the sequential tunneling regime; at lower temperatures we expect resonant tunneling to become dominant near the Coulomb blockade peaks, and the Kondo effect to show up in odd charge valleys, as recently found in measurements carried out on short InAs NWs (where LL effects are absent) \cite{Kretinin}. \begin{acknowledgments} We are thankful to Ronit Popotitz-Biro for professional TEM study of the InAs NWs. We gratefully acknowledge support by the ISF BIKURA program and GIF (MG); ISF and Marie Curie CIG grants (ES); as well as ISF grant \#532/12 and IMOST grants \#3-11173 (AP and HS) \& \#3-8668 (HS). \end{acknowledgments}
\section{Introduction} \IEEEPARstart{P}{atch} based image denoising in conjunction with the notion of non-locality has led to several state-of-the-art algorithms \cite{buades2005non,dabov2007image, elad2006image,mairal2009non,Dongetal_SAIST_6319405}. These algorithms exploit certain prior knowledge like image redundancy and sparse representation of images in suitable transform domains. These assumptions are found to be generally true in the case of natural images and greatly improve the denoising results. \par In addition to sparsity and redundancy priors, a low rank assumption has also been proposed recently in \cite{Dongetal_SAIST_6319405,GuWNNM6909762}. According to the low rank prior, a matrix constructed by stacking non-local similar patches from a given noisy image resides in a low dimensional subspace of a high dimensional space and therefore satisfies the low rank criterion. Dong \textit{et al}. \cite{Dongetal_SAIST_6319405} introduced an iterative soft thresholding scheme (SAIST) using $L_{1,2}$ grouped sparsity regularization to penalize the singular values of the low rank matrices. More recently, Gu \textit{et al}. \cite{GuWNNM6909762} have further explored the low rank prior using weighted nuclear norm minimization (WNNM). This algorithm exploits the physical significance of the singular values and treats them differently according to their respective magnitudes. Both low rank minimization algorithms \cite{Dongetal_SAIST_6319405,GuWNNM6909762} have produced outstanding denoising results by exploiting the low rank prior, confirming its validity as a suitable assumption for the image restoration problem. \par In this paper, we investigate the WNNM algorithm for its noise estimation scheme.
The motivation to consider the noise estimation relies on the fact that the residual noise is one of the key factors for the subsequent computation of singular values and corresponding threshold weights, which are the indispensable ingredients of low rank minimization schemes, in particular of the WNNM algorithm. The WNNM algorithm estimates residual noise by comparing the grouped patch matrices in a given noisy image with the corresponding patch matrices in its denoised version obtained in the previous iteration. In practice, this difference is intuitively assumed to be the noise. However, this assumption may not be generally true, in particular for images with complex geometric structures \cite{Liu_GNOISE_EST_6466947}. \par In order to obtain a more reliable estimate of the residual noise, we note that the geometric structure of the image should also be taken into account. To the best of our knowledge, this aspect of residual noise estimation for the WNNM algorithm has remained unnoticed up to now. By adding the proposed contribution to the existing residual noise estimation, we have obtained remarkable improvements in the denoising results of the original WNNM algorithm for most of the test images. In particular, for moderate and high levels of noise these improvements are quite significant. \par We also propose to reinforce edges and texture during iterative denoising by splitting the singular value decomposition (SVD) into low and high components. Subsequently, two denoised images are obtained using the low and high components of the SVD, respectively. The difference of these two images is used as feedback to reinforce edges and textural regions during the iterative denoising process. This modification further enhances the denoising capability of the proposed algorithm. \par The rest of the paper is organized as follows. We briefly describe the low rank minimization algorithm for image denoising in Sec.~\ref{sec:WNNM_2}.
The proposed algorithm and its implementation are presented in Sec.~\ref{sec:PoposedAlgo_3}. Experimental settings and results are described in Sec.~\ref{sec:results4}. Finally, conclusions are drawn in Sec.~\ref{sec:Conc5}. \section{Low Rank Minimization for image denoising}\label{sec:WNNM_2} We consider the well-known image denoising problem \begin{equation}\label{eq:noisyimage} {\bf y}={\bf x}+{\bf n}, \end{equation} where ${\bf x}$ is the original image to be recovered from the noisy observed image ${\bf y}$ and ${\bf n}\sim \mathcal{N}\left(0,\sigma_n^2\mathbb{I}\right)$ is an additive white Gaussian noise with known standard deviation $\sigma_n$. In the patch based formulation, non-local patches similar to the local patch ${\bf p_{y_j}}$ are searched for at each pixel location $j$ in the observed image. Afterwards, the patches similar to a patch ${\bf p_{y_j}}$ can be stacked to construct a matrix $\bf M_{y_j}$. Using the patch based formulation, the original problem (\ref{eq:noisyimage}) can be expressed as \cite{GuWNNM6909762} \begin{equation}\label{eq:MatrixFormofProblem} \bf M_{y_j}=\bf M_{x_j}+\bf M_{n_j}, \end{equation} where ${\bf M_{x_j}}$ and ${\bf M_{n_j}}$ denote the corresponding patch matrices of the original image and the additive noise, respectively. \par The patch matrix ${\bf M_{x_j}}$ defined in (\ref{eq:MatrixFormofProblem}) is intuitively assumed to be a low rank matrix because of the image redundancy and similarity prior \cite{Dongetal_SAIST_6319405}. Thus, a noise free patch can be recovered by using low rank minimization. However, low rank minimization is a non-convex NP-hard problem \cite{CaiCandes_SVDThr} \cite{Candes:2011:RPC:1970392.1970395}.
Therefore, nuclear norm minimization (NNM) has been heuristically proposed as its convex regularization which can be written as \cite{GuWNNM6909762,CaiCandes_SVDThr} \begin{equation}\label{eq:NNM_form} {\bf \hat{M}_{x_j}}=\arg \min_{\bf {M}_{x_j}} \frac{1}{\sigma_n^2} \left\|\bf {{M}_{y_j}}-\bf{{M}_{x_j}}\right\|_F^2 +\left\|\bf{{M}_{x_j}}\right\|_{*}, \end{equation} where $\left\|.\right\|_{*}=\sum_i\left|\lambda_i\left(.\right)\right|$ is the nuclear norm defined using the singular values $\lambda_i$ of the matrix $\bf{{M}_{x_j}}$ and $\left\|.\right\|_F$ denotes Frobenius norm. Convexity of the reformulated problem (\ref{eq:NNM_form}) assures the global optimal solution through singular value decomposition \cite{CaiCandes_SVDThr} \begin{eqnarray}\label{eq:NNM_solution} \nonumber U\Lambda V^T&=&svd\left({\bf \hat{M}_{x_j}}\right),\\ \Lambda_\theta&=&S_\theta\left(\Lambda\right)=\max(\lambda_i-\theta,0), \end{eqnarray} where $\theta$ denotes a soft threshold applied to all the singular values $\lambda_i$ of the matrix $\bf{{M}_{x_j}}$ without considering the importance of the large and small singular values. \par To consider physical significance of singular values, Gu \textit{et al}. \cite{GuWNNM6909762} proposed weighted nuclear norm minimization (WNNM) algorithm which replaces $\left\|.\right\|_{*}$ by $\left\|.\right\|_{w,*}$ in (\ref{eq:NNM_form}) where $\left\|.\right\|_{w,*}=\sum_i\left|w_i\lambda_i\left(.\right)\right|$ is the weighted nuclear norm and the weights $w_i$ are defined as \cite{GuWNNM6909762} \begin{equation}\label{eq:weightedthreshold} w_i=\frac{c\sqrt{m}}{\left(\lambda_i+\epsilon\right)}, \end{equation} where $c>0$ is a constant and $m$ is the number of non-local patches similar to the given patch ${\bf p_{y_j}}$. To avoid possible division by zero, we set the constant $\epsilon=10^{-16}$. 
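To make the shrinkage rule concrete, the following numpy sketch applies the soft threshold $S_\theta$ of (\ref{eq:NNM_solution}) with the per-value weights of (\ref{eq:weightedthreshold}) substituted for the scalar $\theta$; the grouped-patch matrix here is a random stand-in, and the constant $c$ is an illustrative choice:

```python
import numpy as np

# Sketch of weighted singular-value soft-thresholding. Each singular
# value lambda_i is shrunk by its own weight w_i = c*sqrt(m)/(lambda_i+eps),
# so large (signal-bearing) values are penalized less than small
# (noise-dominated) ones. M is a random stand-in for a grouped-patch
# matrix; c = 2.0 is an assumed, illustrative constant.

def weighted_sv_shrink(M, c=2.0, eps=1e-16):
    m = M.shape[0]                       # number of stacked similar patches
    U, lam, Vt = np.linalg.svd(M, full_matrices=False)
    w = c * np.sqrt(m) / (lam + eps)     # data-dependent weights
    lam_shrunk = np.maximum(lam - w, 0)  # per-value soft threshold
    return U @ np.diag(lam_shrunk) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 16))        # stand-in grouped-patch matrix
M_hat = weighted_sv_shrink(M)            # low-rank shrunken estimate
```

Because small singular values fall below their (large) weights, they are set exactly to zero, so the output has strictly lower rank than the input.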
Note that in this case, the solution is similar to (\ref{eq:NNM_solution}) with the exception that $\Lambda_\theta$ is now written as \begin{equation}\label{eq:WNNM_solution} \Lambda_{\bf w}\equiv S_{\bf w}\left(\Lambda\right)=\max\left(\lambda_i-w_i,0\right). \end{equation} It can be observed from (\ref{eq:weightedthreshold}) and (\ref{eq:WNNM_solution}) that the larger singular values are less penalized as compared to the smaller ones, in accordance with the basic requirement of image denoising. However, the global optimal solution is not guaranteed because WNNM becomes a non-convex problem due to the non-descending threshold weights $w_i$ \cite{GuWNNM6909762}. Nevertheless, Gu \textit{et al}. \cite{GuWNNM6909762} have shown that, iteratively, this optimization scheme reaches a fixed point corresponding to a local minimum. \section{Proposed Algorithm}\label{sec:PoposedAlgo_3} In the following, we describe how the existing residual noise estimation used in \cite{Dongetal_SAIST_6319405,GuWNNM6909762} can be further enhanced by the proposed modification. \subsection{Proposed Noise Estimation Method} Due to the pivotal role of residual noise, it is highly desirable to quantify the amount of noise left in the current denoised image for further processing in the next iteration. This remaining noise is termed residual noise in this paper. The variance of the residual noise for the next iteration $(k+1)$ can be defined as the difference between the variance of the initially given white Gaussian noise $\sigma_n^2$ and that of the filtered noise $\left(\sigma_{flt}^{(k)}\right)^2$ at the previous iteration $k$. In the WNNM algorithm, the variance of the filtered noise is estimated by comparing the grouped patch matrix at location $j$ in the given noisy image ${\bf y}$ with the corresponding matrix in its previously denoised version ${\bf y}^{(k)}$.
For simplicity, the variance of the filtered noise can be expressed at the image level as \cite{Dongetal_SAIST_6319405} \begin{equation}\label{eq:varianceFilteredNoise} \left(\sigma_{flt}^{(k)}\right)^2= \left\|{\bf y}-{\bf y}^{(k)}\right\|_{l_2}. \end{equation} By using (\ref{eq:varianceFilteredNoise}), Dong \textit{et al}. \cite{Dongetal_SAIST_6319405} proposed an intuitive estimate for the standard deviation of the residual noise as \begin{equation}\label{eq:residualNoise_WNNM} \sigma_{res}^{(k+1)}=\gamma\sqrt{\sigma_n^2- \left(\sigma_{flt}^{(k)}\right)^2}, \end{equation} where $\sigma_{res}^{(k+1)}$ denotes the estimated standard deviation of the residual noise present at the $(k+1)^{th}$ iteration. The constant $\gamma>0$ is a scaling factor, heuristically introduced to control the re-estimation of the standard deviation \cite{Dongetal_SAIST_6319405}. Using this noise estimate, the singular values of the local patch matrix ${\bf M_{y_j}}^{(k+1)}$ are adjusted as \cite{GuWNNM6909762} \begin{equation}\label{eq:EstimatedSingularValues} \lambda_i\left(\!{\bf M_{y_j}}^{\!(k+1)}\!\right)= \sqrt{\max\left(\lambda_i^2\left({\bf M_{y_j}}^{\!(k)}\right)\!-m\left(\sigma_{res}^{\!(k+1)}\right)^2,0\right)}. \end{equation} It is important to note that the noise estimation approach (\ref{eq:residualNoise_WNNM}) utilizes the $l_2$ norm (\ref{eq:varianceFilteredNoise}), which is sensitive to outliers, more specifically in the presence of severe noise. More importantly, the difference (\ref{eq:varianceFilteredNoise}) between a given noisy image and its filtered version is intuitively taken as noise. However, this assumption may not be generally true, in particular for images with complex geometric structures \cite{Liu_GNOISE_EST_6466947}. In fact, the filtered noise also contains some geometric structure which is lost during denoising.
Thus, the above expression of residual noise (\ref{eq:residualNoise_WNNM}) needs certain modification to account for the geometric details of the previously denoised image for residual noise estimation. In order to identify the image structure in terms of patches, Zhu and Milanfar \cite{ZHU_MILANFAR_WEAKTEXTURE5484579} proposed the diagonalization of gradient covariance matrix $C$ for each patch ${\bf p_{y_j}}$. The covariance matrix $C$ is defined as \cite{ZHU_MILANFAR_WEAKTEXTURE5484579} \begin{equation}\label{eq:GradientCovarianceMat} C=G^T G=\sum_{l\in {\bf p_{y_j}}}\left(\begin{array}{cc} g_x^2(l) & g_x(l)g_y(l) \\ g_y(l)g_x(l) & g_y^2(l) \end{array}\right), \end{equation} where $[g_x(l),g_y(l)]^T$ denotes the gradient vector at location $l$ within the patch ${\bf p_{y_j}}$. The large and small eigenvalues of the covariance matrix $C$ represent the directions of maximum and minimum variations in the image structure, respectively. Yan-Li \textit{et al}. \cite{Liu_GNOISE_EST_6466947} utilized this concept of gradient covariance matrices to identify the weak textured patches for the estimation of the noise variance in a given noisy image. The standard deviation, $\sigma_{geom}^{(k)}$, of the noise is subsequently estimated by applying PCA to these weak textured patches iteratively. We propose to consider this geometric contribution in residual noise estimation as \begin{equation}\label{eq:residualNoise_global} \sigma_{res}^{'(k+1)}=\gamma\sqrt{\sigma_n^2- \left(\sigma_{geom}^{(k)}\right)^2}. 
\end{equation} Finally, the modified noise estimate is expressed as a convex combination of geometric (\ref{eq:residualNoise_global}) and non-geometric contributions (\ref{eq:residualNoise_WNNM}) \begin{equation}\label{eq:combinedNoiseEstimate} \hat{\sigma}_{res}^{(k+1)}= \alpha\sigma_{res}^{(k+1)}+(1-\alpha)\sigma_{res}^{'(k+1)}, \end{equation} \begin{table*}[ht!b] \centering \caption{Denoising results (PSNR) of various state-of-the-art methods} \renewcommand{\arraystretch}{1.3} {\scriptsize \begin{tabular}{!{\vrule width 1pt}c!{\vrule width 1pt}c|c|c|c|c!{\vrule width 1pt}c|c|c|c|c!{\vrule width 1pt}c|}\noalign{\hrule height 1pt} &\multicolumn{5}{c|}{${\bf \sigma_n=10}$}& \multicolumn{5}{|c!{\vrule width 1pt}}{${\bf\sigma_n=30}$}\\ \noalign{\hrule height 1pt} {\bf Image}&{\bf BM3D}&{\bf LSSC}&{\bf SAIST}& {\bf WNNM}&{\bf GWNNM}&{\bf BM3D}&{\bf LSSC}&{\bf SAIST}&{\bf WNNM}&{\bf GWNNM}\\\noalign{\hrule height 1pt} C. Man&34.18&34.24&34.30&34.44&\textbf{34.45}&28.64&28.63&28.36&\textbf{28.80}&\textbf{28.80}\\\hline House&36.71&36.95&36.66&36.95&\textbf{36.98}&32.09&32.41&32.30&35.52&\textbf{35.57}\\\hline Peppers&34.68&34.80&34.82&34.95&\textbf{34.97}&29.28&29.25&29.24&29.49&\textbf{29.51}\\\hline Monarch&34.12&34.44&34.76&\textbf{35.03}&\textbf{35.03}&28.36&28.20&28.65&28.92&\textbf{28.93}\\\hline J.Bean&37.91&38.69&38.37&38.93&\textbf{38.97}&31.97&32.39&32.14&32.46&\textbf{32.56}\\\hline Lena&35.93&35.83&35.90&36.03&\textbf{36.04}&31.26&31.18&31.27&31.43&\textbf{31.50}\\\hline Barbara&34.98&34.98&35.24&\textbf{35.51}&35.49&29.81&29.60&30.14&30.31&\textbf{30.43}\\\hline F.print&32.46&32.57&32.69&\textbf{32.82}&\textbf{32.82}&26.83&26.68&26.95&26.99&\textbf{27.10}\\\hline Boat&33.92&34.01&33.91&\textbf{34.09}&34.08&29.12&29.06&28.98&\textbf{29.24}&29.21\\\hline Hill&33.62&33.66&33.65&\textbf{33.79}&\textbf{33.79}&29.16&29.09&29.06&\textbf{29.25}&29.22\\\hline Man&33.98&34.10&34.12&34.23&\textbf{34.25}&28.86&28.87&28.81&\textbf{29.00}&28.99\\ \noalign{\hrule height
1pt} &\multicolumn{5}{c|}{${\bf \sigma_n=50}$}& \multicolumn{5}{|c!{\vrule width 1pt}}{${\bf\sigma_n=100}$}\\ \noalign{\hrule height 1pt} {\bf Image}&{\bf BM3D}&{\bf LSSC}&{\bf SAIST}& {\bf WNNM}&{\bf GWNNM}&{\bf BM3D}&{\bf LSSC}&{\bf SAIST}&{\bf WNNM}&{\bf GWNNM}\\\noalign{\hrule height 1pt} C. Man&26.12&26.35&26.15&\textbf{26.42}&\textbf{26.43}&23.07&23.15&23.09&23.36&\textbf{23.39}\\\hline House&29.69&29.99&30.17&30.32&\textbf{30.42}&25.87&25.71&26.53&26.68&\textbf{26.82}\\\hline Peppers&26.68&26.79&26.73&26.91&\textbf{26.93}&23.39&23.20&23.32&23.46&\textbf{23.56}\\\hline Monarch&25.82&25.88&26.10&\textbf{26.32}&26.31&22.52&22.24&22.61&22.95&\textbf{22.99}\\\hline J.Bean&29.26&29.42&29.32&29.62& \textbf{29.71}&25.80&25.64&25.82&26.04&\textbf{26.15}\\\hline Lena&29.05& 28.95&29.01&29.24&\textbf{29.34}&25.95&25.96&25.93&26.20&\textbf{26.34}\\\hline Barbara&27.23&27.03&27.51&27.79&\textbf{27.93}&23.62&23.54&24.07&24.37&\textbf{24.49}\\\hline F.print&24.53&24.26&24.52&24.67&\textbf{24.76}&21.61&21.30&21.62&21.81&\textbf{21.87}\\\hline Boat&26.78& 26.77&26.63&\textbf{26.97}&26.93&23.97&23.87&23.80&24.10& \textbf{24.15}\\\hline Hill&27.19&27.14&27.04&\textbf{27.34}&27.31&24.58&24.47&24.29&\textbf{24.75}&24.71\\\hline Man&26.81&26.72&26.68&\textbf{26.94}&26.92&24.22&23.98&24.01&\textbf{24.36}&\textbf{24.36}\\ \noalign{\hrule height 1pt} \end{tabular}}\label{tb:ComparisonPSNR} \end{table*} where $\alpha\in(0,1)$ is tuned experimentally according to the initially given standard deviation $\hat{\sigma}_{res}^{(0)}=\sigma_n$. For low levels of initial noise $(\sigma_n<30)$, we choose $\alpha=0.9$ and for moderate and severe noise levels $(\sigma_n\ge30)$, we set $\alpha=0.8$. 
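The combined noise update of (\ref{eq:combinedNoiseEstimate}) can be sketched in a few lines; here $\sigma_{flt}$ and $\sigma_{geom}$ are illustrative placeholder values rather than quantities estimated from an image, and the selection of $\alpha$ follows the rule stated above:

```python
import numpy as np

# Hedged sketch of the combined residual-noise estimate. sigma_flt is
# the difference-based filtered-noise level and sigma_geom stands in for
# the PCA-based estimate from weak-textured patches; both numeric values
# below are illustrative placeholders, not measured quantities.

def residual_noise(sigma_n, sigma_flt, sigma_geom, gamma=1.0):
    # Clamp at zero in case an estimate overshoots sigma_n (a safeguard
    # assumption, mirroring the max(...) used for the singular values).
    s_base = gamma * np.sqrt(max(sigma_n**2 - sigma_flt**2, 0.0))
    s_geom = gamma * np.sqrt(max(sigma_n**2 - sigma_geom**2, 0.0))
    alpha = 0.9 if sigma_n < 30 else 0.8   # selection rule from the text
    return alpha * s_base + (1.0 - alpha) * s_geom

sigma_hat = residual_noise(sigma_n=50.0, sigma_flt=40.0, sigma_geom=35.0)
```

The resulting $\hat{\sigma}_{res}$ then replaces $\sigma_{res}$ when adjusting the singular values via (\ref{eq:EstimatedSingularValues}).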
\begin{figure}[b] \centering \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure1.pdf}} \label{fig:f1_house_sn50}~ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure2.pdf}} \label{fig:f2_houseBm3d_sn50}~ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure3.pdf}} \label{fig:f3_houseWNNM_sn50}~ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure4.pdf}} \label{fig:f4_houseGWNNM_sn50}\\ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure5.pdf}} \label{fig:f5_barbara_sn50}~ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure6.pdf}} \label{fig:f6_barbaraBm3d_sn50}~ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure7.pdf}} \label{fig:f7_barbaraWNNM_sn50}~ \subfloat[][] {\includegraphics[width=0.8in,height=0.8in]{figure8.pdf}} \label{fig:f8_barbaraGWNNM_sn50} \caption{Columns from left to right show the noisy ($\sigma_n=50$) and denoised images using BM3D, WNNM and GWNNM for house and Barbara images, respectively. }\label{fig:house_barbara_n50} \end{figure} The relatively smaller value of $\alpha$ for $\sigma_n\ge30$ indicates that the previous noise estimation (\ref{eq:residualNoise_WNNM}) is significantly affected due to higher levels of given noise and is subjected to larger correction from the geometric counterpart. Fine tuning of $\alpha$ is also possible according to different noise levels. However, our primary objective is to highlight the importance of the proposed modification. Subsequently, the singular values can now be adjusted by plugging (\ref{eq:combinedNoiseEstimate}) into (\ref{eq:EstimatedSingularValues}). \subsection{Edge and Texture Enhancement} Due to thresholding (\ref{eq:WNNM_solution}), some of the smaller singular values may eventually reduce to zero or very close to zero. However, we propose to utilize these values in the feedback step for edge and texture reinforcement as follows. 
We identify large and small singular values by setting a threshold $\tau$ and then split the truncated matrix $\Lambda_{\bf w}$ as \begin{equation}\label{eq:splitLambda} \Lambda_{\bf w}=\Lambda_{\bf w}^{high}+\Lambda_{\bf w}^{low}, \end{equation} where $\Lambda_{\bf w}^{high}$ and $\Lambda_{\bf w}^{low}$ contain only the large and small singular values of ${\bf M_{y_j}}$, respectively. The optimal solution (\ref{eq:WNNM_solution}) can then be re-written as \begin{eqnarray} {\bf M_{y_j}}&=& U\left(\Lambda_{\bf w}^{high}+\Lambda_{\bf w}^{low}\right)V^T ={\bf M_{y_j}}^{high}+{\bf M_{y_j}}^{low}.\label{eq:splitPatchMatrixM} \end{eqnarray} As a result of this splitting, we construct two corresponding images ${\bf {y}}^{high}$ and ${\bf {y}}^{low}$ which contain the high and low energy components, respectively. By exploiting the intrinsic nature of these two images, the difference image ${\bf {y}}^{high}-{\bf {y}}^{low}$ can be associated with sharpness and contrast enhancement. The noise level in these two images is also gradually reduced from iteration to iteration. Thus, the edges and texture may become clearer in the difference image at each iteration. Therefore, we intuitively propose to use this difference in the feedback step to preserve and enhance the geometric details of the image during iterative denoising. Experimental results confirm the effectiveness of this modification as well. \par The summary of the proposed algorithm is provided in Algorithm \ref{alg:algo1}.
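The split in (\ref{eq:splitLambda}) and (\ref{eq:splitPatchMatrixM}) amounts to partitioning the singular-value spectrum at $\tau$; a minimal numpy sketch, with an illustrative $\tau$ and a random stand-in matrix, is:

```python
import numpy as np

# Sketch of the spectrum split: singular values above tau go into the
# high-energy part, the rest into the low-energy part, and each part
# reconstructs its own matrix. tau and M are illustrative choices.

def split_reconstruct(M, tau=0.5):
    U, lam, Vt = np.linalg.svd(M, full_matrices=False)
    high = np.where(lam > tau, lam, 0.0)   # large singular values only
    low = lam - high                       # the remaining small ones
    M_high = U @ np.diag(high) @ Vt
    M_low = U @ np.diag(low) @ Vt
    return M_high, M_low

rng = np.random.default_rng(1)
M = rng.standard_normal((12, 8))
M_high, M_low = split_reconstruct(M, tau=2.0)
# The two parts sum back to the original matrix exactly, since every
# singular value is assigned to exactly one of them.
```

In the algorithm the same split is applied to the already-thresholded spectrum $\Lambda_{\bf w}$, and the two reconstructions are aggregated into the feedback images.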
\begin{algorithm} \caption{Image Denoising Using GWNNM} \begin{algorithmic}[1]\label{alg:algo1} \REQUIRE {Noisy image ${\bf y}$} \STATE{Initialize ${\bf y}^{(1)}={\bf y}$, ${\bf y}_{low}^{(1)}={\bf y}$, ${\bf y}_{high}^{(1)}={\bf y}$} \FOR{k=1:L-1} \STATE{$ {\bf y}^{(k+1)}= {\bf y}^{(k)} +\delta\left({\bf y}-{\bf y}^{(k)}\right) +\eta\left({\bf y}_{high}^{(k)}-{\bf y}_{low}^{(k)}\right) $} \FOR{each patch ${\bf p_{y_j}}\in {\bf y}^{(k+1)}$} \STATE{Construct patch matrix ${\bf M_{y_j}}$ using (\ref{eq:MatrixFormofProblem})} \STATE{Estimate residual noise $\hat{\sigma}_{res}^{(k+1)}$ using (\ref{eq:combinedNoiseEstimate})} \STATE{Compute $\left[ U, \Lambda, V\right]= SVD\left({\bf M_{y_j}}\right)$} \STATE{Compute threshold weight vector ${\bf w}$ using (\ref{eq:weightedthreshold})} \STATE{Obtain truncated matrix $\Lambda_{\bf w}$ using (\ref{eq:WNNM_solution})} \STATE{Split $\Lambda_{\bf w}=\Lambda_{\bf w}^{low}+\Lambda_{\bf w}^{high}$ using threshold $\tau$ and (\ref{eq:splitLambda})} \STATE{Obtain the estimates ${\bf M_{y_j}}^{low}$ and ${\bf M_{y_j}}^{high}$ using (\ref{eq:splitPatchMatrixM})} \ENDFOR \STATE{Aggregate ${\bf M_{y_j}}$, ${\bf M_{y_j}}^{low}$ and ${\bf M_{y_j}}^{high}$ to construct images ${\bf y}^{(k+1)}$, ${\bf y}_{low}^{(k+1)}$ and ${\bf y}_{high}^{(k+1)}$, respectively} \ENDFOR \ENSURE{Clean Image ${\bf y}^{(L)}$} \end{algorithmic} \end{algorithm} \section{Experimental Results}\label{sec:results4} We consider eleven frequently used standard images to compare the proposed algorithm (GWNNM) with various state-of-the-art algorithms: BM3D \cite{dabov2007image}, LSSC \cite{mairal2009non}, SAIST \cite{Dongetal_SAIST_6319405}, WNNM \cite{GuWNNM6909762}. We use PSNR values for comparison which are reported in \cite{GuWNNM6909762}. \par We use the same default values of the parameters which were selected in WNNM, heuristically, depending upon the initially given noise level \cite{GuWNNM6909762}. 
Therefore, we skip the details of those parameters due to space limitations. However, we have set the size of the search window to $\Delta_w=15\times15$ instead of $30\times30$ for all the images of size $512\times512$. In fact, the block matching process in WNNM is computationally very expensive, and in the case of $\Delta_w=30\times30$ for $512\times512$ images, the computational cost is much greater than that for a $256\times256$ image. Furthermore, it is pointed out in \cite{salmon2010two} that selecting a larger search window has no major benefit in denoising results except for pseudo-periodic images like the fingerprint image. Therefore, for the fingerprint image we use the same value of $\Delta_w=30\times30$ as used in WNNM. The setting of the new parameter $\alpha$ has already been explained in the previous section. The other two parameters $\eta$ and $\tau$ are experimentally selected as $0.01$ and $0.5$, respectively. \par The comparison of the denoising results is presented in Table~\ref{tb:ComparisonPSNR}, where the best results are highlighted using bold faced values. Also, a limited comparison of visual quality for the BM3D, WNNM and GWNNM algorithms is shown in Fig.~\ref{fig:house_barbara_n50} at noise level $\sigma_n=50$. As shown in Table~\ref{tb:ComparisonPSNR}, WNNM and GWNNM always perform better than the rest of the algorithms. For low noise ($\sigma_n=10$) the performance of GWNNM is in general equivalent to that of WNNM. This is reasonable because the filtered noise (\ref{eq:varianceFilteredNoise}) contains negligible signatures of geometric details and needs only a small correction from the geometric estimate (\ref{eq:residualNoise_global}). However, for high noise levels ($\sigma_n\ge30$), GWNNM generally performs better than WNNM. In particular, at $\sigma_n=100$, the improvements in PSNR values are greater than $0.1$ dB for the images of house, jelly beans, peppers, Lena, Barbara and fingerprint.
Thus, the improved results, in particular for geometrically rich and complex images like Barbara, fingerprint, house and Lena, support our assumption that the estimation of residual noise needs modification to account for the geometric structure of the image. \section{Conclusion}\label{sec:Conc5} We have highlighted the importance of residual noise estimation for low rank optimization to further improve the results. Furthermore, we have proposed a method to make the noise estimate more precise by using the geometric structure of a given noisy image. Alternative effective approaches for noise estimation also exist and can be utilized. However, our primary intent was to indicate the deficiency in the existing residual noise estimation scheme, which only exploits the filtered noise from the previous iteration without considering the geometric details present in it. \bibliographystyle{ieeetr}
\section{Introduction} Black hole mass is a fundamental quasar property, and better measuring poorly known fundamental properties has the potential to lead to breakthroughs. Moreover, correlations between the central black hole mass in galaxies, both active and inactive, and other properties such as the stellar velocity dispersion and bulge luminosity (Marconi \& Hunt 2003; Tremaine et al. 2002; Kormendy \& Ho 2013), may indicate a connection necessary to understanding galaxy evolution more generally. Several methods exist to directly measure or indirectly estimate the central black hole masses of quasars (see Shen 2013 for a review). A primary method of measuring black hole masses is via reverberation mapping (RM, e.g., Peterson 1993). The underlying premise is that broad-line region (BLR) gas moves virially under gravity (Peterson \& Wandel 1999, Onken et al. 2003). This means that BLR gas at a distance R$_{\rm BLR}$ with a velocity $\Delta$V can be used to provide the central mass using an equation of the form: \begin{equation} M_{\rm BH} = f\frac{R_{\rm BLR} \Delta V^2}{G} \end{equation} The factor $f$\ includes our ignorance about geometry (and perhaps other complications), and is obtained by calibration against inactive galaxies (e.g., Onken et al. 2004). RM programs provide the radius R$_{\rm BLR}$ via time lags between the continuum and line flux, while the velocity $\Delta$V is measured from the variable portion of the emission-line profile. Both the Full Width at Half Maximum (FWHM) and the line dispersion $\sigma_{line}$ of the RMS profile have been used, in conjunction with an appropriately calibrated $f$. We will refer to the quantity $R_{\rm BLR} \Delta V^2/G$ as the virial product. Numerous campaigns over more than two decades (e.g. Clavel et al. 1991; Kaspi et al. 2000, 2007; Peterson et al. 2004, Bentz et al. 2009b; Denney et al. 
2010) have provided virial products for over 50 objects, which are not necessarily representative of active galaxies in general (e.g., Shen 2013). Because of the large observational resources required for RM, easier methods of estimating black hole masses have been developed, e.g., black hole mass scaling relationships (e.g., Vestergaard 2002). Single-epoch (SE) spectra, rather than RMS spectra, provide line-profile measurements, while the radius-luminosity relationship (e.g., Kaspi et al. 2000; Bentz et al. 2006) provides $R_{\rm BLR}$ rather than a time lag. These two points permit the modification of equation 1 and lead to: \begin{equation} M_{\rm BH} = f\frac{R_{\rm BLR} \Delta V^2}{G} = f\frac{(\lambda L_{\lambda})^{\gamma} \Delta V^2}{G}. \end{equation} The exponent $\gamma$\ appears to be consistent with 0.5, at least when the host galaxy contribution to the continuum is accounted for (e.g., Bentz et al. 2009a), as expected for some simple BLR models (e.g., Netzer 1990). In practice, the black hole mass can be calculated from the combination of the unscaled mass $\mu$ and a constant $a$, which provides the scaling factor: \begin{equation} \mu = \left(\frac{\Delta V}{{\rm 1000\ km\ s^{-1}}} \right)^2 \left( \frac{\lambda L_{\lambda}}{\rm 10^{44}\ ergs\ s^{-1}} \right)^{\gamma} \end{equation} \begin{equation} {\rm log}\ M_{\rm BH} = {\rm log}\ \mu + a \end{equation} The scaling relationships are not very precise, unfortunately, and may not be very accurate, either, as we shall demonstrate. Black hole masses calculated with these equations are typically uncertain to factors of 3-4 (Vestergaard \& Peterson 2006, hereafter VP06). Given their utility, needing only a SE spectrum, however, it is desirable to improve existing relationships (in particular for those using ultraviolet lines like C~IV $\lambda$1549, which is redshifted into the optical regime for high-z quasars).
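As a numerical illustration of equations 3 and 4, the sketch below evaluates the unscaled mass $\mu$ and $\log M_{\rm BH}$ for assumed inputs; the zero point $a$, the exponent $\gamma$, and the line width and luminosity used are placeholder values of a plausible order, not a published calibration:

```python
import math

# Minimal sketch of a single-epoch virial mass estimate: build the
# unscaled mass mu from the line width and continuum luminosity, then
# shift by the calibration constant a. gamma = 0.5 and a = 6.66 are
# illustrative assumptions here, not an endorsed calibration.

def log_mbh(fwhm_kms, lam_L_lam, gamma=0.5, a=6.66):
    """fwhm_kms in km/s, lam_L_lam in erg/s; returns log10(M_BH/M_sun)."""
    mu = (fwhm_kms / 1000.0) ** 2 * (lam_L_lam / 1e44) ** gamma
    return math.log10(mu) + a

# A hypothetical quasar with FWHM(C IV) = 4000 km/s and
# lambda*L_lambda = 1e46 erg/s:
print(log_mbh(4000.0, 1e46))   # roughly 8.9 for these illustrative inputs
```

Doubling the line width raises the mass by $0.6$ dex while a tenfold luminosity increase (with $\gamma=0.5$) raises it by only $0.5$ dex, which is why line-profile systematics like those discussed below matter so much.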
Despite possible biases in selection, RM samples have been used to calibrate the SE scaling relationships. The formulations of VP06 and Vestergaard \& Osmer (2009) have been two often-employed examples. Recently Park et al. (2013), using more reliable reverberation masses and myriad small improvements, have updated the C IV-based scaling relationship. There have been a number of concerns noted regarding black hole mass estimates using C~IV (e.g., Croom 2011, Assef et al. 2011, Trakhtenbrot \& Netzer 2012, etc.). The primary criticism is that the FWHM of the C~IV line does not always correlate well with that of H$\beta$ in SE spectra, and that the former is often narrower than the latter, the reverse of the results for the RMS profiles from RM campaigns (e.g., Onken et al. 2004). The SE profile of C~IV probably does not reflect purely virial motions. Denney (2012) compared the reverberating component of C~IV against average profiles, showing evidence for a low-velocity emission-line region that did not vary, consistent with low-velocity C~IV-emitting gas existing on much larger size scales (in accordance with analysis of gravitationally lensed systems, Sluse et al. 2011). These non-varying line cores can be interpreted as the ``intermediate-line region'' or ILR (Wills et al. 1993; Brotherton et al. 1994). Denney also showed that a profile shape parameter ($S = FWHM/\sigma_{line}$) measuring the contamination of the C~IV line by this non-virial component correlates with the differences between C~IV and H$\beta$ mass estimates and could perhaps be used to correct C~IV-derived masses. The degree of non-virial/ILR C~IV contamination also correlates with a suite of properties collectively known as ``Eigenvector 1'' (hereafter EV1; see Boroson and Green 1992; Brotherton \& Francis 1998; Sulentic 2007; etc.). 
In the ultraviolet part of the spectrum, these properties include the shape of the C~IV profile, as well as certain emission-line ratios and the difference between peak velocities. Shen et al. (2008; see also Shen \& Liu 2012) find a bias in C~IV black hole masses associated with these emission-line velocity shifts. Bian et al. (2012) also note a systematic difference between C~IV and Mg II based black hole masses that is related to the equivalent width of C~IV, another EV1 property. \begin{figure} \begin{center} \includegraphics[width=8.9 truecm]{./ew.eps} \end{center} \caption{ For the sample of Runnoe et al. (2013), we plot the log of the continuum-subtracted peak ratio of the $\lambda$1400 feature versus the log of the rest-frame equivalent width of C~IV $\lambda$1549. This shows that the peak ratio is also associated with EV1. } \label{fig:ew} \end{figure} Following up on Wills et al. (1993), Runnoe et al. (2013a, hereafter Paper 1) developed a new C~IV mass correction relying on the log of the ratio of the continuum-subtracted peak heights of the $\lambda$1400 blend (of Si IV and O IV]) and C~IV. Figure 1 shows a significant inverse correlation between the peak ratio and the equivalent width of C~IV, demonstrating that this peak ratio is also part of EV1. Paper 1 provided a mass correction term based on this peak ratio that improves the scatter between C~IV and H$\beta$ derived masses by $\sim$0.1 dex, or about 25\%. There was also a suggestion of a bias in the VP06 C~IV scaling relationship, such that C~IV based masses systematically differed on average from H$\beta$, which could result in part from an EV1 bias in the RM sample that it and other scaling relationships are based upon. Figure 2, showing distributions of a primary optical EV1 parameter, peak [O III] $\lambda$5007, for the RM sample of Park et al. (2013) and a complete sample of PG quasars (Boroson \& Green 1992), supports this hypothesis. Quasars from Park et al. 
(2013) tend to have strong [O III] $\lambda$5007 emission, but rarely very weak [O III] $\lambda$5007 emission, compared to the representative, complete sample. \begin{figure} \begin{center} \includegraphics[width=8.9 truecm]{./o3hist.eps} \end{center} \caption{ In the lower panel we plot the histogram of peak [O III] $\lambda$5007, a primary optical EV1 parameter, from the complete Palomar-Green subsample of Boroson \& Green (1992). The top panel shows a histogram (based on estimates from literature figures displaying spectra) for the same parameter for RM objects in the sample of Park et al. (2013), which has been used to calibrate SE C~IV black hole mass scaling relationships. } \label{fig:o3hist} \end{figure} Here we will examine quantitatively how sample biases in EV1 lead to systematic shifts in estimates of black hole mass using C~IV-based scaling relationships, and the implications of this for determining what the best practices should be going forward to pursue better black hole masses. In particular we are concerned about commonly used scaling relationships based on RM samples that, while possessing superior and direct black hole mass measurements, may be biased with respect to EV1. There may be several issues that affect SE C~IV black hole mass scaling relationships, and we use a careful approach to isolate the EV1 bias. It is beneficial to use consistent calibration, measurements, and quasi-simultaneous optical-UV spectra when possible, or risk offsets associated with those issues (e.g., see Denney et al. 2009 and 2013). The presumably improved Park et al. (2013) equation results in C~IV-based black hole masses systematically some 0.25 dex smaller than those of VP06, for instance. Ideally we would use the VP06 and Park et al. (2013) data and measurements, but there are several problems with that approach. First, they did not provide the peak $\lambda$1400/CIV measurements we require to reproduce the Paper 1 analyses. 
Second, after making our own measurements as described in \S 2, the dispersion of peak $\lambda$1400/CIV in the RM objects is too small to significantly correlate with the difference between the C~IV and H$\beta$ line widths as it does in our Paper 1 sample. For these reasons, we will create samples matched to the RM objects with our own data in order to determine how the EV1 bias affects calculated masses. By using our own high-quality data sets and our own measurements, we can avoid inconsistency. In \S 3, we determine black hole mass scaling relationships for subsamples drawn from the data set of Shang et al. (2011), each with different distributions of peak $\lambda$1400/C~IV measurements from Paper 1, and demonstrate the effect on SE scaling relationships. We repeat the same experiment both for a nearly complete sample and for a carefully selected matched sample to show the effect of an EV1 bias on existing samples of RM quasars. In \S 4, we discuss near-term and long-term resolutions to this problem, which will allow C~IV to be used as a more effective mass estimator. Finally, \S 5 summarizes our conclusions. \section{Data} \subsection{SED Sample} For our analysis, we use the sample from Paper 1, that of Shang et al. (2011), which includes 85 quasars drawn from several sources. The unifying factor in assembling this composite sample was the existence of quasi-simultaneous spectrophotometry (within weeks or less), obtained in the near-ultraviolet with the {\em Hubble Space Telescope (HST)}, and in the optical from ground-based telescopes at Kitt Peak National Observatory and McDonald Observatory. One part was based on objects observed by the {\em Far Ultraviolet Spectroscopic Explorer, FUSE} (Moos et al. 2000), extending the spectra into the far-UV (see e.g. Shang et al. 2005). Another subsample consists of radio-loud quasars specifically selected to study orientation effects (see e.g., Wills et al. 1995, Runnoe et al. 2013b). 
Finally, the last subsample is 22 of 23 objects in the complete ``PGX'' sample (Shang et al. 2007), which may be expected to be largely unbiased with respect to quasar spectral properties. Shang et al. (2011), Tang et al. (2012), and Paper 1 (Runnoe et al. 2013a) provide measurements and determinations of many properties of the quasars in this sample, including peak $\lambda$1400/C~IV, which is our ultraviolet EV1 indicator. A valuable property of the sample is that it includes a large range of types of quasars with diverse properties. In particular, the complete PGX sample is useful to make sure parameter space is sufficiently covered and is suggestive of the EV1 properties of a representative quasar sample. \begin{figure} \begin{center} \includegraphics[width=8.9 truecm]{./ev1hist4.eps} \end{center} \caption{Histograms of peak $\lambda$1400/C~IV for the samples used and discussed in this paper, from bottom to top: the sample of Runnoe et al. (2013), the PGX sample of Shang et al. (2007), our subsample with peak $\lambda$1400/C~IV matched to the Park et al. (2013) reverberation-mapped sample, and finally the reverberation-mapped samples of Park et al. (2013) and VP06. Dotted lines indicate the mean values in these panels (from top to bottom, $-0.74$, $-0.76$, $-0.77$, and $-0.56$). Colors code the bottom histogram according to the log (peak $\lambda$1400/C~IV) ratio: blue ($< -0.9$), cyan (from $-0.9$ to $-0.7$), green (from $-0.7$ to $-0.5$), and red ($> -0.5$). We will use the same colors in later figures. We draw attention to the PGX sample as being nearly complete, likely representing an unbiased distribution of peak $\lambda$1400/C~IV and other properties. } \label{fig:hist} \end{figure} \subsection{Reverberation-Mapped (RM) Samples} We want to know how possible biases in EV1 in heterogeneous RM samples may affect black hole masses obtained from the C~IV-based scaling relationships derived from them. 
In order to do this, we need to know their peak $\lambda$1400/C~IV distributions. We use two RM samples that represent the basis for a commonly used C~IV-based black hole mass scaling relationship and its recent update: VP06 and Park et al. (2013). We obtained ultraviolet spectra from the Multimission Archive at Space Telescope (MAST). These come not only from {\em HST} spectrographs, but also the {\em Hopkins Ultraviolet Telescope (HUT),} and the {\em International Ultraviolet Explorer (IUE)}. In a few cases where multiple epochs of spectra were available for an object, we used the one that had the best combination of signal-to-noise ratio and wavelength coverage containing the $\lambda$1400 feature. We fit the $\lambda$1400 feature and C~IV line using a multiple-Gaussian model and a local continuum following the method of Tang et al. (2012) discussed in Paper 1. Fits were individually inspected to ensure that noise spikes and absorption features did not cause erroneous measurements. We checked our fitting procedure against Paper 1 for the SED sample and found agreement within the uncertainties, as well as overall systematic agreement to better than a few per cent. Our Paper 1 approach was conservative, characterizing uncertainties on peak $\lambda$1400/C~IV as $\sim 15\%$. While this ensures measurement consistency between the samples, we noted that measurements of peak $\lambda$1400/C~IV for the same objects observed at different epochs usually agreed within 15\% (e.g., Mrk 335 -- 0.16 vs. 0.15), but not always (e.g., NGC 7469 -- 0.17 vs. 0.10). These differences are due to real, intrinsic variability and serve as a caution against the use of non-simultaneous data. The peak $\lambda$1400/C~IV measurements span an order of magnitude, and the observed variability is typically much smaller than that, but this is an issue of concern to investigate in the future. We also note that we do not include measurements for all objects in the samples of VP06 and Park et al. (2013). 
From VP06, we have excluded NGC 3516, NGC 4051, NGC 4151, and NGC 4593 because of absorption in C~IV making a peak flux too difficult to measure reliably. Also excluded is PG 1307+085, for which the $\lambda$1400 feature falls outside the spectral range. This leaves a sample of 22 objects in our VP06 sample. Many of the same objects are used by Park et al. (2013); however, they add PG 0804+761, Mrk 290, and NGC 4593, but exclude Mrk 110, Mrk 79, and NGC 4151. We also exclude NGC 3516, NGC 4051, and PG 1307+085 as we did from VP06 because of absorption or wavelength coverage issues that make a peak flux measurement very uncertain. Our edited Park et al. (2013) sample has 23 objects. Table 1 lists the spectra we measured and our adopted peak $\lambda$1400/C~IV measurements for these two samples. We do not use these peak $\lambda$1400/C~IV measurements directly, but only to select a well-matched sample with similar mean and dispersion from our SED sample for a self-consistent analysis. Table 1 includes peak $\lambda$1400/C~IV measurements for a typical simulated RM sample matched to Park et al. (2013). Again, we want to analyze our own objects with our own consistent measurements and avoid any artificial biases associated with differences in measurement methodology. We simulated a dozen RM samples, drawing 23 objects from the parent sample of Paper 1, matching each quasar in log peak $\lambda$1400/C~IV to within 0.1 dex (25\%) of a corresponding object from Park et al. (2013). We selected one of these as our ``simulated RM sample'' on the basis of a very close match in mean log peak $\lambda$1400/C~IV (both $-$0.76). Our choice of illustrative RM sample also possessed the simulated-sample average value of $a$ = 6.87 based on the intercept fitting described in $\S$ 3, and was thus representative of the simulated samples and not an outlier. 
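The matched-draw step described above can be sketched as follows. This is our own illustrative reconstruction, not the paper's actual code (the paper does not specify its drawing algorithm), and the function name is hypothetical.

```python
import random

def draw_matched_sample(parent_ratios, target_ratios, tol=0.1, seed=0):
    """Draw one simulated RM sample: for each target log peak 1400/C IV value,
    pick (without replacement) a random parent-sample value within tol dex."""
    rng = random.Random(seed)
    pool = list(parent_ratios)
    sample = []
    for target in target_ratios:
        candidates = [v for v in pool if abs(v - target) <= tol]
        if not candidates:
            raise ValueError("no parent object within %.2f dex of %.2f" % (tol, target))
        pick = rng.choice(candidates)
        pool.remove(pick)  # each parent object may be used only once
        sample.append(pick)
    return sample
```

Repeating such draws and keeping one whose mean matches the RM-sample mean ($-0.76$ here) yields a sample analogous to the ``Sim'' set of Table 1.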
\begin{table*} \centering \begin{minipage}{140mm} \caption{Log peak $\lambda$1400/C~IV for Reverberation-Mapped Samples} \begin{tabular}{@{}lllll@{}} \hline Name & peak $\lambda$1400/C~IV$^a$ & Source (Instrument) & Date & Samples \\ \hline 3C 110 & 0.10 & HST (FOS) & 1995-03-16 & Sim \\ 3C 120 & 0.13 & IUE (SWP) & 1994-02-19,27; 1994-03-11 & P13, VP06 \\ 3C 207 & 0.16 & HST (FOS) & 1991-12-04 & Sim \\ 3C 263 & 0.18 & HST (FOS) & 1991-11-06 & Sim \\ 3C 334 & 0.13 & HST (FOS) & 1991-09-07 & Sim \\ 3C 390.3 & 0.17 & HST (FOS) & 1996-03-31 & P13, VP06 \\ 4C 01.04 & 0.12 & HST (FOS) & 1994-09-11 & Sim \\ 4C 12.40 & 0.16 & HST (FOS) & 1995-02-26 & Sim \\ 4C 41.21 & 0.17 & HST (FOS) & 1992-12-09,10 & Sim \\ 4C 49.22 & 0.15 & HST (FOS) & 1995-05-07 & Sim \\ 4C 73.18 & 0.16 & HST (FOS) & 1994-09-07 & Sim \\ Ark 120 & 0.17 & HST (FOS) & 1995-07-29 & VP06, P13 \\ Fairall 9 & 0.21 & HST (FOS) & 1993-01-22 & P13, VP06 \\ IRAS F07546+3928 & 0.09 & HST (STIS) & 2000-01-28 & Sim \\ Mrk 110 & 0.16 & IUE (SWP) & 1988-02-28,29 & VP06 \\ Mrk 279 & 0.15 & HUT & 1995-03-05,11 & VP06 \\ Mrk 279 & 0.12 & HST (COS) & 2011-06-27 & P13 \\ Mrk 290 & 0.11 & HST (COS) & 2009-10-28 & P13 \\ Mrk 335 & 0.16 & HST (FOS) & 1994-12-16 & VP06 \\ Mrk 335 & 0.15 & HST (COS) & 2009-10-31; 2010-02-08 & P13 \\ Mrk 509 & 0.18 & HST (FOS) & 1992-06-21 & VP06 \\ Mrk 509 & 0.14 & HST (COS) & 2009-12-10,11 & P13 \\ Mrk 590 & 0.21 & IUE (SWP) & 1991-01-14 & P13, VP06 \\ Mrk 79 & 0.17 & IUE (SWP) & 1979-11-15 & VP06 \\ Mrk 817 & 0.19 & IUE (SWP) & 1981-11-06,08 & VP06 \\ Mrk 817 & 0.24 & HST (COS) & 2009-08-04; 2009-12-28 & P13 \\ NGC 3783 & 0.16 & HST (FOS) & 1992-07-27 & VP06 \\ NGC 3783 & 0.12 & HST (COS) & 2011-05-26 & P13 \\ NGC 4593 & 0.20 & HST (STIS) & 2002-06-23,24 & P13 \\ NGC 5548 & 0.11 & HST (FOS) & 1993-04-26 & VP06 \\ NGC 5548 & 0.09 & HST (COS) & 2011-06-16,17 & P13 \\ NGC 7469 & 0.17 & HST (FOS) & 1996-06-18 & VP06 \\ NGC 7469 & 0.10 & HST (COS) & 2010-10-16 & P13 \\ OS 562 & 0.19 & HST (FOS) & 
1992-08-11 & Sim \\ PG 0026+129 & 0.16 & HST (FOS) & 1994-11-27 & VP06, P13 \\ PG 0052+251 & 0.17 & HST (FOS) & 1993-07-22 & VP06, P13 \\ PG 0804+761 & 0.31 & HST (COS) & 2010-06-12 & P13 \\ PG 0844+349 & 0.35 & HST (STIS) & 1999-10-21 & Sim \\ PG 0953+414 & 0.17 & HST (FOS) & 1991-06-18 & VP06, P13 \\ PG 1100+772 & 0.14 & HST (FOS) & 1993-02-03 & Sim \\ PG 1114+445 & 0.20 & HST (FOS) & 1996-05-13 & Sim\\ PG 1226+023$^b$ & 0.35 & HST (FOS) & 1991-01-14,15,16 & VP06, P13, Sim \\ PG 1229+204 & 0.23 & IUE (SWP) & 1982-05-01 & VP06, P13 \\ PG 1411+442 & 0.33 & HST (FOS) & 1992-10-03 & Sim \\ PG 1425+267 & 0.14 & HST (FOS) & 1996-06-29 & Sim \\ PG 1426+015 & 0.16 & IUE (SWP) & 1985-03-01,02 & VP06, P13 \\ PG 1440+356 & 0.30 & HST (FOS) & 1996-12-05 & Sim \\ PG 1512+370 & 0.09 & HST (FOS) & 1992-01-26 & Sim \\ PG 1613+658 & 0.33 & IUE (SWP) & 1991-02-25 & VP06 \\ PG 1613+658 & 0.26 & HST (COS) & 2010-04-08,09,10 & P13 \\ PG 1626+554 & 0.23 & HST (FOS) & 1996-11-19 & Sim \\ PG 2130+099 & 0.23 & HST (HRS) & 1995-07-24 & VP06 \\ PG 2130+099 & 0.20 & HST (COS) & 2010-10-28 & P13 \\ PG 2214+139 & 0.21 & HST (STIS) & 2000-06-19 & Sim \\ PG 2251+113 & 0.17 & HST (FOS) & 1991-10-23 & Sim \\ PKS 2216$-$03 & 0.25 & HST (FOS) & 1992-08-27,28 & Sim \\ \hline \end{tabular} $^a$\ As discussed in the text, fitting errors range up to about $15\%$ (see also Runnoe et al. 2013), but measurements made at different epochs can differ more than this, reflecting a larger intrinsic scatter. $^b$\ The spectrum used by VP06 and P13 does not include the $\lambda$1400 feature, so we have used the measurement from Runnoe et al. (2013) for all samples. \end{minipage} \end{table*} Figure 3 displays histograms of log (peak $\lambda$1400/C~IV) for the samples of Paper 1, the PGX subsample (Shang et al. 2007), our simulated RM sample, and the VP06 and Park et al. (2013) RM samples edited as described above. 
Log distributions of this quantity, like some other EV1 properties such as the line ratio of [O III]/Fe II, appear more normally distributed than the linear values. We also note here that the peak $\lambda$1400/C~IV measurements from the RM samples do not correlate with the differences between the C~IV and H$\beta$ line widths as they do for our sample as seen in Paper 1. The lack of correlation is not inconsistent with our previous findings, but rather reflects the relatively narrow range in peak $\lambda$1400/C~IV values. If the RM samples were unbiased, they would be expected to have mean values and distributions more closely resembling those of the PGX sample, which shows a wider range of peak $\lambda$1400/C~IV values shifted to larger values in the histograms of Figure 3. \section{Analysis} Using our data from Paper 1, we first demonstrate quantitatively how changing the EV1 distribution systematically changes the intercept of the basic C~IV black hole scaling relationships. Second, we compare the intercepts obtained for our nearly complete PGX subsample and our simulated RM subsample to determine the approximate size of the EV1 bias in black hole masses based on C~IV scaling relationships from VP06 and Park et al. (2013). We conduct our analysis using both FWHM and $\sigma_{line}$. \subsection{Ultraviolet EV1 and the Intercept Bias} We first explore the effect of a bias in EV1, using different samples with systematically different peak ($\lambda$1400/C~IV), on the intercept of the C~IV scaling relationship. To do this we employ the formalism of VP06 with minor modifications. We compute the unscaled mass $\mu$ from equation 3, explicitly setting the luminosity exponent $\gamma = 0.5$. Since our goal is to focus only on the effect of EV1 distributions on mass offsets, the problem is simplified if we can fix as many parameters as possible, and there is currently no evidence that the radius-luminosity relationship depends on EV1. 
We use FWHM$_{\rm C\,IV}$ and $\sigma_{\rm C\,IV}$ in place of $\Delta V$ in separate fits. Figure 4 plots black hole mass as determined by the H$\beta$\ scaling relationship from VP06 on the Y-axis, and $\mu$ on the X-axis for both measures of $\Delta V$ using the same data and measurements as Paper 1. Symbols are coded for different ranges of peak ($\lambda$1400/C~IV) as indicated, and a typical error bar is shown for an H$\beta$ mass uncertainty of 0.4 dex (VP06) and 0.06 dex for peak ($\lambda$1400/C~IV). Strong correlations are present, consistent with Paper 1, VP06, and Park et al. (2013). Furthermore it is clear that the different subsamples are segregated, which we can quantify by determining the required intercepts to add to the unscaled mass to predict the H$\beta$-based mass. \begin{figure*} \begin{minipage}[!b]{8cm} \centering \includegraphics[width=8cm]{./fwhmvp.eps} \end{minipage}\hspace{0.6cm} \hspace{0.6cm} \begin{minipage}[!b]{8cm} \centering \includegraphics[width=8cm]{./sigmavp.eps} \end{minipage} \hspace{0.6cm} \caption{H$\beta$-based single-epoch black hole masses calculated using the equation of VP06 (and tabulated by Tang et al. 2012) versus FWHM-based C~IV virial products labeled $\mu$ (left panel) for the sample of Runnoe et al. (2013a). Symbols are color-coded and differently shaped according to their value of log peak $\lambda$1400/C~IV as in Figure 3. A conservative error bar is shown in the lower right of the figure. The right panel shows the same plot, except using a $\sigma_{line}$-based virial product. } \label{fig:vp} \end{figure*} We used the robust least-squares fitting program GaussFit (Jefferys et al. 2013) to find a line minimizing the differences between the two sides of equation 4, allowing only the zero-point offset $a$\ to vary. We assumed typical uncertainties for all quantities, which include $M_{\rm BH}$ estimates (0.43 dex, e.g. 
VP06) and on parameters used to compute $\mu$ (10$\%$ on $\Delta$V and 3\% on continuum luminosity -- see Paper 1 and Shang et al. 2011). The fitting procedure provided the optimal offset $a$ and a corresponding 1 $\sigma$ uncertainty. The fitting results are given in Table 2 and show a strong and systematic trend: as the sample mean of peak ($\lambda$1400/C~IV) increases, the intercept $a$ decreases when using either FWHM or $\sigma_{line}$ of C~IV. Again using GaussFit, we computed a best-fit line to determine these relationships, for which we have assumed the standard error in the mean for the uncertainty on the log peak ($\lambda$1400/C~IV) term for each subsample, and the tabulated uncertainties on $a$. These fits are shown in Figure 5 and can be written for FWHM-based C~IV masses: \begin{equation} a = (-1.18 \pm 0.05)( {\rm log\ peak}\ \lambda 1400/ {\rm C~IV}) + (6.01 \pm 0.04) \end{equation} and for $\sigma_{line}$-based C~IV masses: \begin{equation} a = (-0.80 \pm 0.12) ( {\rm log\ peak}\ \lambda 1400/ {\rm C~IV}) + (6.33 \pm 0.09) \end{equation} We note that, for FWHM-based masses, the agreement with the slope reported in Paper 1 is good: $-1.18\pm0.05$ compared with $-1.227\pm0.136$ (their eq. 3). For $\sigma_{line}$-based C~IV masses, the slope is flatter and the consistency is poorer: $-0.80\pm0.12$ here compared to $-0.220\pm0.068$ (their eq. 4). As before, we conclude this ultraviolet EV1 correction works well for FWHM-based scaling relationships, but may be weaker when applied to $\sigma_{line}$-based scaling relationships. As discussed in Paper 1, this may be the result of FWHM being more sensitive than $\sigma_{line}$ to the EV1 variation, in the form of a non-reverberating C~IV ILR component. 
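Equations 5 and 6 make the size of a sample-mean EV1 offset easy to check numerically. The sketch below is ours; it evaluates the FWHM-based zero-point shift between the simulated-RM and PGX sample means given in Table 3.

```python
def zero_point_fwhm(log_peak_ratio):
    """Equation 5: FWHM-based zero point a versus sample-mean log peak 1400/C IV."""
    return -1.18 * log_peak_ratio + 6.01

def zero_point_sigma(log_peak_ratio):
    """Equation 6: sigma_line-based zero point a."""
    return -0.80 * log_peak_ratio + 6.33

# Zero-point shift implied by moving from the simulated RM sample mean
# (-0.76) to the PGX sample mean (-0.62), as listed in Table 3:
print(round(zero_point_fwhm(-0.76) - zero_point_fwhm(-0.62), 3))  # 0.165
```

The $\sim$0.17 dex shift from equation 5 agrees well with the directly fitted zero-point difference in Table 3 ($6.87 - 6.68 = 0.19$ dex).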
\begin{figure*} \begin{minipage}[!b]{8cm} \centering \includegraphics[width=8cm]{./fwhmline2.eps} \end{minipage}\hspace{0.6cm} \hspace{0.6cm} \begin{minipage}[!b]{8cm} \centering \includegraphics[width=8cm]{./sigmaline2.eps} \end{minipage} \hspace{0.6cm} \caption{ The fitted-line intercept $a$\ versus log peak $\lambda$1400/C~IV for the four binned samples, with symbols and colors to match previous figures, for the FWHM-based (left panel) and $\sigma_{line}$-based (right panel) virial products. The error bars displayed represent 1 $\sigma$ errors on $a$\ and the standard error of the mean for the log(peak $\lambda$1400/C~IV) subsamples. We note the typical fitting uncertainties on $a$\ quoted by VP06 are less than 0.02 dex, much smaller than the effect illustrated here. }\label{fig:linefits} \end{figure*} \begin{table*} \centering \begin{minipage}{140mm} \caption{C~IV Mass Scaling Relationships: EV1 Subsamples and Zero Points} \begin{tabular}{@{}ccccc@{}} \hline log peak $\lambda$1400/C~IV Range & $<$log Peak $\lambda$1400/C~IV$>$ $\pm$ SEMean & N & FWHM Zero Point $a$ & $\sigma_{line}$ Zero Point $a$ \\ \hline $-0.90$ to $-1.10$ & $-1.00 \pm 0.04$ & 12 & $7.20 \pm 0.07$ & $7.14 \pm 0.08$ \\ $-0.70$ to $-0.90$ & $-0.81 \pm 0.01$ & 32 & $6.95 \pm 0.07$ & $6.95 \pm 0.07$ \\ $-0.50$ to $-0.70$ & $-0.61 \pm 0.02$ & 13 & $6.76 \pm 0.09$ & $6.75 \pm 0.06$ \\ $-0.10$ to $-0.50$ & $-0.35 \pm 0.04$ & 11 & $6.42 \pm 0.14$ & $6.65 \pm 0.14$ \\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{PGX and Reverberation-Mapped Samples} We used the above equations along with the mean values of log peak $\lambda$1400/C~IV for the VP06 and Park et al. (2013) RM samples to estimate their zero-point offsets and compare them to those of the nearly complete PGX subsample in order to compute the bias. Perhaps better, however, is to fit $a$\ directly as in equation 4 for our simulated RM sample. In any case, both approaches give similar results. See Table 3. 
\begin{table*} \centering \begin{minipage}{140mm} \caption{Zero Points for PGX and Simulated RM Samples} \begin{tabular}{@{}ccccc@{}} \hline Sample & $<$log peak $\lambda$1400/C~IV$>$ $\pm$ SEMean& N & FWHM Zero Point $a$ & $\sigma_{line}$ Zero Point $a$ \\ \hline Simulated RM & $-0.76 \pm 0.03$ & 23 & $6.87\pm0.07$ & $6.92\pm0.07$ \\ PGX & $ -0.62\pm0.05 $ & 22 & $6.68\pm0.10$ & $6.69\pm0.07$ \\ \hline \end{tabular} \end{minipage} \end{table*} The mean value of log peak $\lambda$1400/C~IV is $-$0.62 in the PGX sample, and $-0.76$\ with a tighter distribution in the simulated RM sample. This latter value is consistent with the average value of $-$0.76 computed for the values of our edited Park et al. (2013) sample. Taking into account that our PGX subsample is not totally complete and contains only 22 objects, that we have had to edit the VP06 and Park et al. (2013) samples slightly, which contain only 22 and 23 objects, respectively, and that matching those samples can only be done approximately, we prefer to be quantitatively conservative. The zero-point offset $a$\ in the FWHM-based C~IV black hole scaling relationship derived using RM samples differs from that obtained for the PGX sample, and will yield masses that are systematically high by $\sim$0.2 dex or $\sim$50\%. There is a similar bias for $\sigma_{line}$-based C~IV masses, which is smaller but of the same order. \section{Discussion} We are now entering an era of identifying first and second-order corrections to SE quasar black hole scaling relationships. Real line profiles are complex and do not seem to always represent gas moving in a purely virial manner. Moreover, accuracy and precision are both important, and efforts must be made not only to reduce scatter in scaling relationships, but also to get the zero point right. Specifically, which measurements should be made to yield the best black hole masses? For instance, is the choice of measuring $\Delta$V using FWHM or $\sigma_{line}$ preferred? 
A number of articles have suggested that the latter may be a better choice (e.g. Collin et al. 2006). We have shown here that $\sigma_{line}$-based masses also suffer an EV1 bias. The $\sigma_{line}$-based masses can also be biased when measured using spectra with low signal-to-noise ratios (SNR) (e.g. Denney et al. 2013). Thus FWHM, being a more robust measurement for typical lower SNR quasar spectra from the Sloan Digital Sky Survey (SDSS, e.g., Shen et al. 2011), is likely preferred in that case. Working on FWHM-based scaling relationships is therefore valuable even if $\sigma_{line}$ may be a less biased measurement in high-SNR spectra. \subsection{Current and Future Bias Corrections} If an investigator wants the best black hole mass with only rest-frame ultraviolet spectra of low or moderate SNR, what is the best course to take? If the object is in the SDSS Data Release 7, they can adopt the Shen et al. (2012) values already given to them, based on the VP06 prescription, but decrease the value by 0.2 dex (a factor of $\sim$1.6) to take into account the typical EV1 bias. If they want to improve on that average EV1 correction, they can make additional measurements of the peak heights of the $\lambda$1400 feature and C~IV and apply equation 3 of Paper 1. This is also a reasonable course for updating any C~IV based masses already in use that were computed using VP06 formulations. This is probably not ideal, however, given the changes between the work of VP06 and that of Park et al. (2013), who made many small improvements that together led to significantly different C~IV scaling relationships. An average correction for EV1 bias can still be applied, decreasing masses by $\sim$0.2 dex. 
If measurements of the peaks of the $\lambda$1400 feature and C~IV can be made, then a better, individualized correction is possible: \begin{equation} M_{\rm BH} = M_{\rm BH}({\rm FWHM_{C~IV}, P13}) - 1.23\ {\rm log\ peak}\ \frac{\lambda 1400}{\rm C IV} - 0.91 \end{equation} The above equation assumes the Paper 1 slope on the EV1 term and a RM sample mean of log peak $\lambda$1400/C~IV of $-$0.76. While we have provided a second decimal place on the numbers in the equation, we recommend rounding results to only the first decimal place. We suggest the Paper 1 slope ($-1.23$ rather than $-1.18$) because the previous analysis does not bin the data, which can have a small effect on the fitted slope, and note that the larger uncertainty is the result of conservative errors on individual points compared to the actual scatter in the data points that we used for the subsamples. This equation also assumes that our EV1 correction does not change for higher luminosity, higher redshift quasars, but this should be explicitly and quantitatively investigated. The correction could change if the ``Baldwin effect'' (Baldwin 1977), the inverse correlation between emission-line equivalent width and continuum luminosity, differs between C~IV and the $\lambda$1400 feature, or if it differs between the contaminating ILR component and the rest of the C~IV emission line. Shang et al. (2003) found that the narrower line core of C~IV correlated with luminosity in the sense of the Baldwin effect, and therefore a dependence on luminosity seems plausible, if not likely. Furthermore, at higher redshifts whatever unknown property or properties driving EV1 could differ from those in our lower redshift samples, or differ in their manifestation. \subsection{Implications for Current and Future RM Campaigns} It is not necessary to give up using RM samples to derive black hole mass scaling relationships. 
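The individualized correction above is straightforward to apply. The sketch below is ours, with an illustrative input mass; it simply evaluates equation 7.

```python
def corrected_log_mass(log_mbh_p13, log_peak_ratio):
    """Equation 7: EV1-corrected log M_BH, given an FWHM-based C IV mass
    from the Park et al. (2013) relationship and log(peak 1400 / peak C IV)."""
    return log_mbh_p13 - 1.23 * log_peak_ratio - 0.91

# Near the RM-sample mean (log peak ratio = -0.76) the correction is small:
print(round(corrected_log_mass(8.0, -0.76) - 8.0, 3))  # 0.025
```

Objects with larger peak ratios (toward the NLS1 end of EV1) receive a negative correction, as expected if the RM calibration overestimates their masses.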
In principle such samples have the most reliable black hole masses and it is desirable to use them when possible. Ideally, these samples can be expanded to include a larger range in EV1, particularly objects at the narrow-line Seyfert 1 (NLS1) end of the trend at high values of peak $\lambda$1400/C~IV, which are currently underrepresented, at least among objects that also have rest-frame ultraviolet spectra covering the C~IV emission line. New ultraviolet {\em HST} spectroscopy of NLS1s that have been reverberation mapped (e.g., Du et al. 2014) would be quite useful in expanding the sample of RM quasars with a wider range of EV1 properties. RM campaigns should, more generally, endeavour to include the full range of all types of broad-lined AGNs in order to avoid systematic biases of all types, and not just target the easiest and most cooperative objects. We must be concerned about some NLS1s, however, given suspicions that the presence of extreme high-ionization winds emitting C~IV might prevent accurate mass estimation (e.g., Vestergaard 2011). Additionally, the currently existing EV1 bias could result from a physical effect: NLS1s may not vary as strongly as other broad-lined Seyfert galaxies and quasars (e.g., Ai et al. 2013), making them more challenging targets for reverberation campaigns. Another potential issue is that EV1 properties may result in biases not only of the C~IV derived masses, but also of the H$\beta$ derived masses. Du et al. (2014) find that two of their NLS1s with extremely high accretion rates have time lags significantly shorter than expected based on existing R$_{\rm BLR}$ - L relationships, which may lead to systematic overestimation of black hole masses for similar objects using existing H$\beta$ scaling relationships. A statistically larger number of reverberation-mapped AGNs is required to more fully investigate this result. 
Despite these caveats, there is reason for optimism, as improvements are clear and more can be expected in the future given these and other approaches. Of particular interest is the new, large reverberation mapping effort described by Shen et al. (2015), which promises to deliver dozens of new broad-line time lags for a homogeneously selected sample of quasars likely including the full range in EV1. Quasar black hole mass determinations will improve, and RM efforts are still needed to drive improvements. \section{Summary} We have explicitly demonstrated how samples biased in EV1, as probed by our ultraviolet indicator peak $\lambda$1400/C~IV, produce biased black hole mass scaling relationships. We have also determined quantitatively the size of the effect and its general consistency with our previous work. We have shown how to make both average and individualized corrections to black hole masses estimated using the C~IV line based on equations derived from reverberation-mapped samples, which are popular but heterogeneous and biased in EV1. They, on average, lead to mass estimates that are $\sim$0.2 dex or about 50\% too high. \section*{Acknowledgments} We would like to thank Kelly Denney for helpful discussions during the preparation of this work. M. S. Brotherton acknowledges support through the Space Telescope Science Institute, AURA, through Grant HST-AR-13237.01-A. Z. Shang acknowledges support from the National Natural Science Foundation of China through Grant No. 10773006 and Tianjin Distinguished Professor Funds.
\section{} Let $R$ be an associative ring with identity. By a module and an $R$-module we mean a left unital $R$-module. A submodule $A$ of an $R$-module $B$ is called a \emph{pure submodule} \cite{Sten} if for any commutative diagram with $K$ a finitely generated submodule of a free $R$-module $F$ $$\begin{tikzcd} K \arrow[hook]{r}\arrow{d}{} &F\arrow{d}{} \arrow[dashed]{ld}\\ A\arrow[hook]{r}&B \end{tikzcd}$$ there is a map $F \rightarrow A$ making the upper triangle commute. A module $A$ is called \emph{absolutely pure} \cite{Mdx} if it is pure in every module containing it as a submodule or, equivalently, if $A$ is pure in some injective module, or in its injective envelope \cite{M}. Our goal is to study modules that are pure in their quasi injective envelopes. Of course, if a module is pure in every quasi injective module containing it, then it must be absolutely pure. So we give a weaker form of purity, \emph{self purity}, and study modules that are self pure in every module containing them as submodules. This turns out to be equivalent to saying that the module is self pure in some quasi injective module, or self pure in its quasi injective envelope. The quasi injective envelope of an $R$-module $M$ is denoted $Q(M)$. By $\Omega(M)$ we mean the set of all left ideals $L$ of $R$ such that $L \supseteq \ann (m)$ for some $m \in M$. The filter generated by $\Omega(M)$ in the lattice of left ideals of $R$ is denoted $\overline{\Omega}(M)$. Recall that L. Fuchs \cite{Fuchs} proved that a module $M$ is quasi injective if and only if every homomorphism $f$ into $M$ from a left ideal $L$ of $R$ whose kernel is in $\overline{\Omega}(M)$ extends to a homomorphism $R \rightarrow M$. This gives a generalization of Baer's condition for injective modules. 
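In the diagrammatic form used throughout this note, Fuchs's criterion asserts the existence of the dashed arrow: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R \arrow[dashed]{ld}\\ M & \end{tikzcd}$$ where $L$ is a left ideal of $R$ and $f : L \rightarrow M$ is any homomorphism with $\ker f \in \overline{\Omega}(M)$.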
On the other hand, absolutely pure modules $A$ are characterized by the property that for any finitely generated submodule $K$ of a free module $F$, any homomorphism $K \rightarrow A$ can be extended to a homomorphism $F \rightarrow A$ \cite{M}. Generalizing the above facts, we say that a module $A$ is absolutely self pure if any map from a finitely generated left ideal of $R$ into $A$ whose kernel is in $\overline{\Omega}(A)$ can be extended to a map $R \rightarrow A$. Regular and left noetherian rings are characterized using properties of absolutely self pure modules. \section{} \begin{definition} A submodule $A$ of an $R$-module $B$ is called a \emph{self pure} submodule of $B$ (denoted $A \leq^{sp} B$) if the following condition holds: For any finitely generated left ideal $L$ of $R$ and any map $f : L \rightarrow A$ with $\ker f \in \overline{\Omega}(A)$, if there is a map $R \rightarrow B$ making the following diagram commutative: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R\arrow{d} \arrow[dashed]{ld}\\ A\arrow[hook]{r}&B \end{tikzcd}$$ then there exists a map $ R \rightarrow A$ making the upper triangle commutative. \end{definition} Any pure submodule of a module is self pure, but not conversely: one may take any quasi injective module that is not absolutely pure, which must clearly be self pure in its injective envelope, but of course not pure. More generally, one can study purity with respect to another module $M$. Precisely, a submodule $A$ of an $R$-module $B$ is called an $M$-\emph{pure} submodule of $B$ if the following condition holds: For any finitely generated left ideal $L$ of $R$ and any map $f : L \rightarrow A$ with $\ker f \in \overline{\Omega}(M)$, if there is a map $R \rightarrow B$ making the above diagram commutative then there exists a map $ R \rightarrow A$ making the upper triangle commutative. Therefore, $A$ is $A$-pure in $B$ exactly when $A$ is self pure in $B$. 
However, we will restrict our attention to the concept of self purity. \begin{proposition} \label{sptrans} If $A$, $B$ and $C$ are $R$-modules such that $A \leq^{sp} B$ and $B \leq^{sp} C$ then $A \leq^{sp} C$. \begin{proof} Consider the following diagram: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R\arrow{d} \\ A\arrow[hook]{r}&C \end{tikzcd}$$ where $L$ is a finitely generated left ideal of $R$ with $\ker f \in \overline{\Omega}(A)$ and $R \rightarrow C$ is a map making the diagram commutative. Then obviously, we can consider $f$ as a map $L \rightarrow B$ for which we have a commutative diagram: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R\arrow{d} \\ B\arrow[hook]{r}&C \end{tikzcd}$$ Since $B \leq^{sp} C$, there is an extension $g : R \rightarrow B$ of $f$, so that the following square commutes: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R\arrow{d}{g} \\ A\arrow[hook]{r}&B \end{tikzcd}$$ and since $A \leq^{sp} B$, there is an $h : R \rightarrow A$ extending $f$. \end{proof} \end{proposition} \begin{proposition} \label{spantitrans} If $A \subseteq B \subseteq C$ are $R$-modules such that $A \leq^{sp} C$ then $A \leq^{sp} B$. \begin{proof} Consider the following commutative diagram with the obvious maps: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R\arrow{d} &\\ A\arrow[hook]{r}&B \arrow[hook]{r}&C \end{tikzcd}$$ Since $A \leq^{sp} C$, there is a $g : R \rightarrow A$ extending $f$. \end{proof} \end{proposition} \begin{definition} A module is called \emph{absolutely self pure} if it is self pure in every module containing it as a submodule. \end{definition} It is clear that absolutely pure modules and quasi injective modules are examples of absolutely self pure modules. If a module $A$ is self pure in some quasi injective module then by Proposition \ref{spantitrans}, it must be self pure in its quasi injective envelope $Q(A)$. 
So if $A$ is contained in some other module $B$, then as $Q(A) \leq^{sp} Q(B)$ we must have by Propositions \ref{sptrans} and \ref{spantitrans} that $A \leq^{sp} B$. Therefore, a module is absolutely self pure if and only if it is self pure in some quasi injective module, if and only if it is self pure in its quasi injective envelope. \begin{theorem} \label{absspurechar} A module $A$ is absolutely self pure if and only if for each finitely generated left ideal $L$ of $R$ and each map $f : L \rightarrow A$ with $\ker f \in \overline{\Omega}(A)$ there is an extension map $ R \rightarrow A$ of $f$. \begin{proof} Consider the following commutative diagram: $$\begin{tikzcd} L \arrow[hook]{r}\arrow{d}{f} &R\arrow{d}{g} &\\ A\arrow[hook]{r}&Q(A) \end{tikzcd}$$ where $Q(A)$ denotes the quasi injective envelope of $A$. The existence of $g$ is guaranteed by the quasi injectivity of $Q(A)$. Now $A$ is self pure in $Q(A)$ if and only if $f$ can be extended to a map $R \rightarrow A$. \end{proof} \end{theorem} By Proposition \ref{sptrans}, a self pure submodule of an absolutely self pure module is again absolutely self pure. In particular, direct summands of absolutely self pure modules are absolutely self pure. \begin{theorem} A module $A$ is absolutely self pure if and only if any direct sum of copies of $A$ is absolutely self pure. \begin{proof} Suppose that $A$ is absolutely self pure and let $f$ be a map from a finitely generated left ideal $L$ of $R$ into $A^{(I)}$ for some index set $I$ such that $\ker f \in \overline{\Omega}(A^{(I)})$, i.e. $\ker f \supseteq \bigcap_{j \in J} \ann ((a_i)_j)$ for some finite set $J$. But each $(a_i)_j$ has only finitely many non-zero coordinates $a_i$. Therefore $\ker f$ contains the intersection of the annihilators of a finite set of individual $a_i$'s and hence it belongs to $\overline{\Omega}(A)$. Let $\{l_1, \cdots, l_n\}$ be a generating set for $L$. 
Each of the images $f(l_i)$ has finite support, so $f$ can be considered as a map $L \rightarrow A^{(K)}$ for some finite subset $K$ of $I$, and hence it is the finite direct sum of coordinate maps $f_k$ into each factor $A$. Clearly $\ker f_k \supseteq \ker f$ and so $\ker f_k \in \overline{\Omega}(A)$ for each coordinate map $f_k$. By the absolute self purity of $A$, each $f_k$ extends to a map $g_k : R \rightarrow A$. The map whose coordinate maps are the $g_k$'s is the desired extension of $f$. The other implication is clear. \end{proof} \end{theorem} Left noetherian rings are precisely the rings over which every absolutely pure module is (quasi) injective \cite[Theorem 3]{M} and \cite[Theorem 8]{(Flat)modules}. It is clear that over such rings the concepts of absolutely self pure module and quasi injective module coincide, and by \cite[Theorem 8]{(Flat)modules} the converse is also true. So we have the following characterization of left noetherian rings: \begin{theorem} \label{noethchar} A ring $R$ is left noetherian if and only if every absolutely self pure $R$-module is quasi injective. \qed \end{theorem} Recall that a ring $R$ is called regular if every principal left ideal of $R$ is a direct summand. Over a regular ring, all modules are absolutely pure and hence all modules are absolutely self pure. The converse is also true: \begin{theorem} \label{regchar} A ring $R$ is regular if and only if every left $R$-module is absolutely self pure. \begin{proof} ($ \Rightarrow $) Clear. ($ \Leftarrow $) Given a principal left ideal $L$ of $R$, let $f : L \rightarrow L \oplus R$ be the map defined by $a \mapsto (a,0)$. This map clearly has kernel containing $\ann (0,1)$, so $\ker f \in \overline{\Omega}(L \oplus R)$. By assumption $L \oplus R$ is absolutely self pure, hence there is a $g : R \rightarrow L \oplus R$ extending $f$. 
Follow $g$ by the projection $L \oplus R \rightarrow L$ to get an extension of the identity map $L \rightarrow L$, which shows that $L$ is a direct summand of $R$. \end{proof} \end{theorem} Combining Theorems \ref{noethchar} and \ref{regchar} we get the well-known characterization that $R$ is semisimple (= regular and left noetherian) if and only if every $R$-module is quasi injective. By Theorem \ref{noethchar}, over the ring of integers $\mathbb Z$ the absolutely self pure modules are exactly the quasi injective ones. So any quasi injective abelian group that is not injective serves as an example of an absolutely self pure module which is not absolutely pure. Also by Theorems \ref{regchar} and \ref{noethchar}, if a ring $R$ is regular but not left noetherian then there must exist an absolutely self pure $R$-module that is not quasi injective. Another example is the following. \begin{example} We know that if $R$ is a regular but not a left noetherian ring then there must exist an absolutely pure $R$-module $A$ that is not quasi injective. Over the ring $\mathbb Z$ the module $\mathbb Z_2$ is quasi injective but not absolutely pure. Now let $R$ and $A$ be as above and consider the module $M=\left( {\begin{array}{c}A \\ \mathbb Z_2 \\ \end{array} } \right)$ over the ring $S=\left( {\begin{array}{cc} R & 0 \\ 0 & \mathbb Z \\ \end{array} } \right)$. The module $M$ is not quasi injective (nor absolutely pure), for otherwise the direct summand $\left( {\begin{array}{c}A \\ 0 \\ \end{array} } \right)$ would be quasi injective (respectively, the direct summand $\left( {\begin{array}{c}0 \\ \mathbb Z_2 \\ \end{array} } \right)$ would be absolutely pure). Now we proceed to show that $M$ is absolutely self pure. Any finitely generated left ideal of $S$ is of the form $K=\left( {\begin{array}{cc} I & 0 \\ 0 & J \\ \end{array} } \right)$, where $I$ (respectively $J$) is a finitely generated left ideal of $R$ (respectively $\mathbb Z$). 
Any map $K \rightarrow M$ is of the form $\left({\begin{array}{cc} f &0\\ 0&g \\ \end{array} } \right)$ where $f \in \Hom_R(I,A)$ and $g \in \Hom_{\mathbb Z}(J, \mathbb Z_2)$. To extend $\left({\begin{array}{cc} f &0\\ 0&g \\ \end{array} } \right)$ to $S$ is to extend both $f$ to $R$ and $g$ to $\mathbb Z$. The former is always possible because $I$ is a finitely generated left ideal of a regular ring, hence a direct summand. So we focus on extending $g$ to a map $\mathbb Z \rightarrow \mathbb Z_2$. But if $\ker (K \rightarrow M)$ contains the intersection of $\ann \left( {\begin{array}{c}a_i \\ z_i \\ \end{array} } \right)$ for some $\left( {\begin{array}{c}a_i \\ z_i \\ \end{array} } \right) \in M$, $i=1, \cdots, n$, which must be equal to $\left( {\begin{array}{cc} \cap \ann (a_i) & 0 \\ 0&\cap \ann (z_i) \\ \end{array} } \right)$, then we need only focus on $\cap \ann (z_i)$. The latter is equal to $2\mathbb Z$ or $\mathbb Z$ according to whether one of the $z_i$'s is $1$ or not. In any case we must have $2 \mathbb Z \subseteq \ker (g:J \rightarrow \mathbb Z_2)$. Hence either $J = \mathbb Z$ and we are done, or $J = 2\mathbb Z$ and therefore $g$ is the zero map. \end{example} A note about modules that are pure in their quasi injective envelopes is in order. Let us call them \emph{absolutely quasi pure} modules. It is obvious that this concept lies between quasi injectivity and absolute self purity. It is not clear whether every absolutely self pure module is absolutely quasi pure. Of course over regular rings the two concepts are equivalent, and over left noetherian rings they are also equivalent to quasi injectivity. If a module $A$ is absolutely quasi pure then so is any finite direct sum of copies of $A$. \bibliographystyle{amsplain}
\section{Introduction} With the installation of Wide Field Camera 3 (WFC3) on the {\em Hubble Space Telescope (HST)} in 2009, it is now possible to identify statistically useful and robust samples of star forming galaxies in the early Universe \citep[$z>4$,][]{Oesch2010,Bouwens2010b,Bunker2010,Wilkins2010,Finkelstein2010,McLure2010,Wilkins2011a,Lorenzoni2011,Bouwens2011,McLure2011,Finkelstein2012b,Lorenzoni2013,McLure2013,Duncan2014,Finkelstein2014}. In recent years a tremendous effort has been dedicated to quantifying the photometric and physical properties, such as star formation rates and stellar masses, of these galaxies. As we continue to dig deeper, with the first sources now identified at $z \approx 10$ \citep[e.g.][]{Oesch2012a,Ellis2013}, and with the launch of the {\em James Webb Space Telescope (JWST)} in the next few years, we will further be able to constrain the physics of galaxy formation and evolution in this critical epoch of the Universe's history. Although it lasts less than 0.8\,Gyr, the period of the Universe between $z=7$ and $z=4$ is important to study because it defines an epoch of interesting galaxy formation and evolution activity. The start of this period marks the end of the epoch of reionization; galaxies are starting to ramp up their metal and dust production; and we are finding evidence of the first quasars. While astronomy is unique in allowing us to observe the Universe at these early times, theoretical modelling is required to interpret those observations in terms of an evolving galaxy population. The rapidly advancing observational constraints on the physical properties of galaxies in the early Universe provide an opportunity to further test and refine these galaxy formation models. The most well studied property of the galaxy population at high-redshift (in part due to its accessibility) is the rest-frame ultraviolet (UV) luminosity function (LF). 
Because of the link between the UV luminosity of galaxies and their star-formation rates, the observed UV\,LF provides an important constraint on star-formation activity in the early Universe. While early observational results were based on only small samples \citep{Bouwens2008,Bouwens2010a,Bouwens2010b,Bunker2010,Oesch2009,Oesch2010,Ouchi2009,Wilkins2011b,Robertson2010,Dunlop2010,Lorenzoni2011}, we have slowly begun building larger catalogues, first with $200-500$ galaxies \citep{Finkelstein2010, Bouwens2011, McLure2013}, with the most recent observations having almost 1000 galaxies at $z\geq7$ \citep{Bouwens2015, Finkelstein2014}. While the intrinsic UV luminosity is known to be a useful diagnostic of star-formation activity \citep[e.g.~][]{Wilkins2012}, it is susceptible to even small amounts of dust ($A_{\rm UV}\approx 10\times E(B-V)$). Direct comparison of the observed UV luminosity function with predictions from galaxy formation models is then sensitive to the reliability of the dust model (which has to account for the creation and destruction of dust, and its effect on the intrinsic spectral energy distribution). Whilst challenging, it is observationally possible to constrain the dust obscuration and thus determine the true (or intrinsic) star formation activity, even in distant galaxies. Starlight that is absorbed by dust is reprocessed and emitted in the rest-frame mid/far-IR. Combining the star-formation rate inferred from the observed UV with that inferred from the mid/far-IR emission then provides a robust constraint on the total (or intrinsic) star-formation activity. Observational constraints on the rest-frame mid/far-IR emission in high-redshift galaxies are, however, challenging due to the significantly lower flux sensitivity and poorer spatial resolution of facilities operating at these wavelengths. Thus far there is only a single galaxy individually detected in the far-IR at $z>6$ \citep{Riechers2013}. 
This is, however, likely to rapidly improve with the completion of the {\em Atacama Large Millimetre Array (ALMA)}. One alternative to using far-IR/sub-mm observations is to take advantage of the relationship between the rest-frame UV continuum slope $\beta$, which is easily accessible even at $z\sim 10$ (Wilkins et al. {\em submitted}), and the UV attenuation \citep[first applied by][]{Meurer1999}. The measurement of $\beta$ in high-redshift galaxies has, in recent years, been the focus of intense study \citep[e.g.][]{Stanway2005,Bouwens2009,Bunker2010,Bouwens2010a,Wilkins2011b,Dunlop2012a,Bouwens2012,Finkelstein2012a,Castellano2012,Rogers2013,Wilkins2013,Bouwens2014}. Measurements of the UV continuum slope have been used to effectively correct the observed UV luminosity function and thus determine the star-formation-rate distribution function \citep[e.g.][]{Smit2012}. It is important to note, however, that this relation is sensitive to a number of assumptions \citep[see][]{Wilkins2012, Wilkins2013} which both introduce systematic biases and increase the scatter in individual observations. By combining space-based ({\em Hubble}) and ground-based near-IR observations ($< 2\,\mu$m) with observations from the {\em Infrared Array Camera (IRAC)} aboard the {\em Spitzer Space Telescope}, it is possible to probe the rest-frame UV to optical spectral energy distributions (SEDs) of galaxies at high redshift. This is critical to deriving robust stellar masses and thus the galaxy stellar mass function (GSMF). The measurement of stellar masses at high-redshift is, unfortunately, affected by various issues, including: the low sensitivity of the {\em IRAC} observations; assumptions regarding the star formation and metal enrichment history of these galaxies; and the effects of strong nebular emission \citep[e.g.][]{Wilkins2013}. 
Despite these obstacles, several groups have now attempted to measure the galaxy SMF in the high-redshift Universe \citep[e.g.][]{Stark2009, Labbe2010, Gonzalez2011, Yan2012, Duncan2014} permitting a direct comparison with galaxy formation models. The Munich semi-analytic model of galaxy formation \citep[latest version][]{Henriques2014}, also known as {\sc L-Galaxies}, has had a lot of success over the past decade in predicting various properties of galaxies, such as the stellar-mass and luminosity functions both in the local Universe and out to redshift $z=3$ \citep{Henriques2013}. In this paper we extend these predictions to higher redshift without altering any of the model parameters (except to modify the redshift-dependence of the dust model, as described in \S2.2.1 below). In that sense, the results presented here may be considered predictions of the model. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/smf_lowz.eps} \centering \caption{Predicted stellar mass functions at redshift $z \approx 0$ (top left); $z \approx 1$ (top right); $z \approx 2$ (lower left) and $z \approx 3$ (lower right). Solid black lines show the stellar mass functions predicted by our model. This figure is reproduced from \protect\cite{Henriques2014} Figures 2 and A1, and we direct the reader there for a more detailed description. We include it here to highlight how well the model works at lower redshifts in predicting key observables such as the SMF. 
Observations are taken from several surveys: SDSS \protect\citep{Baldry2008,Li2009} and GAMA \protect\citep{Baldry2012} at $z=0$; and \protect\cite{Marchesini2009}, Spitzer-COSMOS \protect\citep{Ilbert2010}, NEWFIRM \protect\citep{Marchesini2010}, COSMOS \protect\citep{Sanchez2011}, ULTRAVISTA \protect\citep{Muzzin2013, Ilbert2013} and ZFOURGE \protect\citep{Tomczak2014} at higher redshifts.} \label{fig:lowzsmf} \end{center} \end{figure*} This paper is structured as follows: in Section 2 we describe the relevant parts of our semi-analytical model, highlighting the changes in the latest version; in Section 3 we discuss our high redshift predictions for the star-formation-rate distribution function and UV luminosity function, followed by the stellar mass function in Section 4. In Section 5 we discuss the relationship between the specific star formation rate and the stellar mass and in Section 6 we give a brief overview of the evolution of these properties at high-redshift. We conclude our work in Section 7. Throughout this paper we adopt a Chabrier initial mass function \citep{Chabrier2003} and use the latest {\it Planck} cosmology \citep{PlanckCollaboration2014}. Number densities are presented per co-moving volume, ($h^{-1}$\,Mpc)$^3$. This paper makes use of detailed predictions from the new model of {\sc L-Galaxies} outlined in \cite{Henriques2014}, which have been made publicly available.\footnote{http://gavo.mpa-garching.mpg.de/MyMillennium/} The binned data used to make the plots in this paper have also been made available online.\footnote{http://astronomy.sussex.ac.uk/$\sim$sc558/} \section[]{The Model} \subsection{{\sc L-Galaxies}} Semi-analytic models (SAMs) provide a relatively inexpensive method of self-consistently evolving the baryonic components associated with dark matter merger trees, derived from $N$-body simulations or Press-Schechter calculations. 
The term semi-analytic refers to the use of coupled differential equations (rather than full numerical calculations) to follow the galaxy formation physics that determines the properties of gas and stars. The physics commonly found in most SAMs includes descriptions of: (1) primordial infall and the impact of an ionizing UV background; (2) radiative cooling of the gas; (3) star formation recipes; (4) metal enrichment; (5) super-massive black hole growth; (6) supernova and AGN feedback processes; (7) the impact of environment and mergers, including galaxy morphologies and quenching. The Munich SAM, or {\sc L-Galaxies} \citep{Springel2001,DeLucia2004, Springel2005, Croton2006,DeLucia2007, Guo2011, Guo2013,Henriques2013}, has been developed over the years to include most of the relevant processes that affect galaxy evolution. In this work we use its latest version, \cite{Henriques2014}, and direct the reader to the appendix of that paper for a detailed description of the model. Of most relevance to this paper are the adoption of the {\sc Planck} year 1 cosmology and a modified gas-to-dust relation, partly motivated by the work presented in this paper. The model parameters were constrained using the abundance and passive fractions of galaxies at $z\leq3$, and the model has successfully reproduced key observables at these redshifts, such as the luminosity and stellar mass functions. We highlight this fact in Figure~\ref{fig:lowzsmf}, which is a reproduction of the SMF at $z\in\{0,1,2,3\}$ from \cite{Henriques2014}. We direct the reader to Figures 2 and A1 and the related text of that paper for a more detailed discussion, but we highlight how well the model can explain the observed evolution in the SMF at these redshifts, over the mass range constrained by observers. \subsection{Dust Extinction Model} \label{sec:msamdust} Actively star-forming galaxies are known to be rich in dust. 
This can have a dramatic effect on their emitted spectrum since dust significantly absorbs optical/UV light while having a much milder effect at longer wavelengths. As a result, dust-dominated galaxies will generally have red colours even if they are strongly star-forming. For that reason, we summarise the dust model of \citet{Henriques2014} here: a fuller description can be found in Section~1.14 of the supplementary material in that paper. We consider dust extinction separately for the diffuse interstellar medium (ISM) and for the molecular birth clouds (BC) within which stars form. The optical depth of dust as a function of wavelength is computed separately for each component and then combined as described below. We do not at present attempt to compute the detailed properties of the dust particles or the re-emission of the absorbed light. \subsubsection{Extinction by the ISM} \label{sec:msamismdust} The optical depth of diffuse dust in galactic disks is assumed to vary with wavelength as \begin{align} \label{eq:msamextinctionism} \nonumber \tau_{\lambda}^{ISM}= & (1+z)^{-1}\left(\frac{A_{\lambda}}{A_\mathrm{V}}\right)_{Z_{\odot}} \left(\frac{Z_{\rm{gas}}}{Z_{\odot}}\right)^s \\ & \times \left(\frac{\langle N_H\rangle}{2.1 \times10^{21}{\rm{atoms}}\,{\rm{cm}}^{-2}}\right), \end{align} where \begin{equation} \label{eq:msamhcolumndensity} \langle N_H\rangle=\frac{M_{\rm{cold}}}{1.4\,m_p\pi (a R_{\rm{gas,d}})^2} \end{equation} is the mean column density of hydrogen. Here $R_{\rm{gas,d}}$ is the cold gas disk scale-length, $1.4$ accounts for the presence of helium and $a=1.68$ in order for $\langle N_H\rangle$ to represent the mass-weighted average column density of an exponential disk. 
Following the results in \citet{Guiderdoni1987}, the extinction curve in eq.~(\ref{eq:msamextinctionism}) depends on the gas metallicity and is based on an interpolation between the Solar Neighbourhood and the Large and Small Magellanic Clouds: $s=1.35$ for $\lambda<2000$ \AA $\:$ and $s=1.6$ for $\lambda>2000$ \AA. The extinction curve for solar metallicity, $(A_{\lambda}/A_\mathrm{V})_{Z_{\odot}}$, is taken from \citet{Mathis1983}. The redshift dependence in eq.~(\ref{eq:msamextinctionism}) is significantly stronger than in previous versions of our model ($(1+z)^{-0.5}$ in \citet{Kitzbichler2007} and $(1+z)^{-0.4}$ in \citet{Guo2009}). The dependence implies that for the same amount of cold gas and the same metal abundance, there is less dust at high redshift. The motivation comes both from observations \citep{Steidel2004, Quadri2008} and from the assumption that dust is produced by relatively long-lived stars. However, it may also be that this redshift dependence has to be introduced as a phenomenological compensation for the excessively early build-up of the metal content in model galaxies. In practice it has been included simply to give an approximate match to the low extinctions of high-redshift galaxies as inferred from their observed UV slopes \citep{Bouwens2012}, and to the UV luminosity function, as described below. \subsubsection{Extinction by molecular birth clouds} \label{sec:msammoleculardust} This second source of extinction affects only young stars that are still embedded in their molecular birth clouds, for which we assume a lifetime of 10\,Myr. The relevant optical depth is taken to be \begin{equation} \label{eq:msamextinctionclouds} \tau_{\lambda}^{BC}=\tau_{\lambda}^{\mathrm{ISM}}\left(\frac{1}{\mu}-1\right) \left(\frac{\lambda}{5500\mathrm{\AA}}\right)^{-0.7}, \end{equation} where $\mu$ is given by a random Gaussian deviate with mean 0.3 and standard deviation 0.2, truncated at 0.1 and 1. 
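As a concrete numerical sketch, the two optical depths above can be evaluated as follows. This is only an illustration of eqs.~(\ref{eq:msamextinctionism})--(\ref{eq:msamextinctionclouds}) in cgs units, not the actual {\sc L-Galaxies} code; in particular, the default extinction-curve value and the handling of the random deviate $\mu$ are our own simplifications.

```python
import numpy as np

M_P = 1.673e-24   # proton mass [g]
NH_REF = 2.1e21   # reference hydrogen column density [atoms cm^-2]

def mean_nh(m_cold, r_gas):
    """Mass-weighted mean H column density of an exponential disk:
    m_cold is the cold-gas mass [g], r_gas the gas disk scale-length [cm]."""
    a = 1.68
    return m_cold / (1.4 * M_P * np.pi * (a * r_gas) ** 2)

def tau_ism(lam, z, z_gas, m_cold, r_gas, ext_curve=1.0):
    """Diffuse-ISM optical depth at wavelength lam [Angstrom].
    z_gas is the gas metallicity in solar units; ext_curve stands in for the
    solar-metallicity (A_lambda/A_V) value (a placeholder, not the actual
    Mathis et al. 1983 curve)."""
    s = 1.35 if lam < 2000.0 else 1.6          # metallicity exponent
    return ((1.0 + z) ** -1 * ext_curve * z_gas ** s
            * mean_nh(m_cold, r_gas) / NH_REF)

def tau_bc(tau_ism_val, lam, rng=None):
    """Birth-cloud optical depth; mu is a Gaussian deviate with mean 0.3 and
    standard deviation 0.2, truncated here by simple clipping at 0.1 and 1."""
    rng = np.random.default_rng() if rng is None else rng
    mu = float(np.clip(rng.normal(0.3, 0.2), 0.1, 1.0))
    return tau_ism_val * (1.0 / mu - 1.0) * (lam / 5500.0) ** -0.7
```

For example, doubling $(1+z)$ at fixed gas mass and metallicity halves $\tau_\lambda^{\rm ISM}$, which is the sense in which the model assigns less dust at high redshift.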
\subsubsection{Overall extinction curve} \label{sec:msamdustgeometry} In order to get the final overall extinction, every galaxy is assigned an inclination, $\theta$, given by the angle between the disk angular momentum and the $z$-direction of the simulation box, and a ``slab'' geometry is assumed for the dust in the diffuse ISM. For sources uniformly distributed within the disk, the mean absorption coefficient is \begin{equation} \label{eq:msamextinctionlambda} A_{\lambda}^\mathrm{ISM}=-2.5\log_{10}\left(\frac{1-e^{-\tau_{\lambda}^\mathrm{ISM}\sec{\theta}}} {\tau_{\lambda}^\mathrm{ISM}\sec{\theta}}\right). \end{equation} Emission from young stars embedded within birth clouds is subject to an additional extinction of \begin{equation} \label{eq:msamextinctionlambda2} A_{\lambda}^\mathrm{BC}=-2.5\log_{10}\left(e^{-\tau_{\lambda}^\mathrm{BC}}\right). \end{equation} The standard {\sc L-Galaxies} output does not attempt to model the attenuation of light by the intergalactic medium. However, this is done in post-processing for the lightcones published in the Millennium Run Observatory\footnote{The Millennium Run Observatory, or MRObs, allows the user to observe our semi-analytic galaxy formation model through the use of `virtual telescopes'.}~\citep{Overzier2013}. In this paper, however, we neglect intergalactic attenuation. \section{Recent Star Formation} \label{sec:sfr} In this section we investigate the star formation rate, and the related UV luminosity function, at redshifts $z\in\{4,5,6,7\}$. Figure~\ref{fig:sfr} shows the star-formation-rate distribution function (SFR\,DF) as predicted by our model alongside measurements from \cite{Smit2012} (converted to our fiducial Chabrier IMF) and \cite{Duncan2014}. Comparing with the \cite{Smit2012} measurements at redshifts $z\approx 5-7$ we find generally good agreement. At these redshifts, the \cite{Duncan2014} measurements are generally higher than both the model and the \cite{Smit2012} results. 
This is particularly true for the most massive galaxies, though we note that the quoted observational uncertainties can be very large. At $z\approx4$, however, our model under-predicts the number of galaxies with $\log_{10}(\mathrm{SFR}/h^{-2}\,\mbox{M$_\odot$}\,\mathrm{yr}^{-1})<1$ when compared to both sets of observations (which are consistent with one another at this redshift). The cause of the discrepancy is unclear, though it may be a consequence of our model under-estimating the contribution to the SFR from merger-driven activity. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/sfr.eps} \centering \caption{Predicted star formation rate distribution functions at redshift $z \approx 4$ (top left); $z \approx 5$ (top right); $z \approx 6$ (lower left) and $z \approx 7$ (lower right). In each instance we use the closest available snapshot from our {\sc L-Galaxies} run of $z=3.95$, 5.03, 5.82 and 6.97, respectively. Solid black lines show the star formation rate distribution function predicted by our model. Our $z=4$ star formation rate distribution function is repeated at higher redshifts as a grey dot-dash line for comparison. Observations are taken from \protect\cite{Smit2012}, converted to a Chabrier IMF, and \protect\cite{Duncan2014}.} \label{fig:sfr} \end{center} \end{figure*} \subsection{The UV Luminosity Function} \label{sec:uvlf} We present the UV luminosity function predicted by our model in Figure \ref{fig:uvlf} alongside recent observational estimates at high-redshift \citep{Bouwens2015,Duncan2014,Finkelstein2014,Bowler2014a,Bowler2014b}. The solid black line shows our prediction for the attenuated UV luminosity function; the attenuated UV LF at $z=4$ is also shown on subsequent plots for comparison. The dashed line shows our intrinsic UV luminosity function, with no dust model applied. 
\begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/uvlf.eps} \centering \caption{Predicted rest--frame (1500\,\AA) UV luminosity functions at redshift $z \approx 4$ (top left); $z \approx 5$ (top right); $z \approx 6$ (lower left) and $z \approx 7$ (lower right). In each instance we use the closest available snapshot from our {\sc L-Galaxies} run of $z=3.95$, 5.03, 5.82 and 6.97, respectively. Solid black lines show the {\sc L-Galaxies} prediction for the attenuated UV luminosity function using the dust extinction model outlined in \S2.2.1. The dashed black line is the {\sc L-Galaxies} prediction of the intrinsic UV luminosity function, with no dust model applied. Our $z=4$ attenuated UV luminosity function is repeated at higher redshifts as a grey dot-dash line for comparison. Observations are taken from \protect\cite{Bouwens2015}, \protect\cite{Duncan2014} and \protect\cite{Finkelstein2014}, and at high mass from \protect\cite{Bowler2014a} ($z=6$) and \protect\cite{Bowler2014b} ($z=7$).} \label{fig:uvlf} \end{center} \end{figure*} We find a good fit to the faint number counts: $M_\mathrm{UV}>-20$ for $z=4-6$ and $M_\mathrm{UV}>-19$ for $z=7$. At brighter absolute magnitudes, the model counts fall below the observed ones. Note, however, that the raw counts, before dust attenuation, lie above the observations. Given the good fit between predicted and observed SFRs in Section~\ref{sec:sfr}, this points to a difference in the dust attenuation between the two. To better understand this, we quantify in Figure~\ref{fig:dustatt} the attenuation required (as a function of the intrinsic UV absolute magnitude) to reconcile the raw {\sc L-Galaxies} data with observations. 
We do this by comparing the observed, $M_{\Phi,\mathrm{obs}}$, and intrinsic, $M_{\Phi,\mathrm{int}}$, absolute magnitudes below which we achieve a particular cumulative number density, $\Phi$, of galaxies: \begin{equation} \Phi = \int_{-\infty}^{M_\Phi}\phi\,\mathrm{d}M, \label{eq:Phi} \end{equation} where $\phi$ is the usual differential number density of galaxies. The attenuation is then $A_\mathrm{UV}=M_{\Phi,\mathrm{obs}}-M_{\Phi,\mathrm{int}}$. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/dustatt.eps} \centering \caption{This figure shows the amount of dust attenuation required to move our intrinsic UV luminosity function (the dashed, black lines in Figure~\protect\ref{fig:uvlf}) to match different observational data sets \protect\citep{Bouwens2015, Duncan2014, Finkelstein2014}, as a function of unattenuated absolute UV magnitude. The solid, black line shows the attenuation built into the {\sc L-Galaxies} model as described in Section~\ref{sec:msamdust}.} \label{fig:dustatt} \end{center} \end{figure*} The dust attenuation required to match the observations (as a function of the intrinsic absolute magnitude) is shown in Figure~\ref{fig:dustatt}. The black, solid line shows the attenuation currently implemented in {\sc L-galaxies}, as described in Section~\ref{sec:msamdust}. As expected, the built-in attenuation matches that from the \citet{Duncan2014} data fairly well. The other data sets show a shallower slope: the attenuation is reasonable, perhaps even under-estimated in the faintest galaxies, but is strongly over-estimated in the brightest galaxies, and increasingly so at high redshift. It is important to stress that while we are presenting the results for all the objects within our simulation, observational samples \citep[such as those employed by][]{Bouwens2015, Duncan2014, Finkelstein2014} are biased and may not truly capture the full galaxy population at these redshifts. 
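As an illustrative sketch (not the paper's actual analysis code), the cumulative-number-density matching of equation~(\ref{eq:Phi}) can be written as follows; the power-law luminosity functions are placeholders, chosen purely so the toy check is self-contained:

```python
import numpy as np

def m_at_density(mags, phi, target):
    """Magnitude M_Phi at which the cumulative number density
    Phi = int_{-inf}^{M_Phi} phi dM reaches `target` (Eq. 1)."""
    cum = np.cumsum(phi) * (mags[1] - mags[0])  # cumulative number density
    return float(np.interp(target, cum, mags))  # invert Phi(M) -> M_Phi

def attenuation(mags, phi_int, phi_obs, target):
    """A_UV = M_Phi,obs - M_Phi,int at a fixed cumulative density."""
    return (m_at_density(mags, phi_obs, target) -
            m_at_density(mags, phi_int, target))

# Toy check: dim a schematic power-law LF by 1 mag and recover ~1 mag.
mags = np.linspace(-24.0, -16.0, 801)             # absolute UV magnitudes
phi_int = 1e-3 * 10**(0.6 * (mags + 21.0))        # illustrative intrinsic LF
phi_obs = 1e-3 * 10**(0.6 * (mags - 1.0 + 21.0))  # same LF, dimmed by 1 mag
a_uv = attenuation(mags, phi_int, phi_obs, target=1e-3)
```

With these toy inputs the recovered attenuation is close to 1\,mag; the small residual comes from truncating the cumulative integral at the bright end of the magnitude grid.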
Indeed, a defining characteristic of the Lyman break technique, which is regularly used to identify galaxies in the high redshift universe, is that it preferentially selects blue rest-frame UV bright sources, i.e.\ star-forming galaxies with low UV dust attenuation ($A_\mathrm{UV}<2$). Very dusty galaxies, or those with little to no star formation, would then be missed in typical Lyman break galaxy searches \citep[e.g.\ HFLS3, a very dusty, intensely star-forming galaxy at $z\approx 6.3$,][]{Riechers2013}. The degree to which this is a concern at high redshift is difficult to assess, largely due to the lack of sensitive far-IR and sub-mm imaging, which is critical to identify heavily obscured systems. Given the current observational uncertainties, we conclude that the simple, empirical dust extinction model currently built into {\sc L-galaxies} does a reasonable job, although it could be refined to match particular data sets if required. In the future, we intend to implement a more physically-motivated dust model: we note that the current model assumes prompt recycling, and this could be an issue at these early times, when the age of the Universe is just 1.5\,Gyr at $z=4$ and less than 1\,Gyr for $z>6$. A delayed chemical enrichment model has been implemented in {\sc L-Galaxies} by \citet{Yates2013} and we intend to incorporate that into the \citet{Henriques2014} model in future work. \section{Galaxy Stellar Mass Function} We present the Galaxy Stellar Mass Function (GSMF) at $z\in\{4,5,6,7\}$ predicted by our model in Figure \ref{fig:smf} alongside recent observational estimates at high redshift from \cite{Gonzalez2011} and \cite{Duncan2014}. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/smf_highz.eps} \centering \caption{Predicted stellar mass functions at redshift $z \approx 4$ (top left); $z \approx 5$ (top right); $z \approx 6$ (lower left) and $z \approx 7$ (lower right). 
In each instance we use the closest available snapshot from our {\sc L-Galaxies} run of z=3.95, 5.03, 5.82 and 6.97 respectively. Solid black lines show the stellar mass functions predicted by our model. To indicate the possible effect of uncertainties in the observational stellar mass determinations, we also show as a red dot-dash line the stellar mass function convolved with a Gaussian of standard deviation 0.3\,dex. Our $z=4$ stellar mass function is repeated at higher redshifts as a grey dot-dash line for comparison. Observations are taken from \protect\cite{Gonzalez2011}, converted to a Chabrier IMF, and \protect\cite{Duncan2014}.} \label{fig:smf} \end{center} \end{figure*} It is important to first note that the observationally-derived mass functions presented in Figure \ref{fig:smf} are inconsistent with each other at $z\sim 4-5$. One possible source of this discrepancy (see \cite{Duncan2014} for a wider discussion) is the effect of nebular emission, which was included in \cite{Duncan2014} but not in \cite{Gonzalez2011}. Galaxies in the high-redshift Universe are expected \citep{Wilkins2013} and inferred \citep[e.g.][]{Smit2014} to exhibit strong nebular emission, which can strongly affect the measured stellar mass-to-light ratios and thus masses \citep{Wilkins2013}. The accuracy and precision of stellar mass estimates are also affected by the lower sensitivity and angular resolution of the {\em Spitzer}/IRAC imaging. Given the above observational uncertainties, it is gratifying that the model predictions split the two observational measurements at $z=4$. There is a hint that the change in slope at the ``knee'' of the mass function ($M_\mathrm{knee}\approx3\times10^9h^{-2}$\,\mbox{M$_\odot$}) may be sharper in the model than in the observations, but the observational error bars are growing at this point and so it is hard to draw firm conclusions. 
As we move to higher redshifts, however, the model predictions and the observations gradually diverge, as follows: (i) the normalisation at $M_\mathrm{knee}$ declines more rapidly with increasing redshift in the models than in the observations; (ii) the slope of the mass function above the knee is steeper in the models than in the observations. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/ssfr.eps} \centering \caption{Predicted specific star formation rates ($sSFR = SFR/M$) at redshift $z \approx 4$ (top left); $z \approx 5$ (top right); $z \approx 6$ (lower left) and $z \approx 7$ (lower right). In each instance we use the closest available snapshot from our {\sc L-Galaxies} run of z=3.95, 5.03, 5.82 and 6.97 respectively. The histogram density plot represents the {\sc L-Galaxies} galaxy population, with white representing the most dense regions and blue the least. The solid line shows the {\sc L-Galaxies} median values, and the dashed lines show the 0.16 and 0.84 percentiles. The observations are taken from \protect\cite{salmon2015}; the points denote the median, while the error bars reflect the scatter in the observed values (not the uncertainty on the median).} \label{fig:ssfr} \end{center} \end{figure*} The exact cause of these discrepancies is difficult to assess. One possibility is that they reflect a deficiency in the model; on the other hand, they may reflect a systematic bias in the observations. This has been discussed at low redshift ($z=0-3$) in Appendix~C of \citet{Henriques2013}. It seems probable that the uncertainties on the individual stellar masses could have been underestimated, which can strongly boost the inferred number of galaxies in regions where the mass function is particularly steep. As an example of the possible magnitude of this effect, we show in Figure~\ref{fig:smf} the result of convolving the predicted mass function with a Gaussian of standard deviation 0.3\,dex, similar to that required at low redshift. 
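A minimal sketch of this convolution step follows, using a toy Schechter-like mass function whose parameters are purely illustrative (they are not those of the model):

```python
import numpy as np

def scatter_mass_function(logm, phi, sigma_dex):
    """Convolve a binned mass function phi(log10 M) with a Gaussian of
    width sigma_dex, mimicking log-normal errors in the stellar masses."""
    dlogm = logm[1] - logm[0]
    # Gaussian kernel sampled on the same grid spacing, out to 5 sigma
    x = np.arange(-5 * sigma_dex, 5 * sigma_dex + dlogm, dlogm)
    kernel = np.exp(-0.5 * (x / sigma_dex) ** 2)
    kernel /= kernel.sum()               # normalise the kernel weights
    return np.convolve(phi, kernel, mode='same')

# Toy Schechter-like mass function, binned in log10(M/Msun)
logm = np.arange(7.0, 12.0, 0.01)
m = 10 ** (logm - 9.5)                   # mass in units of the knee mass
phi = 1e-3 * m ** (-0.6) * np.exp(-m)    # illustrative parameters only
phi_scattered = scatter_mass_function(logm, phi, sigma_dex=0.3)
```

Above the knee, where $\phi$ falls off exponentially, the scattered version lies well above the input: the scatter preferentially moves the numerous lower-mass galaxies upward (Eddington bias), which is exactly the effect invoked in the text.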
This largely reconciles the observed and predicted slopes of the mass function, but the normalisation remains too low at $z=7$. Understanding the source of this discrepancy is the focus of additional work in progress (Wilkins et al. {\em in-prep}). Recent hydrodynamic simulations, particularly {\sc Illustris} \citep{Vogelsberger2014} and {\sc Eagle} \citep{Schaye2015}, have begun making predictions of observables at high redshift \citep{Genel2014, Furlong2014}. Like {\sc L-Galaxies}, both {\sc Illustris} and {\sc Eagle} make predictions at high redshift using only observational constraints at lower redshift. Both simulations are similar to ours in their predictions of the GSMF at $z=6-7$, in that all three under-predict the abundance of high-mass galaxies ($>10^9 \mbox{M$_\odot$}$) at these redshifts, although {\sc Eagle} matches the observations better at $z=5$ across the entire mass range. Whilst both {\sc L-Galaxies} and {\sc Eagle} reproduce the shape of the observations, finding particularly good agreement with the slope and abundance for low-mass galaxies, {\sc Illustris} predicts a slope that steepens with increasing redshift faster than observed, and over-predicts the abundance of low-mass galaxies at all redshifts. \section{Specific Star Formation Rate} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/evolution.eps} \centering \caption{Evolution of the stellar mass function (top left); star formation rate distribution function (top right); UV luminosity function (lower left); and specific star formation rate (lower right) over the redshift range $z= 4 - 7$. In each instance we use the closest available snapshot from our {\sc L-Galaxies} run of z=3.95, 5.03, 5.82 and 6.97 respectively.} \label{fig:evolution} \end{center} \end{figure*} The specific star formation rate (sSFR) is a measure of how quickly a galaxy is forming its stars. 
We present the sSFRs of our galaxy population at $z\in\{4,5,6,7\}$, as predicted by our model, in Figure \ref{fig:ssfr} alongside recent observational measurements from \cite{salmon2015}. We represent the sSFR of individual galaxies by a 2D histogram; the solid line shows the median value predicted by our model, averaged over bins of 100 or more galaxies. The observations are consistent with our model, particularly for galaxies of mass $M\approx 10^9$\,\mbox{M$_\odot$}, across all redshifts. However, the observations show a decline in the sSFR with increasing galactic stellar mass, and we do not identify the same trend. Instead, all galaxies in our model have roughly the same level of activity, regardless of galactic stellar mass. This discrepancy is not surprising: given that the models match the observed SFR but under-predict the stellar masses of the largest galaxies, we would expect this result. The question remains as to whether the observations or the model is at fault, or a combination of both. We could boost AGN feedback in the most massive galaxies in the model, but this would then reduce the bright end of the UV\,LF. Alternatively, as hinted in the previous section, the inferred masses of the highest-mass galaxies may have been boosted by observational scatter. At lower masses, there will be an observational bias towards the brightest galaxies, and so the median sSFR may be over-estimated. \section{Evolution} To make it easier to see how the properties of our model galaxies change with time, we extract the model predictions from each of the four redshifts shown in Figures~\ref{fig:sfr}, \ref{fig:uvlf}, \ref{fig:smf} \& \ref{fig:ssfr} and display them in single panels in Figure~\ref{fig:evolution}. Concentrating first on the SFR (upper-left panel), we see that the knee of the distribution remains relatively unchanged, at about $20h^{-2}$\mbox{M$_\odot$}\,yr$^{-1}$, over this period. 
However, the normalisation of the relation grows and the slope decreases, such that the comoving number density of galaxies with star-formation rates of $0.3h^{-2}$\mbox{M$_\odot$}\,yr$^{-1}$ is approximately constant, while that of galaxies with star-formation rates in excess of $100h^{-2}$\mbox{M$_\odot$}\,yr$^{-1}$ grows by several orders of magnitude. As might be expected, a similar, but less pronounced, trend is seen in the UV\,LF, although the knee of the distribution is harder to discern. In contrast to the SFR\,DF, the galactic SM\,DF shows only a slight reduction in slope from $z=7$ to $z=4$. Consequently, the comoving number density of low-mass galaxies increases by about 1\,dex over this time. This is reflected in the specific star-formation rate, which declines by about 0.5\,dex over the same period (as the sSFR is approximately independent of mass, this conclusion holds for individual galaxies, not just the population).\footnote{The age of the Universe roughly doubles over this period; thus the sSFR measured in terms of this age shows much less variation, and even at $z=4$ is sufficient to double the mass of a galaxy in less than a quarter of the age at that time.} \section{Conclusions} We have presented predictions for the high-redshift star-formation-rate distribution function (SFR\,DF), UV luminosity function (UV\,LF), galactic stellar mass function (GSMF) and specific star-formation rates (sSFRs) of galaxies from the latest version of the {\sc L-Galaxies} semi-analytic model \citep{Henriques2014}. Our conclusions are as follows: \begin{enumerate}[(i)] \item We find a good fit to both the shape and normalization of the observed SFR\,DF at $z=4-7$ (Figure~\ref{fig:sfr}), apart from a slight under-prediction at the low-SFR end at $z=4$, possibly caused by a lack of SFR contribution from merger-driven activity in our model. \item We find a good fit to the faint number counts for the observed UV\,LF (Figure~\ref{fig:uvlf}). 
At brighter magnitudes, our predictions lie below the observations, increasingly so at higher redshifts. \item At all redshifts and magnitudes, the raw (unattenuated) number counts for the UV\,LF lie above the observations, and so we interpret our under-prediction as an over-estimate of the amount of dust in the model for the brightest galaxies, especially at high redshift (Figure~\ref{fig:dustatt}). \item While the shape of our SMF matches that of the observations, we lie between the observations at $z=4-5$ and under-predict at $z=6-7$ (Figure~\ref{fig:smf}). We note, however, that the two sets of observations are inconsistent with one another and, at times, have large errors attached to them. \item The sSFRs of our model galaxies (Figure~\ref{fig:ssfr}) show the observed trend of increasing normalisation with redshift, but do not reproduce the observed mass dependence, indicating instead that galaxies of all masses have the same level of activity. It is unclear whether this is caused by observational bias, or by an under-estimate of AGN feedback in the model. \end{enumerate} In summary, the {\sc L-Galaxies} model has mixed success in reproducing observations at high redshift. It provides a reasonable match to both the SFR\,DF and the low-mass end of the SMF, but fails to show the observed mass-dependence of the sSFR. The predicted UV\,LF is highly dependent upon an ad hoc scaling with redshift of the dust model. In \cite{Yates2013} we added a detailed model of chemical enrichment to {\sc L-Galaxies}, incorporating delayed enrichment from stellar winds and supernovae, as well as metallicity-dependent yields and the tracking of eleven heavy elements (including O, Mg and Fe). This shows promising results in reproducing the mass-metallicity relation at $z=0$, although the chemical enrichment at high redshift remains untested and is something we will look at in the future. 
That then will provide a more realistic prediction of the metallicity of galaxies at early times. In future work, we will also add a physically-motivated model for dust growth and destruction, and consider the effect of extinction from the inter-galactic medium. \section*{Acknowledgments} The authors would like to thank the anonymous referee for valuable comments and suggestions that helped us to improve this paper. We would also like to thank Kenneth Duncan, Brett Salmon and Renske Smit for providing us with their observational data, in some cases in advance of publication, and Chaichalit Srisawat and David Sullivan for their advice and assistance. The authors contributed in the following way to this paper. SJC undertook the vast majority of the data analysis and produced the figures; he also provided a first draft of the paper. PAT \& SMW jointly supervised SJC and led the interpretation of the results. BMBH provided expertise on the interpretation of the {\sc L-Galaxies} model. The model data used in this paper was generated on the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility ({\tt www.dirac.ac.uk}). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1 and Durham University. Much of the data analysis was undertaken on the {\sc Apollo} cluster at Sussex University. SJC acknowledges the support of his PhD studentship from the Science and Technology Facilities Council (STFC). PAT \& SMW acknowledge support from the Science and Technology Facilities Council (grant number ST/L000652/1). BMBH was supported by Advanced Grant 246797 ``GALFORMOD'' from the European Research Council. \bibliographystyle{mn2e}
\section{Introduction} \label{sec:introduction} The term ``luminous blue variable'' (LBV) was first used by Conti (\cite{con84}) and referred to hot, luminous, massive variable stars that are evolved, but are not Wolf-Rayet (WR) stars. Nowadays, LBVs, also known as S Doradus variables, are considered to be massive evolved stars mainly characterized by a) high luminosity, $\sim10^{6}\ \mathrm{L}_{\odot}$; b) photometric variability with amplitudes from $\sim$0.1 mag (small oscillations) up to $\geq 2\ \mathrm{mag}$ (giant eruptions); and c) high mass-loss rates, $\sim10^{-5} - 10^ {-4}\ \mathrm{M}_{\odot}\ \mathrm{yr}^{-1}$ (Humphreys \& Davidson \cite{hum94}). Their location on the Hertzsprung--Russell (HR) diagram is in the upper left part, although some of them undergo occasional excursions to the right. An early-type O star with initial mass $\ge 30\ \mathrm{M}_{\odot}$ evolves to a WR star by losing a significant fraction of its initial mass. Luminous blue variables represent a short stage ($\sim10^{4} - 10^{5}\ \mathrm{yr}$) in this evolutionary path according to current evolutionary scenarios (Maeder \& Meynet \cite{maed10}). Although stellar winds can be responsible for stellar mass-loss, the mass-loss rates of O stars have been revised downward in the past few years (Bouret et al. \cite{bouret}; Fullerton et al. \cite{fullerton}; Puls et al. \cite{puls}). Consequently, episodes of extreme mass-loss during an intermediate evolutionary phase, such as an LBV or a red supergiant (RSG) phase, are now thought to play a key role, which is why the study of LBVs and their circumstellar environments is crucial for understanding massive star evolution. Such extreme mass-loss leads to the formation of ejected nebulae, which have been observed around many LBVs (Hutsem\'{e}kers \cite{hut94}; Nota et al. \cite{nota95}). 
They are classified into three categories according to their morphology: shell nebulae, filamentary nebulae and peculiar morphologies (Nota et al. \cite{nota95}). The study of these circumstellar environments can reveal the mass-loss history of the central star, since they are formed by the material that has been ejected from the central star in a previous evolutionary phase. Dust and molecular gas (CO) have been revealed by infrared and millimeter studies of LBV nebulae (McGregor et al. \cite{mcgr88}; Hutsem\'{e}kers \cite{hut97}; Nota et al. \cite{nota02}). Some LBVs are surrounded by more than one nebula. This is the case for the LBV G79.29+0.46: near-infrared and millimeter data analyzed by Jim\'{e}nez-Esteban et al. (\cite{jim10}) revealed multiple shells around this star. Infrared observations by the $\textit{Herschel}$ Space Observatory (Pilbratt et al. \cite{pilbratt}) revealed a second nebula around the LBV WRAY 15-751 (Vamvatira-Nakou et al. \cite{vamv13}). AG Car (=HD 94910 =IRAS 10541-6011) is a well-studied prototypical LBV. Its variability was first discovered by Wood (\cite{wood14}). It was first classified as a P Cygni star by Cannon (\cite{can16}) and finally classified as an LBV by Humphreys (\cite{hum89_1}). Numerous studies show that this star exhibits photometric and spectroscopic variability. In the optical V-band, the photometric changes during the S Dor cycle are about 2 mag on a timescale of 5-10 years (Stahl \cite{sta86_a}; van Genderen et al. \cite{van88}; Leitherer et al. \cite{lei92}; Sterken et al. \cite{ster96}). In addition, smaller variations of 0.1-0.5 mag on a timescale of about 1 year were discovered (van Genderen et al. \cite{van97}). During the periods of visual minimum, AG Car has the spectrum of a WR star, with an Ofpe/WN9 spectral type according to Stahl (\cite{sta86_a}) and WN11 according to a more recent study by Smith et al. (\cite{smith94}). 
During the periods of visual maximum, AG Car's spectrum corresponds to that of an early-A hypergiant (Wolf \& Stahl \cite{wol82}, Stahl et al. \cite{sta01}). Humphreys et al. (\cite{hum89_2}) concluded that the distance to AG Car is 6 $\pm$ 1 kpc, based on the calculated kinematic distance and on the observed variation of the interstellar extinction with distance. This result was confirmed by Hoekzema et al. (\cite{hoek92}), based again on the extinction versus distance relation. Stahl et al. (\cite{sta01}) suggested a slightly lower distance of 5-6 kpc based on their calculations of the heliocentric systemic velocity of AG Car (10 $\pm$ 5 km s$^{-1}$), which is compatible with the value of Humphreys et al. (\cite{hum89_2}) considering the errors. Groh et al. (\cite{gro09}) calculated a similar systemic velocity. Consequently, the value of 6 $\pm$ 1 kpc that encompasses all measurements is adopted for all calculations in this study. Lamers et al. (\cite{lam89}) calculated $\log L/L_{\odot}$ = 6.2 $\pm$ 0.2 for the luminosity of AG Car and showed that it remains constant during the light variations of the star, as was also found for other LBVs (R71: Wolf et al. \cite{wol81}; R127: Stahl \& Wolf \cite{sta86}). Later on, Leitherer et al. (\cite{lei94}), in their study of the stellar wind of AG Car, found a slightly lower bolometric luminosity of $\log L/L_{\odot}$ = 6.0 $\pm$ 0.2 based on ultraviolet observations combined with visual and near-infrared photometry. They also confirmed the nonvariability of the bolometric luminosity during the S Dor cycle. Given such a high value of the bolometric luminosity, AG Car is well above the Humphreys-Davidson limit (Humphreys \& Davidson \cite{hum79}), the limit above which a massive star becomes unstable and high mass-loss episodes take place. However, in a recent study of the fundamental parameters of AG Car during visual minima, Groh et al. (\cite{gro09}) concluded that the bolometric luminosity of AG Car does change during the S Dor cycle. 
They obtained a maximum value of the bolometric luminosity during minimum phase of $\log L/L_{\odot}$ = 6.18, with a variation amplitude of $\Delta(\log L/L_{\odot}) \sim$ 0.17 dex. This luminosity variation lies inside the limits of the previously calculated values considering the errors. In all these studies, the distance 6 $\pm$ 1 kpc was used. Thackeray (\cite{thak50}) discovered that AG Car is surrounded by a nebulous shell that has the shape of an elliptical ring with nonuniform intensity. He measured the size of the nebula to be 39$\arcsec\times$ 30$\arcsec$ and the width of the ring to be about 5$\arcsec$. Stahl (\cite{sta87}) studied direct CCD imaging data of AG Car and suggested that very likely the nebula around this star is the result of a heavy mass-loss episode that took place during an S Dor outburst and not the result of interstellar material that was swept up by the stellar wind. Smith (\cite{smith91}) studied the dynamics of AG Car nebula based on spectroscopic observations in the optical waveband. She measured an average nebular expansion velocity of 70 km s$^{-1}$ and concluded that the shell expansion is roughly symmetrical. She also reported the presence of a jet-like bipolar mass outflow, expanding with a velocity of 83 km s$^{-1}$ and distorting the northeastern edge of the shell. Nota et al. (\cite{nota92}) studied the nebula around AG Car using high-resolution coronographic imaging and spectroscopic data so as to constrain its geometry. They confirmed the value of the expansion velocity of Smith (\cite{smith91}) and concluded that this nebular shell shows a deviation from spherical symmetry, based on the observed radial velocity variations and on the gas distribution observed in the images. Voors et al. (\cite{voo00}) studied AG Car in the infrared waveband by modeling ground-based infrared images taken at about 10\ \mbox{$\mu$m} and ISO spectroscopic observations, from which they derived the properties of the circumstellar dust. 
The dust shell is detached and slightly elongated. The ionized gas appears co-spatial with the dust. Polycyclic aromatic hydrocarbons (PAHs) are present. The dust shell contains mostly large grains, although very large grains are also present, along with a population of small, warm grains out of thermal equilibrium that produce continuum and PAH band emission. Duncan and White (\cite{dun02}) observed the AG Car nebula at radio wavelengths (3 and 6 cm). Their 3 cm wavelength radio image revealed a nebula with a detached ring shape, very similar to the morphology in the H$\alpha$+$[\ion{N}{ii}]$ filter (see Sect.~\ref{sec:morphology of the nebula}). Nota et al. (\cite{nota02}) detected $^{12}$CO $J=1\rightarrow0$ and $J=2\rightarrow1$ emission from AG Car for the first time. The CO line profiles indicate a region of molecular gas that is close to the star, expanding slowly and not originating from the gaseous nebula. They argued that the most plausible scenario to explain the observed profile is the presence of a circumstellar disk. Weis (\cite{weis08}), using deep H$\alpha$ imaging, reported the presence of diffuse emission in the form of a cone-like extension to the north of the AG Car nebula and concluded that it is clearly part of the nebula. It extends up to 28$\arcsec$ outside the nebula, roughly doubling its size, and has a higher radial velocity than the ring. In this study we present an analysis and discussion of the images and the spectrum of the AG Car nebula taken by PACS (Photodetector Array Camera and Spectrometer, Poglitsch et al. \cite{poglitsch}), which is one of the three instruments on board the $\textit{Herschel}$ Space Observatory. The paper is organized as follows. In Sect.~\ref{sec:observations and data reduction} the observations and the data reduction procedure are presented. A description of the nebular morphology based on these observations is given in Sect.~\ref{sec:morphology of the nebula}. 
The dust continuum emission is modeled in Sect.~\ref{sec:dust continuum emission}, while the emission line spectrum is analyzed in Sect.~\ref{sec:emission line spectrum}. In Sect.~\ref{sec:discussion} a general discussion is presented and in Sect.~\ref{sec:conclusions} conclusions are drawn. \section{Observations and data reduction} \label{sec:observations and data reduction} \begin{figure*}[!] \resizebox{\hsize}{!}{\includegraphics*{agcar_pacs_3_90x90arcsec.eps}}\\% \resizebox{\hsize}{!}{\includegraphics*{test.eps}} \caption{Images of the nebula around AG Car. Top: PACS images at 70\ \mbox{$\mu$m}, 100\ \mbox{$\mu$m,} and 160\ \mbox{$\mu$m} from left to right. Bottom: the H$\alpha$+$[\ion{N}{ii}]$ image (left), the continuum image (right) and the image resulting from the subtraction of the continuum image from the H$\alpha$+$[\ion{N}{ii}]$ image after correcting for the position offsets and for the different filter transmissions using field stars (middle). The size of each image is 1.5$\arcmin\times1.5\arcmin$. The scale on the right corresponds to the surface brightness (arbitrary units). North is up and east is to the left.} \label{imag} \end{figure*} \subsection{Infrared observations} The infrared observations include imaging and spectroscopy of the AG Car nebula and were carried out using PACS in the framework of the \textit{Mass-loss of Evolved StarS (MESS)} Guaranteed Time Key Program (Groenewegen et al. \cite{groenewegen}). \begin{figure*} \sidecaption \includegraphics[height=5.75cm]{AGCar_optical_faint_cont.eps} \caption{Left: View of the nebula in the optical. The bright H$\alpha$+$[\ion{N}{ii}]$ ring nebula is illustrated in red (also shown in Fig.~\ref{imag} at the same scale) while the fainter H$\alpha$+$[\ion{N}{ii}]$ emission is shown in blue, differently scaled in surface brightness, revealing the northern extension (Weis \cite{weis08}). The continuum image that outlines the dust scattered nebula is presented in green. 
Right: Contour image of the optical emission from the nebula (green lines) superposed on the infrared image of the nebula at 70\ \mbox{$\mu$m} (also shown in Fig.~\ref{imag} at the same scale). The size of each image is 1.5$\arcmin\times1.5\arcmin$. North is up and east is to the left.} \label{agcar_faint} \end{figure*} The PACS imaging observations were carried out on August 12, 2010, which corresponds to $\textit{Herschel}$'s observational day (OD) 456. The observing mode was the scan map in which the telescope slews at constant speed (in our case the ``medium'' speed of $20\arcsec/\mbox{s}$) along parallel lines in order to cover the required area of the sky. Two orthogonal scan maps were obtained for each filter. Our final data set consists of maps at 70, 100 and 160\ \mbox{$\mu$m}. The observation identification numbers (obsID) of the four scans are 1342202927, 1342202928, 1342202929, and 1342202930; each scan has a duration of 157s. To perform the data reduction we used the Herschel Interactive Processing Environment (HIPE, Ott \cite{ott}) up to level 1. Subsequently, the data were reduced and combined using the Scanamorphos software (Roussel \cite{rous12}). The pixel size in the final maps is 2$\arcsec$ in the blue channel (70, 100 $\mu$m) and 3$\arcsec$ in the red channel (160 $\mu$m). It should be mentioned that the $\textit{Herschel}$ PACS point spread function (PSF) full widths at half maximum (FWHMs) are 5$\farcs$2, 7$\farcs$7, and 12$\arcsec$ at 70\ \mbox{$\mu$m}, 100\ \mbox{$\mu$m} and 160\ \mbox{$\mu$m}, respectively. 
The spectrum of the AG Car nebula was taken on June 5, 2010 (OD 387), with the PACS integral-field spectrometer that covers the wavelength range from 52\ \mbox{$\mu$m} to 220\ \mbox{$\mu$m} in two channels that operate simultaneously in the blue 52-98\ \mbox{$\mu$m} band (second order: B2A 52-73\ \mbox{$\mu$m} and B2B 70-105\ \mbox{$\mu$m}), and the red 102-220\ \mbox{$\mu$m} band (first order: R1A 133-220\ \mbox{$\mu$m} and R1B 102-203\ \mbox{$\mu$m}). Simultaneous imaging of a $47\arcsec \times 47\arcsec$ field of view is provided that is resolved in $5\times5$ square spatial pixels (i.e., spaxels). The two-dimensional field of view is then rearranged along a $1\times25$ pixel entrance slit for the grating via an image slicer employing reflective optics. Its resolving power is $\lambda/\delta\lambda \sim 940-5500$ depending on the wavelength. The observing template was the spectral energy distribution (SED) that provides a complete coverage between 52 and 220\ \mbox{$\mu$m}. The two obsIDs are 1342197792 and 1342197793. HIPE was also used for the data reduction. The standard reduction steps were followed and in particular the subtraction of the background spectrum obtained through chopping and nodding. \begin{figure}[!] \resizebox{\hsize}{!}{\includegraphics*{AG_Car_RGB_ds9_7_7arcmin.ps}} \caption{Three-color image (70 \ \mbox{$\mu$m} in blue, 100 \ \mbox{$\mu$m} in green and 160 \ \mbox{$\mu$m} in red) of the AG Car nebula. The LBV nebula appears located inside a cavity in the interstellar medium. The size of the image is 7$\arcmin\times7\arcmin$. North is up and east is to the left.} \label{agcar_RGB} \end{figure} \subsection{Visible observations} The optical images of the AG Car nebulae were obtained on April 6, 1995, with the 3.6 m telescope at the European Southern Observatory (ESO), La Silla, Chile. 
A series of short (1-10 s) and longer (30-60 s) exposures was secured in a H$\alpha$+$[\ion{N}{ii}]$ filter ($\lambda_{\rm c}$ = 6560.5\ \AA; {\sc FWHM} = 62.2\ \AA) and in a continuum filter just redward ($\lambda_{\rm c}$ = 6644.7\ \AA; {\sc FWHM} = 61.0\ \AA). The EFOSC1 camera was used in its coronographic mode: for the longer exposures, the 8$\arcsec$ circular coronographic mask was inserted in the aperture wheel and positioned on the central star while the Lyot stop was inserted in the grism wheel (Melnick et al. \cite{mel89}). The frames were bias-corrected and flat-fielded. The CCD pixel size was 0$\farcs$605 on the sky. The night was photometric, with a seeing of around 1$\farcs$2. To properly calibrate the images, three spectrophotometric standard stars and three planetary nebulae with known H$\alpha$ flux were observed in the H$\alpha$+$[\ion{N}{ii}]$ filter. \section{Morphology of the nebula} \label{sec:morphology of the nebula} The three PACS infrared images of the nebula around the LBV AG Car at 70\ \mbox{$\mu$m}, 100\ \mbox{$\mu$m}, and 160\ \mbox{$\mu$m} along with images taken at optical wavelengths are shown in Figs.~\ref{imag} and ~\ref{agcar_faint}. The infrared images reveal a dusty shell nebula with a clumpy ring morphology that is clearly detached from the central star (not visible at these wavelengths). The brightness of the nebula is not uniform, with the southwestern part being the brightest one. This agrees with the nonuniform brightness of the images at 12.5\ \mbox{$\mu$m} and 12.78\ \mbox{$\mu$m} analyzed by Voors et al. (\cite{voo00}). An axis at position angle (PA) $\sim$ 160\degr\ can be defined (north: PA = 0\degr; east: PA = 90\degr). Cuts of the 70 $\mu$m PACS image through PA $\sim$ 160\degr\ and PA $\sim$ 70\degr\ (Fig.~\ref{synt_cuts}) show peaks at a radius of about 14$\arcsec$, although with different intensities. The ring extends up to $\sim42\arcsec$.
These values correspond to 0.4 pc and 1.2 pc, respectively, at a distance of 6 kpc. On top of this central circular nebula, there is a faint northern extension. No other more extended nebula associated with the star can be detected. A three-color infrared image of the nebula and its environment is illustrated in Fig.~\ref{agcar_RGB}, which best shows this faint northern extension, first reported by Weis (\cite{weis08}), although the resolution is not good enough to reveal its detailed structure. The LBV nebula seems to be located in a cavity in the interstellar medium, probably excavated by the star in a previous evolutionary phase. The radius of this empty cavity is about 2.5$\arcmin$, which corresponds to 4.4 pc at a distance of 6 kpc. The H$\alpha$+$[\ion{N}{ii}]$ image (Fig.~\ref{imag}) shows that the gas nebula around AG Car forms an elliptical shell with an average outer radius of $\sim20\arcsec$ and an inner radius of $\sim11\arcsec$. To more accurately investigate the morphology of the nebula in the optical, a three-color image is presented in Fig.~\ref{agcar_faint}. The fainter nebular optical emission reveals the northern extension described in Weis (\cite{weis08}). The nebula that surrounds the bright ring extends up to $\sim23\arcsec$, while the northern extension reaches up to $\sim36\arcsec$ from the center of the nebula. These numbers correspond to 0.7 pc and 1 pc, respectively, at a distance of 6 kpc, in agreement with the size of the nebula given in Weis (\cite{weis08}, \cite{weis11}). In H$\alpha$ light, the kinematics point to a spherically expanding shell distorted by a more extended bipolar nebula (Smith \cite{smith91}; Nota et al. \cite{nota92}). In projection on the sky, the shell appears as an elliptical ring with PA $\sim$ 131\degr, different from the infrared shell PA.
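These angular-to-linear conversions follow from the small-angle relation; a minimal check at the adopted distance of 6 kpc:

```python
# Small-angle conversion of angular radii to linear sizes at D = 6 kpc,
# reproducing the values quoted in the text (1 rad = 206265 arcsec).
ARCSEC_PER_RAD = 206265.0

def arcsec_to_pc(theta_arcsec, d_pc):
    """Linear size in pc subtended by theta_arcsec at distance d_pc."""
    return theta_arcsec / ARCSEC_PER_RAD * d_pc

d_pc = 6000.0  # adopted distance of 6 kpc
for label, theta in [("inner IR ring", 14.0), ("outer IR ring", 42.0),
                     ("ISM cavity", 150.0)]:  # 2.5 arcmin = 150 arcsec
    print(f"{label}: {arcsec_to_pc(theta, d_pc):.2f} pc")
# -> about 0.4 pc, 1.2 pc, and 4.4 pc, as quoted above
```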
Nevertheless, the contour image of the optical emission (both bright and faint) superimposed on the infrared image of the nebula at 70\ \mbox{$\mu$m} as illustrated in the right panel of Fig.~\ref{agcar_faint} shows that the overall morphology of the gas nebula is similar to the infrared dust morphology, although the H$\alpha$+$[\ion{N}{ii}]$ ring nebula appears slightly smaller and more elliptical than the infrared nebula. The bright region at the southwestern part of the gas nebula coincides with the bright region of the infrared dust nebula. The northern faint extended structure, which appears both in the infrared and in the optical, is likely a lobe of the bipolar nebula. The extension seen to the south in the H$\alpha$+$[\ion{N}{ii}]$ map and in the velocity maps of Smith (\cite{smith91}) could constitute a part of a second fainter lobe. The system may thus consist of a typical bipolar nebula seen roughly through the poles (i.e., at inclination $\lesssim$ 30\degr), with two faint lobes and a bright waist. The nebula is also clearly detected in the optical continuum filter (Fig.~\ref{imag}), indicating significant dust scattering. The morphology of the dust reflection nebula appears somewhat different from the morphology of the H$\alpha$+$[\ion{N}{ii}]$ emission. This reflection nebula around AG Car was first described in Paresce and Nota (\cite{par89}) and its stunning structure resolved with the Hubble Space Telescope (Nota et al. \cite{nota95}, \cite{nota96}). More specifically, in the optical continuum image (Figs.~\ref{imag} and ~\ref{agcar_faint}) the ring appears circular and very clumpy with a jet-like feature that starts from the central part of the nebula and extends towards the southwestern part, which is the brightest region of the dust and gas emission. The northern extension of the nebula is not detected in the optical continuum. 
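The inclination constraint can be made quantitative: an intrinsically circular ring viewed at inclination $i$ projects to an ellipse of axis ratio $\cos i$, so a nearly circular appearance bounds $i$ from above. A small illustrative check:

```python
import math

# Projected axis ratio b/a of an intrinsically circular ring seen at
# inclination i (i = 0 means exactly pole-on): b/a = cos(i).
for i_deg in (0, 15, 30, 45):
    ratio = math.cos(math.radians(i_deg))
    print(f"i = {i_deg:2d} deg -> axis ratio {ratio:.2f}")
```

At $i \lesssim 30\degr$ the projected axis ratio stays above $\sim$0.87, consistent with the nearly circular ring seen in the images.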
As does the H$\alpha$+$[\ion{N}{ii}]$ ring nebula, the optical continuum emission appears slightly inside the infrared continuum emission, although a detailed comparison is difficult given the lower spatial resolution of the PACS images. The difference in morphology between the optical continuum and the H$\alpha$+$[\ion{N}{ii}]$ nebulae may arise from anisotropic illumination and thus ionization of different parts of the nebula owing to its clumpy structure. Considering the nebular expansion velocity, ${\rm v}_{\mathrm{exp}}$, of 70 km s$^{-1}$ measured by Smith (\cite{smith91}), the kinematic age, $t_{\mathrm{kin}}$, of the nebula can be estimated. As mentioned above, the nebula in the infrared extends to 1.2 pc in radius, \textit{r}, so it has a kinematic age of $t_{\mathrm{kin}} = r /{\rm v}_{\mathrm{exp}}$ = 1.7$\times10^4$ years. The difference in kinematic age between the inner and the outer radius of the nebula is 1.1$\times10^4$ years. \section{Dust continuum emission} \label{sec:dust continuum emission} Integrated flux densities were derived for the nebular shell at the three PACS wavelengths by performing aperture photometry on the PACS images. We also used imaging data taken from the archives of the Infrared Astronomical Satellite (IRAS) mission (Neugebauer et al. \cite{neug84}) and the Infrared Astronomical Mission AKARI (Murakami et al. \cite{mura07}). From these archival data we excluded the IRAS observation at 12 $\mu$m, which is only an upper limit, and the one at 100 $\mu$m, which is not of high quality. We note that the beam size of the IRAS and AKARI observations is large enough to fully encompass the ring nebula around AG Car. Images from the SPIRE (Spectral and Photometric Imaging Receiver, Griffin et al. \cite{grif10}) instrument on board the $\textit{Herschel}$ Space Observatory were also included. They were taken from the observations of the $\textit{Herschel}$ Infrared Galactic Plane survey (Hi-GAL, Molinari et al.
\cite{mol10}) made immediately publicly available as legacy data. The Hi-GAL observations of the field around AG Car were retrieved from the archive, processed up to level 2. Only the maps at 250 $\mu$m and at 350 $\mu$m were used, because at 500 $\mu$m the nebula is so faint that the flux determination is highly uncertain. Integrated flux densities were derived for the nebular shell at these two SPIRE wavelengths by performing aperture photometry on the images. We applied a photometric color correction to all flux densities derived from the observations of these three space missions. This correction converts the monochromatic flux densities, which refer to a constant energy spectrum, to the true object SED flux densities at the photometric reference wavelengths of each instrument. For the IRAS data, we used the flux density ratio to derive the color temperature and then chose the corresponding color-correction factor (Beichman et al. \cite{beich88}). The ratio R(25,60) corresponds to a temperature of 140 K, so the correction factors for this temperature were used. To color correct the AKARI FIS and IRC data, we fitted a blackbody to the two data sets independently, including the 25 $\mu$m IRAS observation because we needed a measurement near the maximum of the curve. These fits led us to adopt the color-correction factors that correspond to a temperature of 150 K for the FIS (Yamamura et al. \cite{yam10}) and 220 K for the IRC data (Lorente et al. \cite{lorente08}). To estimate the color correction of the Herschel-PACS data, we fitted a blackbody, again considering the 25 $\mu$m IRAS observation. This fit gave a temperature of 150 K, for which we adopted the corresponding correction factor (M\"{u}ller et al. \cite{muller}). For the SPIRE data, the instructions for color correction given in the SPIRE Handbook\footnote{http://herschel.esac.esa.int/Docs/SPIRE/spire\textunderscore handbook.pdf} were followed.
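The color temperature adopted for the IRAS correction can be recovered, to a first approximation, by inverting the ratio of Planck functions at the two reference wavelengths. The sketch below ignores the filter passbands and the IRAS flux-density convention, which the published correction tables (Beichman et al. \cite{beich88}) do take into account:

```python
import math

# Ratio of blackbody flux densities B_nu(25 um, T) / B_nu(60 um, T);
# setting it equal to the measured R(25,60) gives the color temperature.
H_C_OVER_K = 1.4388e-2  # second radiation constant hc/k, in m K

def bnu_ratio(T, lam1=25e-6, lam2=60e-6):
    x1 = H_C_OVER_K / (lam1 * T)
    x2 = H_C_OVER_K / (lam2 * T)
    return (lam2 / lam1) ** 3 * math.expm1(x2) / math.expm1(x1)

target = 187.5 / 177.7  # measured IRAS 25/60 flux-density ratio (Table 1)
lo, hi = 50.0, 500.0
for _ in range(60):  # bisection: bnu_ratio is increasing in T
    mid = 0.5 * (lo + hi)
    if bnu_ratio(mid) < target:
        lo = mid
    else:
        hi = mid
print(f"color temperature ~ {0.5 * (lo + hi):.0f} K")  # close to the adopted 140 K
```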
\begin{table}[t] \caption{Color-corrected nebular flux densities.} \label{table:1} \centering \begin{tabular}{l c c c c} \hline\hline Spacecraft-Instrument & Date & $\lambda$ & $F_{\nu}$ & Error \\ & & ($\mu$m) & (Jy) & (Jy) \\ \hline\hline IRAS\tablefootmark{a} & 1983 & 25 & 187.5 & 9.4 \\ & & 60 & 177.7 & 28.4 \\ \hline AKARI-IRC\tablefootmark{b} & 2007 & 9 & 9.04 & 0.14 \\ & & 18 & 119.2 & 0.24 \\ AKARI-FIS\tablefootmark{c} & 2007 & 65 & 229.3 & 6.53 \\ & & 90 & 81.0 & 3.9 \\ & & 140 & 45.5 & 5.2 \\ & & 160 & 35.1 & 4.6 \\ \hline Herschel-PACS\tablefootmark{d} & 2010 & 70 & 173 & 2 \\ & & 100 & 103 & 3 \\ & & 160 & 42 & 3 \\ Herschel-SPIRE\tablefootmark{e} & 2010 & 250 & 8.1 & 2 \\ & & 350 & 3.0 & 1 \\ \hline \end{tabular} \tablefoot{Data from: \tablefoottext{a}{IRAS Point Source Catalog (Beichman et al. \cite{beich88}).} \tablefoottext{b}{Akari/IRC Point Source Catalogue (Ishihara et al. \cite{isi10}).} \tablefoottext{c}{Akari/FIS Bright Source Catalogue (Yamamura et al. \cite{yam10}).} \tablefoottext{d}{This work.} \tablefoottext{e}{Observations of Hi-GAL (Molinari et al. \cite{mol10}) retrieved from the $\textit{Herschel}$ archive.} } \end{table} The corrected measurements are presented in Table~\ref{table:1}. They enabled us to construct the infrared SED of the nebula, along with the archived spectrum that was part of the observations carried out by the Infrared Space Observatory (ISO) mission (Kessler et al. \cite{kessler96}). A detailed discussion of this ISO-LWS spectrum can be found in Voors et al. (\cite{voo00}). The infrared SED of the nebula around AG Car obtained at different epochs with the various instruments is shown in Fig.~\ref{agcar_sed_2dust}. All these measurements agree very well within the uncertainties except the one at 90 $\mu$m. This discrepancy is likely due to the uncertainty in the color correction, which is larger for the 90 $\mu$m (AKARI-FIS) data point.
The dust nebula around the LBV AG Car was modeled in the past by Voors et al. (\cite{voo00}), who used a one-dimensional radiative transfer code to fit both imaging and spectroscopic infrared data. To further constrain the dust properties, we use the AKARI archive imaging data and the new PACS and SPIRE imaging data in addition to the IRAS imaging data and the ISO infrared spectrum. The PACS imaging allows us to measure the nebular radius at the wavelengths of the bulk of the dust emission. To model the dust shell we only considered the spectrum at $\lambda > $ 20 $\mu$m. The spectrum at $\lambda < $ 12 $\mu$m comes from the central star, which Voors et al. (\cite{voo00}) fitted with a spherical non-LTE model atmosphere. They argued that the spectrum between $\lambda \sim$ 14 and 20 $\mu$m also likely comes from the central star and not from an extended source. The two-dimensional radiative transfer code 2-Dust (Ueta and Meixner \cite{uet03}) was used to model and interpret the dust emission spectrum and the far-infrared images. This publicly available, versatile code can be supplied with various grain size distributions and optical properties, as well as complex axisymmetric density distributions. Since the PACS spectral field of view is smaller than the nebula, the PACS spectrum misses part of the nebular flux and is fainter than the PACS photometric points, which contain the total flux of the nebula; it was therefore not taken into consideration for the dust model. To model the PACS ring of dust with 2-Dust, the morphology of the nebula revealed by the infrared PACS and optical images must be considered in order to choose the best geometric parameters for the axisymmetric dust density distribution model. The 2-Dust code uses a normalized density distribution function (Meixner et al.
\cite{meix02}) that is based on a layered shell model, \begin{equation} \label{density distribution} \begin{split} \rho(R,\theta)=\left(\frac{R}{R_{\mathrm{min}}}\right) ^{-B\left\{1+C\sin^F\theta\left[e^{-(R/R_{\mathrm{sw}})^D} /e^{-(R_{\mathrm{min}}/R_{\mathrm{sw}})^D}\right]\right\}} \\ \times\left\{1+A(1-\cos\theta)^F\left[e^{-(R/R_{\mathrm{sw}})^E} /e^{-(R_{\mathrm{min}}/R_{\mathrm{sw}})^E}\right]\right\} \end{split}, \end{equation} where $\rho(R,\theta)$ is the dust mass density at radius \textit{R} and latitude \textit{$\theta$}, $R_{\mathrm{min}}$ is the inner radius of the shell, and $R_{\mathrm{sw}}$ is the superwind radius that defines the boundary between the spherical wind and the axisymmetric superwind. The first term represents the radial profile of the spherical wind; the parameters A-F define the density profile; the radial factor \textit{B} can also be a function of the latitude through the elongation parameter \textit{C}; \textit{A} is the equatorial enhancement parameter; the parameter \textit{F} defines the flatness of the shell; and \textit{D} and \textit{E} are the symmetry transition parameters that describe the abruptness of the geometrical transition in the shell. It should be specified that we only consider the bright ring-nebula for this model. No attempt has been made to model the nebular northern extension since it is faint in the infrared and not clearly resolved. \begin{figure}[!] \resizebox{\hsize}{!}{% \includegraphics*[width=5.9cm]{agcar_pacs_70_90x90arcsec.eps}% \includegraphics*[width=5.9cm]{agcar_synth_70_90_90arcsec_rot160.eps}}\\% \resizebox{\hsize}{!}{\includegraphics*{agcar_profils.ps}} \caption{Top left: the $1.5\arcmin \times 1.5\arcmin$ image of the nebula around AG Car observed with PACS at 70 $\mu$m. North is up and east to the left. Top right: the synthetic image computed with 2-Dust using $r_{\rm in} = 14\arcsec$ and $r_{\rm out} = 42\arcsec$ and convolved with the PACS PSF. 
Bottom: Cuts with PA=160\degr\ and 70\degr\ through the central part of the nebula at 70 $\mu$m, observed (black) and synthetic (red).} \label{synt_cuts} \end{figure} A purely spherical model cannot reproduce the observed morphology of the dust shell because there would be too much emission at the center of the ring with respect to the observations. A sphere with equatorial enhancement may be considered to reproduce the difference in intensity between the two cuts on the infrared image. To appear nearly circular with a clear central hole in projection, a spherical shell with equatorial enhancement must be seen at small inclination $\lesssim$ 30\degr, roughly through the poles. This geometry is in agreement with the observed global morphology. At very low inclination it is difficult to reproduce the large difference in intensity between the two cuts in an axisymmetric model like this one. On the other hand, increasing the inclination too much would produce a strongly elliptical ring, which is not observed. In addition to the equatorial enhancement, the structure appears clumpy. These inhomogeneities cannot be reproduced by the present model. The best approximate axisymmetric model for the observed ring is a sphere with equatorial enhancement (\textit{A} $\sim$ 5) that is seen at small inclination ($\sim$ 30\degr). The values for the other five geometric parameters of Eq.~\ref{density distribution} are B=3, C=0, D=0, E=0, and F=1, chosen to keep the model simple and limit the number of free parameters. \begin{figure}[!] \resizebox{\hsize}{!}{\includegraphics*{agcar_sed_2dust_spire_f.ps}} \caption{Infrared SED of the nebula around the LBV AG Car from data collected at different epochs: the ISO-LWS spectrum and color-corrected photometric measurements from IRAS, AKARI and $\textit{Herschel}$ (PACS and SPIRE) space observatories. Results of the 2-Dust model fitting are illustrated. Data at $\lambda < $ 20 $\mu$m are not considered in the fit.
The best fit (solid line) is achieved considering two populations of dust grains (dashed lines).} \label{agcar_sed_2dust} \end{figure} The synthetic image produced by 2-Dust and convolved with the PACS PSF is given in the top right panel of Fig.~\ref{synt_cuts}. By comparing the PACS image to the synthetic one, we determined the inner, $r_{\mathrm{in}} = 14\arcsec$, and the outer, $r_{\mathrm{out}} = 42\arcsec$, radii of the dust ring. In the bottom panel of Fig.~\ref{synt_cuts}, cuts with PA=160\degr\ and 70\degr\ through the central part of the observed and the simulated nebula are illustrated. Although the basic morphology (radii, thickness and axisymmetry of the shell) reproduced by the model agrees with the observed one, the intensity of the peaks does not because no attempt was made to fit the clumps, as mentioned earlier. After constraining the nebular geometry we can proceed with the model of the dust SED. For the stellar parameters we adopted the distance D = 6 kpc, the luminosity $\log L/L_{\odot}$ = 6.1, and the temperature $T_{\mathrm{eff}} = 20000$ K (Voors et al. \cite{voo00}, Groh et al. \cite{gro09}). The infrared SED of AG Car (Fig.~\ref{agcar_sed_2dust}) is too broad to be reproduced with only one population of dust grains, so we considered two populations of grains with the same composition but different sizes. Voors et al. (\cite{voo00}) showed that the dust in this nebula contains large grains, up to 40 $\mu$m in radius, and is dominated by amorphous silicates with little contribution from crystalline species, and more specifically pyroxenes with a 50/50 Fe to Mg abundance. We therefore adopted a similar dust composition. The optical constants of silicates given by Dorschner et al. (\cite{dor95}) were used for both populations, extrapolated to a constant refractive index in the far-ultraviolet. The size distribution for the dust grains of Mathis et al.
(\cite{mat77}, hereafter MRN) was assumed for each of the two populations: $n(a) \propto a^{-3.5}$ with $a_{\rm min} < a < a_{\rm max}$, where $a$ is the grain radius. The model can be adjusted to the data by varying $a_{\rm max}$ (or $a_{\rm min}$), which controls the 20 $\mu$m/100 $\mu$m flux density ratio, and the opacity, which controls the strength of the emission. \begin{figure}[t] \centering \includegraphics[width=7cm]{agcar_spec_foot.eps} \caption{Footprint of the PACS spectral field of view on the image of the AG Car nebula at 70\ \mbox{$\mu$m}. Each number pair is the label of a specific spaxel. North is up and east is to the left.} \label{agcar_foot} \end{figure} The best fit (Fig.~\ref{agcar_sed_2dust}) was achieved with the following two populations of dust grains. The first is a population of small grains with radii from 0.005 to 1 $\mu$m, which is responsible for the emission at $\lambda < $ 40 $\mu$m. The second is a population of large grains with radii from 1 to 50 $\mu$m, which is responsible for the slope of the SED at $\lambda > $ 70 $\mu$m. It should be pointed out that the large grains are necessary to reproduce the observed infrared SED. Several attempts to fit the SED with different grain sizes showed that $a_{\rm max}$ cannot be very different from 50 $\mu$m for the large-grain population. More details are given in Appendix A. The total mass of dust derived from the modeling is $M_{\rm dust} \sim$ 0.21 M$_{\odot}$ (0.05 M$_{\odot}$ from the small dust grains and 0.16 M$_{\odot}$ from the large ones), with an uncertainty of $\sim$20\%. This result is in agreement with the total dust mass found by Voors et al. (\cite{voo00}). The small grains have temperatures from 88 K at $r_{\mathrm{in}}$ to 62 K at $r_{\mathrm{out}}$, while the large grains have temperatures from 43 K at $r_{\mathrm{in}}$ to 29 K at $r_{\mathrm{out}}$.
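Why the mass budget is dominated by the large grains can be seen from the MRN law itself: for $n(a) \propto a^{-3.5}$ the integrated mass scales as $a_{\rm max}^{1/2}$. The sketch below is only this scaling argument; the two populations are normalized independently in the fit, so it does not predict the fitted 0.16/0.05 mass ratio:

```python
# For an MRN distribution n(a) da ∝ a^-3.5 da, the integrated mass is
# ∝ ∫ a^3 n(a) da ∝ a_max^0.5 - a_min^0.5, i.e., dominated by a_max.
def mrn_mass(a_min, a_max):
    """Relative mass of an MRN population, up to a common normalization."""
    return a_max ** 0.5 - a_min ** 0.5

small = mrn_mass(0.005, 1.0)  # small-grain population limits, in um
large = mrn_mass(1.0, 50.0)   # large-grain population limits, in um
print(f"mass ratio large/small (common normalization) ~ {large / small:.1f}")
```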
Similar results are obtained when fitting modified blackbody (BB) curves to the SED (see Appendix B for details). As described in the introduction, AG Car is variable both photometrically and spectroscopically. Although the data used for the dust model were obtained at different epochs, the visual magnitude of the star is approximately the same at the epochs of the IRAS, ISO, AKARI, and $\textit{Herschel}$ observations. In addition, AG Car varies under a roughly constant bolometric luminosity (Sect.~\ref{sec:introduction}). By keeping the luminosity constant and changing the radius and the temperature of the star within reasonable limits, we do not see significant changes in the model results. \section{Emission line spectrum} \label{sec:emission line spectrum} \subsection{Spectrum overview} Figure~\ref{agcar_foot} illustrates the footprint of the PACS spectral field of view on the image of the nebula at 70\ \mbox{$\mu$m}. This field of view is composed of 25 (5$\times$5) spaxels, each corresponding to a different part of the nebula, but it is not large enough to cover the whole nebula. The integrated spectrum of the nebula over the 25 spaxels is shown in Fig.~\ref{agcar_spec}. Below 55\ \mbox{$\mu$m} the shape of the continuum results from a spectral response correction that has not yet been perfected in this range, while above 190\ \mbox{$\mu$m} it results from a light leak from the second diffraction order of the grating to the first one. The following forbidden emission lines are detected on the dust continuum: $[\ion{O}{i}]$ $\lambda\lambda$ 63, 146\ \mbox{$\mu$m}, $[\ion{N}{ii}]$ $\lambda\lambda$ 122, 205\ \mbox{$\mu$m}, and $[\ion{C}{ii}]$ $\lambda$ 158\ \mbox{$\mu$m}. The absence of higher ionization lines indicates that the ionization state of the gas in the nebula around AG Car is not as high as in the case of the nebulae around other LBVs, for example WRAY 15-751 (Vamvatira-Nakou et al. \cite{vamv13}).
This implies that the gas temperature is lower. \subsection{Line flux measurements} A Gaussian fit was performed on the line profiles to measure the emission line intensities in each of the 25 spectra (Fig.~\ref{agcar_foot}) using the Image Reduction and Analysis Facility (IRAF, Tody \cite{tod86}, \cite{tod93}). The table with these measurements is given in Appendix C. We note that not all the lines are detected in the outer spaxels and that all fluxes reach their highest values at spaxel (3,3). This spaxel corresponds to the southwestern part of the nebula, which is the brightest part (see Sect.~\ref{sec:morphology of the nebula}). Maps of the line intensities were created for each of the five detected lines in an effort to investigate whether there are differences in the gas properties between distinct parts of the nebula. There are only 25 spaxels, and the coverage of the nebula is not complete, so we cannot really see the full nebula in these ``spectroscopic images''. The only wavelength at which we barely see the nebular ring is 122\ \mbox{$\mu$m}. Furthermore, maps of the line intensity ratios of every detected line to the $[\ion{N}{ii}]$ 122\ \mbox{$\mu$m} line were created. The differences between distinct nebular regions are not significant or convincing. Since these maps are difficult to interpret given the large uncertainties on the fluxes and the wavelength-dependent PSF, they were not considered in the present study. To measure the total flux of the nebula in each line we need to use the integrated spectrum over the 25 spaxels, but, as already mentioned, the field of view of the PACS spectrometer is smaller than the size of the nebula. Consequently, the fluxes measured using the sum of the 25 spaxels do not correspond to the real nebular fluxes. For this reason, the PACS spectrum was corrected using the three PACS photometric observations.
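For reference, the flux carried by a fitted Gaussian line of peak amplitude $A$ and width $\sigma$ above the continuum is $A\sigma\sqrt{2\pi}$; a minimal numeric check with a synthetic line (the values are illustrative, not fitted PACS data):

```python
import math

# A Gaussian emission line of peak amplitude A and width sigma carries an
# integrated flux A * sigma * sqrt(2*pi) above the continuum.
A, sigma, lam0 = 5.0, 0.1, 122.0  # illustrative amplitude, width, and center

# Rectangle-rule integration of the continuum-subtracted line profile
# over +/- 10 sigma, where the Gaussian tails are negligible.
n, lam_min, lam_max = 4001, 121.0, 123.0
dlam = (lam_max - lam_min) / (n - 1)
flux = 0.0
for i in range(n):
    lam = lam_min + i * dlam
    flux += A * math.exp(-0.5 * ((lam - lam0) / sigma) ** 2) * dlam

analytic = A * sigma * math.sqrt(2.0 * math.pi)
print(f"numeric {flux:.4f} vs analytic {analytic:.4f}")
```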
A modified BB was fitted to these data and another one to the continuum of the PACS spectrum, for wavelengths smaller than 190\ \mbox{$\mu$m}. The ratio of the two curves gives the correction factor. In other words, the spectrum was scaled to the photometry. This factor depends linearly on the wavelength and goes from 1.11 at 55\ \mbox{$\mu$m} to 2.12 at 185\ \mbox{$\mu$m}. This correction assumes a constant line-to-continuum ratio and is analogous to the point source correction applied in the pipeline to correct the effect of flux lost for a point source. The $[\ion{N}{ii}]$ 205\ \mbox{$\mu$m} line has a problematic calibration in PACS. Consequently, its flux needs to be corrected before being used in the following analysis. More precisely, the flux in $[\ion{N}{ii}]$ 205\ \mbox{$\mu$m} is incorrect for all PACS measurements owing to a light leak that superimposes flux from 102.5\ \mbox{$\mu$m} at that wavelength. The relative spectral response function (RSRF) used to reduce the data suffers from the same light leak. Consequently, when this RSRF is applied during the data reduction, the signal at wavelengths $\gtrsim$ 190\ \mbox{$\mu$m} is divided by a number that is too high. The continuum at these wavelengths is irremediably lost, but provided one can ``scale back up'' with the right number to compensate for the exaggerated RSRF, one can recover the line flux. Using instrument test data obtained on the ground with calibration light sources set at different and known temperatures, one can invert the problem and reconstruct the ``clean'' RSRF, i.e., in the absence of light leak. This reconstruction suffers from some defects and a large uncertainty due to the propagation of errors and to the very low response of the instrument at these wavelengths. A correction factor could nevertheless be derived from it, and confirmed within a certain margin by comparison of the line fluxes obtained for a few sources by both PACS and SPIRE at that wavelength.
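Interpolating linearly between the two endpoints quoted above reproduces the missing-flux correction factors listed in Table~\ref{table:2} to within about 0.02 (the 205\ \mbox{$\mu$m} value is an extrapolation):

```python
# Linear missing-flux correction: 1.11 at 55 um to 2.12 at 185 um,
# extrapolated beyond 185 um for the [N II] 205 um line.
def correction(lam_um):
    return 1.11 + (2.12 - 1.11) * (lam_um - 55.0) / (185.0 - 55.0)

# tabulated factors from Table 2
for lam, tabulated in [(63, 1.17), (122, 1.64), (146, 1.83),
                       (158, 1.92), (205, 2.26)]:
    print(f"{lam} um: {correction(lam):.2f} (Table 2: {tabulated})")
```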
We finally found that the measured $[\ion{N}{ii}]$ 205\ \mbox{$\mu$m} flux should be multiplied by a correction factor of 4.2. An error of 25\% was assumed for the final corrected $[\ion{N}{ii}]$ 205\ \mbox{$\mu$m} flux.\footnote{This part of the infrared spectrum of AG Car has also been observed with SPIRE as part of the MESS program. Unfortunately, these data cannot be used to obtain a more precise flux for the $[\ion{N}{ii}]$ 205\ \mbox{$\mu$m} line because the whole ring nebula is outside of the detector coverage owing to the geometry of the detector array and because the observing mode was a single pointing and not a raster map. Consequently, any attempt to recover the nebular flux would have huge uncertainties, so we decided not to include the SPIRE spectroscopic data in our study.} The emission line measurements of the nebula integrated over the 25 spaxels, before and after the correction for the missing flux, are given in Table~\ref{table:2} along with the correction factor (c.f.) at each wavelength. It should be mentioned that the flux measurements of the three lines present in the ISO-LWS spectrum of AG Car ($[\ion{O}{i}]$ $\lambda$ 63\ \mbox{$\mu$m}, $[\ion{N}{ii}]$ $\lambda$ 122\ \mbox{$\mu$m} and $[\ion{C}{ii}]$ $\lambda$ 158\ \mbox{$\mu$m}) do agree with the corrected values from the PACS spectrum within the errors, showing that the correction for the missing flux is essentially correct. \begin{table}[h] \caption{Line fluxes of the nebula around AG Car.} \label{table:2} \centering \begin{tabular}{l c c c c} \hline\hline Ion & $\lambda$ & $F\ $(25 spaxels) & c. f.
& $F\ $(corrected) \\ & ($\mu$m) &($10^{-15}$ W~m$^{-2}$) & & ($10^{-15}$ W~m$^{-2}$) \\ \hline\hline $[\ion{O}{i}]$ & 63 & 7.7 & 1.17 & 9.0 $\pm$ 1.8 \\ $[\ion{N}{ii}]$ & 122 & 23.6 & 1.64 & 38.7 $\pm$ 7.7 \\ $[\ion{O}{i}]$ & 146 & 0.6 & 1.83 & 1.1 $\pm$ 0.3 \\ $[\ion{C}{ii}]$ & 158 & 4.1 & 1.92 & 7.9 $\pm$ 1.6 \\ $[\ion{N}{ii}]$ & 205 & 1.1 & 2.26\tablefootmark{a}$\times$4.2\tablefootmark{b} & 10.3 $\pm$ 2.6 \\ \hline \end{tabular} \\ \tablefoot{ \tablefoottext{a}{Missing flux correction} \tablefoottext{b}{PACS/SPIRE cross-calibration factor} } \end{table} \subsection{Photoionization region characteristics} The detected emission lines $[\ion{N}{ii}]$ 122, 205\ \mbox{$\mu$m} are associated with the \ion{H}{ii} region of the nebula around AG Car. The other three detected emission lines originate from a region of transition between ionized and neutral hydrogen and may indicate the presence of a photodissociation region (PDR). Their analysis is given in the next subsection. \subsubsection{H$\alpha$ flux} The H$\alpha$+$[\ion{N}{ii}]$ flux from the nebula was estimated by integrating the surface brightness over the whole nebula. Contamination by field stars and the background was corrected and the emission from the occulted central part extrapolated. The continuum flux from the reflection nebula was measured in the adjacent filter, accounting for the difference in filter transmissions. However since AG Car is a strong emission-line star, the reflected stellar H$\alpha$ must also be subtracted. Considering the H$\alpha$ equivalent widths measured by Schulte-Ladbeck et al. (\cite{sch94}) and Stahl et al. (\cite{sta01}) for AG Car in 1993-1994 (i.e., accounting for $\sim$ 1.5 years of time-delay) we estimate the final contamination due to the reflection nebula to be 20\%. 
The contribution of the strong $[\ion{N}{ii}]$ lines was then subtracted using the $[\ion{N}{ii}]$/H$\alpha$ ratios from available spectroscopic data and the transmission curve of the H$\alpha$+$[\ion{N}{ii}]$ filter. The conversion to absolute flux was done with the help of the three spectrophotometric standard stars and three planetary nebulae observed in the same filter; the conversion factors derived from these six objects show excellent internal agreement. We measured $F(\mathrm{H}\alpha)$ = 1.1 $\times$ 10$^{-10}$ ergs~cm$^{-2}$~s$^{-1}$, uncorrected for reddening. The uncertainty amounts to $\sim$ 20\%. Adopting E(B$-$V) = 0.59 $\pm$ 0.03 (de Freitas Pacheco et al. \cite{pac92}), we derived the dereddened flux $F_{0}(\mathrm{H}\alpha)$ = 4.2 $\pm$ 0.9 $\times$ 10$^{-10}$ ergs~cm$^{-2}$~s$^{-1}$ for the AG Car nebula. This flux is higher by a factor of 2 than the fluxes measured by Stahl (\cite{sta87}), Nota et al. (\cite{nota92}) and de Freitas Pacheco et al. (\cite{pac92}) in 1986, 1989, and 1991, but is compatible with the H$\beta$ flux measured by Perek (\cite{per71}) in 1969 (i.e., $F_{0}(\mathrm{H}\alpha) \simeq$ 3 $\times$ 10$^{-10}$ ergs~cm$^{-2}$~s$^{-1}$ with H$\alpha$/H$\beta$ = 6 and E(B$-$V) = 0.59). The flux density from the reflection nebula is $F_{\lambda}$ = 3.9 $\times$ 10$^{-13}$ ergs~cm$^{-2}$~s$^{-1}$~\AA$^{-1}$ at 6650~\AA\ (the central wavelength of the continuum filter). The high value of $F_{0}(\mathrm{H}\alpha)$ we find is in agreement with the radio flux, also observed in 1994-1995 (Duncan and White \cite{dun02}), and with E(B$-$V) = 0.59. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics*{AGCar_25sp_spectrum.eps}} \caption{PACS spectrum of the nebula around AG Car, integrated over the 25 spaxels. Indicated are the lines [O{\sc i}], [N{\sc ii}], and [C{\sc ii}]. Below 55 $\mu$m the shape of the continuum results from a spectral response function correction that has not yet been perfected.
Above 190 $\mu$m the shape results from a light leak from the second diffraction order of the grating in the first one. The different observing bands are indicated with different colors. We note that the spectral resolution depends on the waveband.} \label{agcar_spec} \end{figure*} \subsubsection{Electron density} Smith et al. (\cite{smith97_2}) found a non-constant nebular electron density, $n_\mathrm{e}$, that varies from 600 to 1050 cm$^{-3}$ using the optical $[\ion{S}{ii}]$ 6731/6717 ratio as an electron density diagnostic. Their result is in agreement with those of Mitra and Dufour (\cite{mitra90}) and Nota et al. (\cite{nota92}) who used the same ratio as a diagnostic. In the infrared waveband, the $[\ion{N}{ii}]$ 122/205\ \mbox{$\mu$m} ratio is a diagnostic for the electron density of the nebula at low density, $1\ \mathrm{cm}^{-3} \leq n_\mathrm{e}\leq 10^3\ \mathrm{cm}^{-3}$ (Rubin et al. \cite{rub94}). Considering the values of Table~\ref{table:2}, this ratio is equal to 3.8 $\pm$ 1.2 for the whole nebula. The package \textit{nebular} of the IRAF/STSDAS environment (Shaw \& Dufour \cite{shaw}) was used for the calculation of the electron density. An electron temperature, $T_\mathrm{e}$, constant throughout the nebula and equal to 6350 $\pm$ 400 K was used for all of the following calculations. This is the average temperature calculated by Smith et al. (\cite{smith97_2}). The electron density is then found to be $160 \pm 90\ \mathrm{cm}^{-3}$. The calculated electron density based on the infrared data is much lower than the density based on the optical data. This discrepancy is usual and has also been observed in planetary nebulae (Liu et al. \cite{liu01}, Tsamis et al. \cite{tsam03}). When the density of a nebula is spatially inhomogeneous, different line ratios used as density diagnostics lead to different values of the density. 
This is related to the difference in the critical density between the lines taken into consideration for the density calculation (Rubin \cite{rub89}, Liu et al. \cite{liu01}). The lines $[\ion{N}{ii}]$ 122, 205\ \mbox{$\mu$m} have lower critical densities than the lines $[\ion{S}{ii}]$ 6731, 6717 \AA,\ which means that the calculated density using the first pair of lines is smaller than the density using the second pair (Rubin \cite{rub89}). In the following calculations we will use our estimate of the electron density based on infrared data because the electron density is best determined when it is similar to the critical density of the lines whose ratio is used as a diagnostic (Rubin et al. \cite{rub94}). Otherwise, any attempt to calculate ionic abundances will give incorrect results (Rubin \cite{rub89}, Liu et al. \cite{liu01}). \subsubsection{Ionizing flux} To calculate the radius of the Str\"omgren sphere, $R_{S}$, and the rate of emission of hydrogen-ionizing photons, $Q_{0}$, a steady nonvariable star must be considered. Such an analysis can be done in the case of a variable star like AG Car if the recombination time is longer than the variability timescale of the ionizing star. The recombination time is given by $\tau_{\mathrm{rec}}= 1/(n_\mathrm{e}\alpha_\mathrm{B})$ (Draine \cite{draine11}), where $\alpha_\mathrm{B}$ is the case-B recombination coefficient. Using the adopted value for the electron density and the assumed electron temperature, the recombination time was estimated to be about 520 yr. It is much longer than the timescale of the variability (5-10 yr) exhibited by the central star of the nebula, and this conclusion still holds if the higher electron density derived in the optical is considered. Therefore, an average nonvariable star is a valid approximation in our case. 
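The recombination-time estimate can be reproduced with a short numerical sketch. The case-B fit $\alpha_\mathrm{B} \approx 2.54\times10^{-13}\,T_4^{-0.8163}$ cm$^3$~s$^{-1}$ used below is an assumed standard approximation in the style of Draine, not a value quoted in the text.

```python
# Recombination time tau_rec = 1/(n_e * alpha_B); the case-B
# coefficient fit alpha_B ~ 2.54e-13 * T4**-0.8163 cm^3/s is an
# assumed approximation, not taken from the paper.
n_e = 160.0                          # adopted electron density, cm^-3
T4 = 0.635                           # T_e / 1e4 K
alpha_B = 2.54e-13 * T4**-0.8163     # case-B coefficient, cm^3 s^-1
tau_rec_s = 1.0 / (n_e * alpha_B)    # recombination time, seconds
tau_rec_yr = tau_rec_s / 3.156e7     # convert to years
print(f"tau_rec ~ {tau_rec_yr:.0f} yr")
```

This returns $\tau_{\mathrm{rec}} \sim 540$ yr, in line with the $\sim$520 yr quoted above and indeed much longer than the 5-10 yr variability timescale.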
The values of $Q_{0}$ and $R_{S}$ can be determined first by using the estimated H$\alpha$ flux and second by using the radio flux density, $S_\nu$ = 268.7 mJy at 6 cm (4.9 GHz) that was measured by Duncan and White (\cite{dun02}), adopting a typical error of 0.5 mJy. At 4.9 GHz the nebula is optically thin and it is assumed to be spherical with a uniform density. The $R_{S}$ in pc is given by (Vamvatira-Nakou et al. \cite{vamv13}) \begin{equation} \label{stromgren radius} R_{\mathrm{S}}=3.17\left(\frac{x_e}{\epsilon}\right)^{1/3} \left(\frac{n_\mathrm{e}}{100}\right)^{-2/3} T_4^{(0.272+0.007\mathrm{ln}T_4)}\left (\frac{Q_0}{10^{49}}\right)^{1/3}, \end{equation} where $Q_{0}$ (in photons $\mathrm{s}^{-1}$) using the H$\alpha$ flux is given by \begin{equation} \label{Q_H alpha} Q_{0(\mathrm{H\alpha})}=8.59\times10^{55} T_4^{(0.126+0.01\mathrm{ln}T_4)}D^2F_0(\mathrm{H}\alpha) \; ; \end{equation} when using the radio flux it is given by \begin{equation} \label{Q_radio} Q_{0(\mathrm{radio})}=8.72\times10^{43}T_4^ {(-0.466-0.0208\mathrm{ln}T_4)}\left(\frac{\nu}{4.9} \right)^{0.1}x_e^{-1}D^2S_{\nu} \; , \end{equation} where $x_e=n_e/n_p$ is the ratio of the electron to the proton density, $\epsilon$ is the filling factor, $T_4=T_e/(10^4\ \mathrm{K})$, $\nu$ is the radio frequency in GHz, $\textit{D}$ is the distance of the nebula in kpc, $F_0(\mathrm{H}\alpha)$ is the H$\alpha$ flux in ergs~cm$^{-2}$~s$^{-1}$, and $S_{\nu}$ is the radio flux in mJy. Assuming $x_e=1$ because the star is not hot enough to significantly ionize He and $T_4=0.635$, we found that the rate of emission of hydrogen-ionizing photons is $Q_{0(\mathrm{H\alpha})}=(1.2\ \pm\ 0.5) \times 10^{48}~\mathrm{photons~s^{-1}}$ and $Q_{0(\mathrm{radio})}=(1.0\ \pm\ 0.4) \times 10^{48}\ \mathrm{photons~s^{-1}}$. These two results agree within the uncertainties, implying that the value of E(B$-$V) adopted for the calculations is essentially correct. 
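The two ionizing-photon rates and the Str\"omgren radius follow directly from the equations above; a minimal numerical sketch, using the values adopted in the text:

```python
import math

# Numerical sketch of the Stromgren-radius and Q0 equations above,
# with the input values adopted in the text.
T4, D = 0.635, 6.0            # T_e/1e4 K; distance in kpc
x_e, eps = 1.0, 1.0           # n_e/n_p; filling factor
F0_Ha = 4.2e-10               # dereddened H-alpha flux, erg cm^-2 s^-1
S_nu, nu = 268.7, 4.9         # radio flux (mJy) and frequency (GHz)
n_e = 160.0                   # adopted electron density, cm^-3

Q0_Ha = 8.59e55 * T4**(0.126 + 0.01*math.log(T4)) * D**2 * F0_Ha
Q0_radio = (8.72e43 * T4**(-0.466 - 0.0208*math.log(T4))
            * (nu/4.9)**0.1 / x_e * D**2 * S_nu)
Q0 = 0.5 * (Q0_Ha + Q0_radio)                      # mean rate
R_S = (3.17 * (x_e/eps)**(1/3) * (n_e/100.0)**(-2/3)
       * T4**(0.272 + 0.007*math.log(T4)) * (Q0/1e49)**(1/3))
print(f"Q0(Ha) = {Q0_Ha:.2e}, Q0(radio) = {Q0_radio:.2e} photons/s")
print(f"R_S = {R_S:.2f} pc")
```

This reproduces $Q_{0(\mathrm{H\alpha})} \approx 1.2 \times 10^{48}$ and $Q_{0(\mathrm{radio})} \approx 1.0 \times 10^{48}$ photons~s$^{-1}$, and a Str\"omgren radius of $\approx$1 pc, consistent with the values quoted in the text.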
The mean value $Q_{0}=(1.1\ \pm\ 0.3)\times10^{48}\ \mathrm{photons~s^{-1}}$ corresponds to an early-B star with $T_{\mathrm{eff}}\sim 26000\ \mathrm{K}$ (Panagia \cite{pan73}), which can be considered the average spectral type of the star. We also derived $R_{S}= 1.1\ \pm\ 0.4$ pc assuming $\epsilon=1$, i.e., that the ionized gas fills the whole volume of the nebula. The fact that the nebula is a shell and not a sphere, with inner radius of about $R_{\mathrm{in}}$ = 11\arcsec\ = 0.3 pc in H$\alpha$, does not change this result because in that case the new Str\"omgren radius is $R_{S}^{'}=(R_{S}^3+R_{\mathrm{in}}^3)^{1/3}$ = 1.1 pc. By definition, the Str\"omgren radius is the radius of an ionization-bounded nebula. In Sect.~\ref{sec:morphology of the nebula}, it was observed that the faint part of the nebula in H$\alpha$ extends up to 0.7 pc from the central star. Moreover, the northern faint extension discussed in that section extends up to 1 pc. The comparison of these numbers with the estimated value of the Str\"omgren radius, considering the uncertainties, leads to the conclusion that the H$\alpha$ nebula may be ionization bounded. The presence of PDR signatures in the spectrum supports this conclusion. This value of the Str\"omgren radius is only an average value, which can vary locally depending on the density inhomogeneities. In particular, according to the adopted morphological model, the electron density of the shell could be higher along the equator and lower along the poles, so that the ionizing radiation can reach the faint extensions or bipolar lobes. 
\subsubsection{Abundance ratio N/H} Given the detected emission lines in the spectrum and the lack of $[\ion{N}{iii}]$ 57\ \mbox{$\mu$m} and $[\ion{O}{iii}]$ 88\ \mbox{$\mu$m}, only an estimate of the N/H abundance number ratio can be made based on the observed H$\alpha$ 6562.8 \AA, $[\ion{N}{ii}]$ 122\ \mbox{$\mu$m} and 205\ \mbox{$\mu$m} lines and considering that \begin{equation} \label{N/H abundance} \frac{\mathrm{N}}{\mathrm{H}}=\frac{\langle \mathrm{N}^{+}\rangle }{\langle \mathrm{H}^{+}\rangle}\ . \end{equation} The flux ratios $F/F_0(\mathrm{H}\beta)$ were calculated for the two infrared lines of $[\ion{N}{ii}]$ with the observed values of $\textit{F}$ from Table~\ref{table:2}. Using the dereddened H$\alpha$ flux, case-B recombination with $T_\mathrm{e}=6350$ K was assumed to calculate the H$\beta$ flux, adopting the effective recombination coefficient equations of Draine (\cite{draine11}). The ionic abundances $\mathrm{N}^{+}/\mathrm{H}^{+}$ were then derived using the package \textit{nebular}. The N/H abundance number ratio was calculated to be $(2.6\pm1.2)\times10^{-4}$, which is equivalent to a logarithmic value of 12 + log(N/H) = 8.41 $\pm$ 0.20. Considering the errors, this value is entirely compatible with that of Smith et al. (\cite{smith97_2}), which is 8.27 $\pm$ 0.05. It is significantly higher than the solar value (7.83, Grevesse et al. \cite{grev10}). \subsubsection{Mass of the ionized gas} An estimate of the ionized gas mass can be made from the H$\alpha$ and the radio emissions. The equations that are analytically derived in Vamvatira-Nakou et al. (\cite{vamv13}) are used for this calculation. 
For a spherical nebula the ionized mass in solar masses, taking into account the H$\alpha$ emission, is given by \begin{equation} \label{ionized mass H alpha_sphere} M_{i(\mathrm{H\alpha})}^{\mathrm{sphere}}=57.9\frac{1+4y_{+}}{\sqrt{1+y_{+}}}T_4^{(0.471+0.015\mathrm{ln}T_4)}\epsilon^{1/2} \theta^{3/2}D^{5/2}F^{1/2}_0(\mathrm{H}\alpha), \end{equation} where $\theta$ is the angular radius of the nebula ($R=\theta D$) in arcsec and $n_{\mathrm{H}^{+}}=n_{\mathrm{p}}$, $n_{\mathrm{He}^{+}}$, and $n_{\mathrm{He}^{++}}$ are the number densities of the ionized hydrogen, ionized helium, and doubly ionized helium, respectively. Assuming $n_{\mathrm{He^{++}}}=0$ and denoting $y_{+}=n_{\mathrm{He^{+}}}/n_{\mathrm{H^{+}}}$, we have $x_{\mathrm{e}}=n_{\mathrm{e}}/n_{\mathrm{p}}\simeq1+n_{\mathrm{He^{+}}}/n_{\mathrm{H^{+}}}=1+y_{+}$ and $\mu_{+}\simeq1+4\,n_{\mathrm{He^{+}}}/n_{\mathrm{H^{+}}}= 1+4y_{+}$. Considering now the radio flux and using the same formalism as above, the mass of a spherical nebula in solar masses is given by \begin{equation} \label{ionized mass radio_sphere} M_{i(\mathrm{radio})}^{\mathrm{sphere}}=5.82\times10^{-5}\frac{1+4y_{+}}{1+y_{+}}T_4^{0.175} \left(\frac{\nu}{4.9}\right)^{0.05}\epsilon^{1/2}\theta^{3/2}D^{5/2}S^{1/2}_{\nu}. \end{equation} In H$\alpha$ the nebula around AG Car is a shell with inner radius $\theta_{\mathrm{in}}$ = 11\arcsec and an average outer radius $\theta_{\mathrm{out}}$ = 20\arcsec. In the radio the nebula has approximately the same radii (Duncan and White \cite{dun02}). In this case the mass of the ionized shell nebula is given by \begin{equation} \label{ionized mass H alpha_shell} M_{i}^{\mathrm{shell}}=(\theta_{\mathrm{out}}^3-\theta_{\mathrm{in}}^3)^{1/2}\theta_{\mathrm{out}}^{-3/2}M_{i}^{\mathrm{sphere}} . 
\end{equation} The mass of the ionized shell nebula is thus $M_{i(\mathrm{H\alpha})}^{\mathrm{shell}} =6.9\pm2.8\ \mathrm{M}_{\odot}$ and $M_{i(\mathrm{radio})}^{\mathrm{shell}}=6.4\pm2.5\ \mathrm{M}_{\odot}$, with an average value of $M_{i}^{\mathrm{shell}} = 6.6\pm1.9\ \mathrm{M}_{\odot}$, assuming $\epsilon=1$. The assumption that the ionization of He is negligible ($y_{+} = 0$) was made because the central star has a temperature lower than 30000 K. This result is slightly higher, considering the uncertainties, than the mass of $4.2\ \mathrm{M}_{\odot}$ estimated by Nota et al. (\cite{nota92}, \cite{nota95}). \subsection{Photodissociation region characteristics} The fine structure emission lines $[\ion{O}{i}]$ 63, 146\ \mbox{$\mu$m} and $[\ion{C}{ii}]$ 158\ \mbox{$\mu$m} are among the most important coolants in PDRs (Hollenbach \& Tielens \cite{holl97}). Their detection in our spectrum may indicate the presence of a PDR in the nebula. On the other hand, a shock, which is the result of the interaction between the fast stellar wind and the slow expanding remnant of a previous evolutionary phase, could also photodissociate molecules and result in $[\ion{O}{i}]$ and $[\ion{C}{ii}]$ emission. However, the values of the calculated ratios of $[\ion{O}{i}]$ 63\ \mbox{$\mu$m}/$[\ion{O}{i}]$ 146\ \mbox{$\mu$m} and $[\ion{O}{i}]$ 63\ \mbox{$\mu$m}/$[\ion{C}{ii}]$ 158\ \mbox{$\mu$m} are in agreement with the PDR models of Kaufman et al. (\cite{kauf99}) and not with the shock models of Hollenbach and McKee (\cite{holl89}). In particular, the ratio $[\ion{O}{i}]$ 63\ \mbox{$\mu$m}/$[\ion{C}{ii}]$ 158\ \mbox{$\mu$m} is a diagnostic that discriminates between a PDR and a shock, as it is $<$ 10 in PDRs (Tielens and Hollenbach \cite{tie85}). Consequently, based on these ratios, we can conclude that a PDR and not a shock is present in the nebula around the LBV AG Car and that it is responsible for the $[\ion{O}{i}]$ and $[\ion{C}{ii}]$ emission. 
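Returning to the ionized-mass estimates, the shell masses quoted above can be reproduced from the spherical-nebula formulas and the shell correction; a sketch with the values adopted in the text ($y_{+}=0$, $\epsilon=1$):

```python
import math

# Sketch of the ionized-mass estimates: spherical-nebula masses from
# the H-alpha and radio fluxes, then the shell correction, with
# y_+ = 0 and epsilon = 1 as adopted in the text.
T4, D = 0.635, 6.0                  # T_e/1e4 K; distance in kpc
y_plus, eps = 0.0, 1.0
F0_Ha, S_nu, nu = 4.2e-10, 268.7, 4.9
th_in, th_out = 11.0, 20.0          # shell radii in arcsec

hefac = (1 + 4*y_plus) / math.sqrt(1 + y_plus)
M_Ha = (57.9 * hefac * T4**(0.471 + 0.015*math.log(T4))
        * math.sqrt(eps) * th_out**1.5 * D**2.5 * math.sqrt(F0_Ha))
M_radio = (5.82e-5 * (1 + 4*y_plus)/(1 + y_plus) * T4**0.175
           * (nu/4.9)**0.05 * math.sqrt(eps) * th_out**1.5 * D**2.5
           * math.sqrt(S_nu))
shell = math.sqrt(th_out**3 - th_in**3) / th_out**1.5   # shell correction
print(f"M(Ha) = {M_Ha*shell:.1f} Msun, M(radio) = {M_radio*shell:.1f} Msun")
```

The sketch returns $\approx$6.9 and $\approx$6.3 M$_{\odot}$ for the H$\alpha$- and radio-based shell masses, matching the quoted values.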
Photodissociation regions were detected in the nebula that surrounds the LBV HR Car (Umana et al. \cite{uman09}) and in the nebula around the LBV candidate HD 168625 (Umana et al. \cite{uman10}). Later on, the infrared study of the LBV WRAY 15-751 also revealed the presence of a PDR in the nebula that surrounds this star (Vamvatira-Nakou et al. \cite{vamv13}). The physical conditions in the PDR can be determined using these three infrared lines, but because of the vicinity of the bright Carina nebula, we have to check if these lines come entirely from the LBV nebula or if there is a significant contribution to the measured fluxes from the background. We therefore checked the spectra of the background taken at two different positions on the sky and found that the lines $[\ion{O}{i}]$ 63\ \mbox{$\mu$m}, 146\ \mbox{$\mu$m} come entirely from the nebula. However, the flux of the line $[\ion{C}{ii}]$ 158\ \mbox{$\mu$m} is contaminated by the $[\ion{C}{ii}]$ foreground/background emission. For the nebular spectrum discussed and analyzed in this section, the background has been subtracted, as mentioned in Sect.~\ref{sec:observations and data reduction}. Nevertheless, careful examination of the two off-source spectra shows that the background is strong and not uniform. The difference between the spectra of the two off positions induces an uncertainty of at least a factor of 2 on the $[\ion{C}{ii}]$ 158\ \mbox{$\mu$m} line flux. Hence, the measured $[\ion{C}{ii}]$ 158\ \mbox{$\mu$m} flux is unreliable and the mass of hydrogen in the PDR based on the $[\ion{C}{ii}]$ flux cannot be estimated. We note that the previous conclusion about the presence of a PDR in the nebula is still valid when background contamination is taken into account. 
The structure of the PDR is described by the density of the atomic hydrogen, $n_\mathrm{H^0}$, and the incident FUV radiation field, $G_0$, which can be calculated using the following equation (Tielens \cite{tie05}), where it is expressed in terms of the average interstellar radiation field, corresponding to a unidirectional radiation field of $1.6 \times 10^{-3}$ erg cm$^{-2}$ s$^{-1}$, \begin{equation} G_0 = 625\frac{L_{\star}\chi}{4\pi R^2} \; , \end{equation} where $L_{\star}$ is the stellar luminosity, $\chi$ is the fraction of this luminosity above 6 eV, which is $\sim0.7$ for an early-B star (Young Owl et al. \cite{young02}), and $\textit{R}$ is the distance from the star. For the PDR of the AG Car nebula, the incident FUV radiation field is then $G_0\simeq3.7\times10^4$, considering that $L_{\star}=10^{6.1}L_{\odot}$ (Sect.~\ref{sec:dust continuum emission}) and $\textit{R}$ = 0.7 pc, which is the radius of the ionized gas region surrounded by the PDR. This result can be used to constrain the density of the PDR. The diagnostic diagrams of the PDR models of Kaufman et al. (\cite{kauf99}, Figs. 4 and 5) give the ratios of the fluxes $F_{[\ion{O}{i}]63}/F_{[\ion{C}{ii}]158}$ and $F_{[\ion{O}{i}]145}/F_{[\ion{O}{i}]63}$ as a function of the density and the incident FUV radiation field. By using only the latter ratio and the calculated $G_0$ and considering the uncertainties, we can estimate the density of the PDR to be $\log n_\mathrm{H^0}\simeq$ 3, with a large uncertainty. To verify the consistency of the PDR analysis with the results of the dust nebula analysis, the dust temperature, $T_\mathrm{dust}$, can be estimated based on the radiative equilibrium, since the dust absorbs and re-emits the FUV radiation in the far-infrared. 
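Before turning to the dust temperature, the FUV field estimate above can be checked numerically; the solar luminosity and parsec conversions below are assumed standard constants.

```python
import math

# FUV field G_0 at the PDR, in units of the average interstellar
# radiation field (1.6e-3 erg cm^-2 s^-1, hence the factor 625).
# L_sun = 3.828e33 erg/s and 1 pc = 3.086e18 cm are assumed constants.
L_sun = 3.828e33
L_star = 10**6.1 * L_sun        # stellar luminosity, erg/s
chi = 0.7                       # fraction of luminosity above 6 eV
R = 0.7 * 3.086e18              # distance from the star, cm
habing = 1.6e-3                 # reference field, erg cm^-2 s^-1 (= 1/625)
G0 = chi * L_star / (4*math.pi*R**2) / habing
print(f"G0 = {G0:.1e}")
```

This yields $G_0 \approx 3.6 \times 10^4$, matching the quoted $G_0 \simeq 3.7 \times 10^4$ to within the rounding of the input constants.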
In the case of silicates (i.e., $\beta=2$), the dust temperature is given by (Tielens \cite{tie05}) \begin{equation} T_\mathrm{dust}=50\left(\frac{1\mu\mathrm{m}}{a}\right)^{0.06} \left(\frac{G_0}{10^4}\right)^{1/6} \mathrm{K\ \ for}\ T_\mathrm{dust} < 250\ \mathrm{K} \; . \end{equation} We obtain a dust temperature of $T_\mathrm{dust} = 71$ K, assuming a typical grain size of $a = 0.1\ \mu \mathrm{m}$ because the average cross-section is dominated by small grains. This result is in agreement with the results of the 2-Dust model (Sect.~\ref{sec:dust continuum emission}). \section{Discussion} \label{sec:discussion} The parameters of the LBV AG Car given in Table~\ref{table:3} summarize the measurements obtained in this work along with results taken from previous studies. The stellar parameters of luminosity, effective temperature, and distance are from Voors et al. (\cite{voo00}), Groh et al. (\cite{gro09}), Humphreys et al. (\cite{hum89_2}), and this work. The parameters for the shell include the radii, the expansion velocity (Smith \cite{smith91}), the kinematic age, the ionized gas electron density and the adopted electron temperature, the abundance ratios (N/O from Smith et al. (\cite{smith97_2}) and N/H from our study), and the measured masses of dust and gas. The \textit{Herschel}-PACS infrared images of the LBV AG Car reveal a dusty shell nebula that surrounds the central star. It is a clumpy ring with an inner radius of 0.4 pc and an outer radius of 1.2 pc. The H$\alpha$+$[\ion{N}{ii}]$ images show a gas shell nebula that coincides with the dust nebula, but seems to be slightly smaller and more elliptical. The nebula has bipolar morphology, a common feature among LBV nebulae (Weis \cite{weis01}, Lamers et al. \cite{lam01}). The nebula around AG Car lies in an empty cavity (Fig.~\ref{agcar_RGB}). 
If associated with the star, the cavity may correspond to a previous mass-loss event when the wind of the O-type progenitor formed a bubble, as in the case of WR stars (Marston \cite{mar96}). A similar case is the cavity observed around the LBV WRAY 15-751 (Vamvatira-Nakou et al. \cite{vamv13}), though the latter is much larger. Velocity mapping of the surrounding interstellar gas would be needed to confirm this hypothesis and derive constraints on the O-star evolutionary phase. The results of our study point to a shell nebula of ionized gas and dust, surrounded by a thin photodissociation region that is heated by an average early-B star. The dust mass-loss rate is about (1.8 $\pm$ 0.5) $\times\ 10^{-5}$ M$_{\odot}$ yr$^{-1}$, considering the duration of the enhanced mass-loss episode that was estimated from the kinematic age of the inner and outer radii of the nebula. Because we cannot calculate the neutral gas mass, we do not know the total gas mass and must assume a gas-to-dust mass ratio in order to estimate the total mass-loss rate. A typical value for this ratio is 100 and so the gas mass will be $\sim$ 20 M$_{\odot}$. In the study of the nebula around the LBV WRAY 15-751 (Vamvatira-Nakou et al. \cite{vamv13}), this ratio was calculated to be about 40. If we assume a similar value, the gas mass will be about 10 M$_{\odot}$, higher than but comparable to the mass of the ionized gas. Adopting the average value, the gas mass of the nebula around AG Car is about 15 M$_{\odot}$ with an uncertainty of about 30\%. The total mass-loss rate is then estimated to be (1.4 $\pm$ 0.5) $\times\ 10^{-3}$ M$_{\odot}$ yr$^{-1}$. 
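The quoted mass-loss rates can be reproduced from the kinematic ages of the inner and outer radii; a sketch using the shell parameters of Table~\ref{table:3}, assuming a constant expansion velocity ($t = r/{\rm v}_{\mathrm{exp}}$):

```python
# Sketch of the mass-loss-rate estimates: the ejection duration is the
# difference of the kinematic ages of the outer and inner radii
# (constant expansion velocity assumed), and the rates follow from the
# dust (0.2 Msun) and total gas (~15 Msun) masses quoted in the text.
pc_km = 3.086e13                     # km per parsec (assumed constant)
yr_s = 3.156e7                       # seconds per year
v_exp = 70.0                         # expansion velocity, km/s
t_in = 0.4 * pc_km / v_exp / yr_s    # kinematic age of inner radius, yr
t_out = 1.2 * pc_km / v_exp / yr_s   # kinematic age of outer radius, yr
dt = t_out - t_in                    # duration of the mass-loss episode
mdot_dust = 0.2 / dt                 # dust mass-loss rate, Msun/yr
mdot_total = 15.0 / dt               # total mass-loss rate, Msun/yr
print(f"dt = {dt:.2e} yr, dust rate = {mdot_dust:.1e}, total = {mdot_total:.1e} Msun/yr")
```

The episode duration comes out at $\sim 1.1 \times 10^4$ yr, giving $\approx 1.8 \times 10^{-5}$ M$_{\odot}$ yr$^{-1}$ in dust and $\approx 1.3 \times 10^{-3}$ M$_{\odot}$ yr$^{-1}$ in total, consistent with the quoted values.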
\begin{table}[t] \caption{Parameters of the LBV AG Car and its shell nebula.} \label{table:3} \centering \begin{tabular}{llc}\hline\hline\\[-0.10in] Star & log $L/L_{\odot}$ & 6.1 $\pm$ 0.2 \\ & $T_{\mathrm{eff}}$ (K) & 20000 $\pm$ 3000 \\ & $D$ (kpc) & 6.0 $\pm$ 1.0 \\ Shell & \textit{r}$_{\mathrm{in}}$ (pc) & 0.4 \\ & \textit{r}$_{\mathrm{out}}$ (pc) & 1.2 \\ & ${\rm v}_{\mathrm{exp}}$ (km s$^{-1}$) & 70 \\ & $t_{\mathrm{kin}}$ (10$^{4}$ yr) & 1.7 \\ & $n_{\mathrm{e}}$ (cm$^{-3}$) & 160 $\pm$ 90 \\ & $T_{\mathrm{e}}$ (K) & 6350 $\pm$ 400 \\ & N/O & 5.7 $\pm$ 2.2 \\ & 12+log N/H & 8.41 $\pm$ 0.20 \\ & $M_{\mathrm{dust}}$ (M$_{\odot}$) & 0.20 $\pm$ 0.05 \\ & $M_{\mathrm{ion. gas}}$ (M$_{\odot}$) & 6.6 $\pm$ 1.9 \\ \hline \end{tabular} \end{table} It is interesting to compare this mass-loss rate, which corresponds to the period during which the ejection took place, with recent mass-loss rates. Leitherer et al. (\cite{lei94}) found $\dot{M}(H)$ = 0.6 $\times 10^{-5}$ to 4.0 $\times 10^{-5}$ M$_{\odot}$ yr$^{-1}$ in 1990-1992 when the star luminosity was rising, showing no significant dependence on the luminosity phase. Groh et al. (\cite{gro09}) studied the fundamental parameters of AG Car during the last two periods of minimum, 1985-1990 and 2000-2003, and calculated a mass-loss rate from 1.5 $\times 10^{-5}$ to 6.0 $\times 10^{-5}$ M$_{\odot}$ yr$^{-1}$. The mass-loss rate during the nebula ejection phase thus appears roughly 50 times higher than in the present evolutionary phase. The N/O ratio of 5.7 $\pm$ 2.2 calculated by Smith et al. (\cite{smith97_2}) points to the presence of highly processed material because it is much higher than the solar ratio. It is the highest value of N/O among the known LBVs, except for the case of $\eta$ Car (Smith et al. \cite{smith98}). The 12+log N/H abundance of 8.41 $\pm$ 0.20, calculated on the basis of our observations, is enhanced by a factor of 4.3 with respect to the solar abundance. 
It is lower than the value for the LBV $\eta$ Car and higher than the values reported for all other LBVs (Smith et al. \cite{smith98}). \begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics*{agcar_nsomdot.ps}} \caption{Evolution of the N/O surface abundance ratio as a function of the mass-loss rate for a 55 M$_{\odot}$ star of solar metallicity and for initial rotation rates $\Omega/\Omega_{\rm{crit}}$ from 0 to 0.4, using the models of Ekstr\"om et al. (\cite{eks12}). The dashed lines correspond to the adopted value of N/O, with its errors, and the lower limit for the mass-loss rate. The thicker lines emphasize the part of the tracks compatible with the measurements. For clarity, the tracks are stopped during the He burning phase (data point n$^{\rm o}$ 195 in Ekstr\"om et al. \cite{eks12}). } \label{agcar_nsomdot} \end{figure} Groh et al. (\cite{gro09}) calculated the abundances of several chemical elements at the surface of the star. The comparison of the nebular abundances with the surface ones shows that the N/O abundance ratio of the nebula is much lower than the surface value of 39$^{+28}_{-18}$. As the authors mention, this is compatible with the idea that the nebulae around massive stars contain material that is less processed than the material of the stellar photosphere. Smith et al. (\cite{smith97_2}), based on a detailed abundance study, argued that the AG Car nebula was formed from material ejected during an RSG phase. This was also the suggestion of Voors et al. (\cite{voo00}) based on their analysis of the dusty nebula, but Lamers et al. (\cite{lam01}), in their study of the chemical composition of LBVs, concluded that the ejection occurred in a blue supergiant (BSG) phase as this can better explain the high expansion velocity. Moreover, the problem with an ejection during an RSG phase is the lack of luminous RSGs in the HR diagram. Based on our observations as well as on evolutionary models (Ekstr\"{o}m et al. 
\cite{eks12}), we can constrain the evolutionary path of the central star and the epoch at which the nebula was ejected, using the abundance ratios, the measured mass-loss rate, and the timescale of the ejection as constraints. The only available abundance ratio that can be used is the N/O ratio. The N/H abundance ratio is indeed sensitive to inhomogeneities of the nebula (Lamers et al. \cite{lam01}). It should also be stressed that the evolutionary models for massive stars are very uncertain at the post-main-sequence phases as they do not include any eruptive event, which means that the mass-loss rate recipes are poorly known (Smith \cite{smithN_14}). A constraint on the initial rotational velocity of AG Car can be imposed, based on the results of Groh et al. (\cite{gro11}). In their study of AG Car during two periods of visual minimum, they concluded that the progenitor did not have a high initial rotational velocity, although they measured the current projected rotational velocity to be 220 km s$^{-1}$. Their conclusions were based on the comparison with the evolutionary paths of Meynet and Maeder (\cite{mey03}). The luminosity and effective temperature of the star were found to be compatible with the evolutionary tracks of a nonrotating star with initial mass between 40 and 60 M$_{\odot}$. \begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics*{agcar_hr.ps}} \caption{Evolutionary path in the HR diagram of a 55 M$_{\odot}$ star of solar metallicity and for initial rotation rates $\Omega/\Omega_{\rm{crit}}$ from 0 to 0.4, using the models of Ekstr\"om et al. (\cite{eks12}). The thicker lines emphasize the part of the tracks compatible with the N/O abundance ratio and the mass-loss rate. For clarity, the tracks are stopped during the He burning phase (data point n$^{\rm o}$ 195 in Ekstr\"om et al. \cite{eks12}).} \label{agcar_hr} \end{figure} The total mass-loss rate, estimated during the nebular ejection, is quite high but uncertain. 
A lower limit of the mass-loss rate can be considered, based on the sum of the dust mass and the ionized gas mass that are well determined in the nebula ring. Considering the errors, this lower limit is $\log \dot{M}$ = $-$3.4, where $\dot{M}$ is in M$_{\odot}$~yr$^{-1}$. This result, along with the nebular N/O abundance ratio, which is assumed to be the surface abundance ratio at the time of the ejection, was compared to the computed evolution of the mass-loss rate versus this abundance ratio using the models of Ekstr\"{o}m et al. (\cite{eks12}) for stars of initial masses that correspond to the high stellar luminosity of AG Car, considering four different cases of stellar rotation from no rotation to a rotation rate of $\Omega/\Omega_{\rm{crit}}$=0.4. In Fig.~\ref{agcar_nsomdot}, the evolution of the mass-loss rate versus the N/O abundance ratio is illustrated for a 55 M$_{\odot}$ star from the models of Ekstr\"{o}m et al. (\cite{eks12}). The measured N/O value, with its errors, and the measured lower limit of the mass-loss rate are plotted with dashed lines. The part of these tracks compatible with the measurements is emphasized with thicker lines. To identify the evolutionary phase of the star to which this corresponds, the same parts of the tracks are reported in the HR diagram (Fig.~\ref{agcar_hr}). Our results are compatible with the evolutionary tracks of the models of Ekstr\"{o}m et al. (\cite{eks12}) for a star of 55 M$_{\odot}$ with solar metallicity and medium rotational velocity. In this case, the ejection of the nebula occurs in a post-main-sequence short-lived episode of high mass loss in agreement with the observations. We note that in such short-lived episodes, the mass-loss rate could be higher than computed from the model since the model does not account for eruptive events. For a star of 57 M$_{\odot}$ the only compatible evolutionary track is the nonrotating one. 
Consequently, we can conclude that the star may have a low initial rotational velocity as suggested by Groh et al. (\cite{gro11}). For a mass of 50 M$_{\odot}$, the only compatible evolutionary track is the one rotating at $\Omega/\Omega_{\rm{crit}}$ = 0.4. For a mass of 60 M$_{\odot}$, the N/O ratio is reached on the main sequence where the mass-loss rate is much smaller than our lower limit, such that no track is compatible with both the observed N/O ratio and a short-lived ($\lesssim 2 \times 10^{4}$ yr) high mass-loss event. A star with initial mass between 40 and 60 M$_{\odot}$ immediately evolves to a BSG without passing through the RSG phase. It then evolves towards the LBV and the WR phase (Meynet et al. \cite{mey11}). Groh et al. (\cite{gro14}) performed a detailed study on the evolutionary stages of a nonrotating star of 60 M$_{\odot}$ with solar metallicity, combining the evolutionary models of Ekstr\"{o}m et al. (\cite{eks12}) with atmospheric models. Before the WR phase, the evolutionary tracks of a 55 M$_{\odot}$ star (Fig.~\ref{agcar_hr}) with little rotation are very similar to the track of Groh et al. (\cite{gro14}) for a 60 M$_{\odot}$ star without rotation in terms of effective temperature and luminosity. Making use of this result points to an ejection of the nebula during the LBV evolutionary phase of AG Car and more precisely during a cool LBV phase. Compared to the results obtained for WRAY 15-751, a lower-luminosity LBV that passed through an RSG phase where the ejection of its nebula took place (Vamvatira-Nakou et al. \cite{vamv13}), this indicates that depending on their luminosity, LBV nebulae can be ejected at different evolutionary stages. It should be mentioned that de Freitas Pacheco et al. 
(\cite{pac92}) compared the AG Car nebular properties, based on their spectroscopic observations, with the evolutionary models available at that time, and concluded that they were consistent with the properties of a star of 60 M$_{\odot}$ at the beginning of the LBV phase. The model of the dust nebula in Sect.~\ref{sec:dust continuum emission} showed that large dust grains are necessary to reproduce the observed infrared SED, in agreement with the results of Voors et al. (\cite{voo00}). This was also the case for the dust nebulae around the LBV WRAY 15-751 (Vamvatira-Nakou et al. \cite{vamv13}) and the yellow hypergiant Hen 3-1379, a possible pre-LBV (Hutsem\'ekers et al. \cite{hut13}). Large grains ($a > 5\ \mu$m) have also been detected in supernovae (Gall et al. \cite{gall14}). In the case of LBVs, the stellar temperature is most often too high for dust formation to take place, so that dust production can only happen during large eruptions, when a pseudo-photosphere with a sufficiently low temperature is formed. As shown by Kochanek (\cite{koch11, koch14}), large dust grains can be produced during LBV eruptions and other transients when the conditions of high mass-loss rate and low pseudo-photosphere temperature are encountered. According to these models, to produce grains larger than 10 $\mu$m, the central star should have gone through a great outburst, with a pseudo-photosphere temperature as low as 4000 K, i.e., much lower than during normal eruptions. During this event, the mass-loss rate is expected to be as high as 10$^{-2}$ M$_{\odot}$yr$^{-1}$. For AG Car, this would require a duration of the event shorter than estimated from the shell thickness, which is possible if the shell thickness is mostly due to a spread in velocity (Kochanek \cite{koch11}). 
\section{Conclusions} \label{sec:conclusions} The analysis of $\textit{Herschel}$ PACS imaging and spectroscopic observations of the nebula around the LBV AG Car, along with optical imaging data, has been presented. The PACS images show that the dust nebula appears as a clumpy ring. It coincides with the H$\alpha$ nebula, but extends farther out. The determination of the dust parameters of the nebula was performed by dust modeling with the help of a two-dimensional radiative transfer code. This model points to the presence of both a small and a large grain population of pyroxenes with a 50/50 Fe to Mg abundance. Large grains ($a \gtrsim$ 10 $\mu$m) are needed to reproduce the observational data. The infrared spectrum of the nebula consists of forbidden emission lines over a dust continuum, without the presence of any other dust feature. These lines reveal the presence of ionized and photodissociation regions that are mixed with the dust. The derived gas abundances show a strong N/O and N/H enhancement as well as an O/H depletion, which is expected for massive evolved stars enriched with CNO-cycle processed material. The evolutionary path of the star and the epoch at which the nebula was ejected were constrained using the abundances, the mass-loss rate, and the available evolutionary models. The results point to a nebular ejection during a cool LBV evolutionary phase of a star with initial mass of about 55 M$_{\odot}$ and with little rotation. \begin{acknowledgements} We thank the referee, Rens Waters, for his careful reading and his constructive suggestions that greatly improved the manuscript. C.V.N., D.H., P.R., N.L.J.C., Y.N. and M.A.T.G. acknowledge support from the Belgian Federal Science Policy Office via the PRODEX Programme of ESA. The Li\`ege team also acknowledges support from the FRS-FNRS (Comm. Fran{\c c}. de Belgique). 
PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). Data presented in this paper were analyzed using “HIPE”, a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, as well as NASA/ADS and SIMBAD (CDS/Strasbourg) databases. \end{acknowledgements}
\section{Introduction} Current machine learning systems achieve remarkable results in several challenging tasks, but are limited by the amount of human supervision required. Leveraging similarity among different problems is widely acknowledged to be a key approach to reducing the need for supervised data. Indeed, this idea is at the basis of multi-task learning, where the joint solution of different problems (tasks) has the potential to exploit task relatedness (structure) to improve learning accuracy. This perspective has motivated a variety of methods, including frequentist \cite{micchelli04,argyriou08,argyriou08b} and Bayesian methods (see e.g. \cite{alvarez12} and references therein), with connections to structured learning~\cite{bakir07,tsochantaridis04}. \\ The focus of our study is the development of a general regularization framework to learn multiple tasks as well as their structure. Following~\cite{micchelli04,evgeniou05} we consider a setting where tasks are modeled as the components of a vector-valued function and their structure corresponds to the choice of suitable functional spaces. Exploiting the theory of reproducing kernel Hilbert spaces for vector-valued functions (RKHSvv)~\cite{micchelli04}, we consider and analyze a flexible regularization framework, within which a variety of previously proposed approaches can be recovered as special cases, see e.g.~\cite{jacob08,lozano10,minh11,zhang10,dinuzzo11,sindhwani13}. Our main technical contribution is a unifying study of the minimization problem corresponding to such a regularization framework. More precisely, we devise an optimization approach that can efficiently compute a solution and for which we prove convergence under weak assumptions. Our approach is based on a barrier method combined with block coordinate descent techniques \cite{tseng01,razaviyayn13}. 
In this sense our analysis generalizes the results in \cite{argyriou08}, where a low-rank assumption was considered; the extension is not straightforward, however, since we consider a much larger class of regularization schemes (any convex penalty). To our knowledge, this is the first result in multi-task learning proving the convergence of alternating minimization schemes for such a general family of problems.\\ The RKHSvv setting allows us to deal naturally with both linear and non-linear models, and the approach we propose provides a general computational framework for learning output kernels as formalized in \cite{dinuzzo11}.\\ The rest of the paper is organized as follows: in Sec.~\ref{sec:RKHSvv} we review basic ideas of regularization in RKHSvv. In Sec.~\ref{sec:unified_perspective} we discuss the equivalence of different approaches to encoding known structures among multiple tasks. In Sec.~\ref{sec:learn_joint} we discuss a general framework for learning multiple tasks and their relations, where we consider a wide family of structure-inducing penalties and study an optimization strategy to solve the associated problems. This setting allows us, in Sec.~\ref{sec:examples}, to recover several previous methods as special cases. Finally, in Sec.~\ref{sec:experiments} we evaluate the performance of the proposed optimization method. \paragraph{Notation.} With $S^n_{++} \subset S^n_+ \subset S^n \subset \mathbb{R}^{n \times n}$ we denote respectively the spaces of positive definite, positive semidefinite (PSD) and symmetric $n \times n$ real-valued matrices. $O^n$ denotes the space of orthogonal $n \times n$ matrices. For any square matrix $M\in\mathbb{R}^{n \times n}$ and $p\geq1$, we denote by $\|M\|_p = (\sum_{i=1}^n \sigma_i(M)^p)^{1/p}$ the $p$-Schatten norm of $M$, where $\sigma_i(M)$ is the $i$-th largest singular value of $M$. For any $M\in\mathbb{R}^{n \times m}$, $M^\top$ denotes the transpose of $M$. 
For any PSD matrix $A\in S_+^n$, $A^\dagger$ denotes the pseudoinverse of $A$. We denote by $I_n\in S_{++}^n$ the $n \times n$ identity matrix. The notation $\ensuremath{\text{\rm Ran}}(M)\subseteq\mathbb{R}^m$ identifies the range of a matrix $M\in\mathbb{R}^{m \times n}$, i.e. the span of its columns. \section{Background}\label{sec:RKHSvv} We study the problem of jointly learning multiple tasks by modeling individual task-predictors as the components of a vector-valued function. Assume we have $T$ supervised scalar learning problems (or tasks), each with a ``training'' set of input-output observations $\mathcal{S}_t = \{(x_{i}^{(t)},y_{i}^{(t)})\}_{i=1}^{n_t}$ with $x_{i}^{(t)} \in \mathcal{X}$ the input space and $y_{i}^{(t)}\in\mathcal{Y}$ the output space\footnote{To avoid clutter in the notation, we have restricted ourselves to the typical situation where all tasks share the same input and output spaces, i.e. $\mathcal{X}_t = \mathcal{X}$ and $\mathcal{Y}_t \subseteq \mathbb{R}$.}. Given a loss function $\mathcal{L}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}_+$ that measures the per-task prediction errors, we want to solve the following joint regularized learning problem \begin{equation}\label{eq:learning_problem} \underset{f\in\mathcal{H}}{\text{minimize}} \ \ \sum_{t=1}^T \frac{1}{n_t}\sum_{i=1}^{n_t} \mathcal{L}(y_{i}^{(t)},f_t(x_{i}^{(t)})) + \lambda \|f\|_\mathcal{H}^2 \end{equation} where $\mathcal{H}$ is a Hilbert space of vector-valued functions $f:\mathcal{X} \to \mathcal{Y}^T$ with scalar components $f_t:\mathcal{X}\to\mathcal{Y}$. In order to define a suitable space of hypotheses $\mathcal{H}$, in this section we briefly recall concepts from the theory of reproducing kernel Hilbert spaces for vector-valued functions (RKHSvv) and the corresponding regularization theory, which plays a key role in our work. 
In particular, we focus on a class of reproducing kernels (known as separable kernels) that can be designed to encode specific task structures (see~\cite{evgeniou05,argyriou13} and Sec.~\ref{sec:unified_perspective}). Interestingly, separable kernels are related to ideas such as defining a metric on the output space or a label encoding in multi-label problems (see Sec.~\ref{sec:unified_perspective}). \begin{remark}[Multi-task and multi-label learning] Multi-label learning is a class of supervised learning problems in which the goal is to associate input examples with a label or a set of labels chosen from a discrete set. In general, due to the discrete nature of the output space, these problems cannot be solved directly; hence, a so-called {\it surrogate} problem is often introduced, which is computationally tractable and whose solution allows one to recover the solution of the original problem~\cite{steinwart08,bartlett06,mroueh12}.\\ Multi-label learning and multi-task learning are strongly related. Indeed, surrogate problems typically consist of a set of distinct supervised learning problems (or tasks) that are solved simultaneously and therefore have a natural formulation in the multi-task setting. For instance, in multi-class classification problems the ``One vs All'' strategy is often adopted, which consists of solving a set of binary classification problems, one for each class. \end{remark} \subsection{Learning Multiple Tasks with RKHSvv} In the scalar setting, reproducing kernel Hilbert spaces have proved to be a powerful tool for machine learning applications. Interestingly, the theory of RKHSvv and the corresponding Tikhonov regularization scheme closely follow the derivation in the scalar case. \begin{definition} Let $(\mathcal{H},\langle\cdot,\cdot\rangle_{\mathcal{H}})$ be a Hilbert space of functions from $\mathcal{X}$ to $\mathbb{R}^T$. 
A symmetric, positive definite, matrix-valued function $\Gamma: \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{T \times T}$ is called a reproducing kernel for $\mathcal{H}$ if for all $x \in \mathcal{X}, c \in \mathbb{R}^T$ and $f \in \mathcal{H}$ we have that $\Gamma(x,\cdot)c \in \mathcal{H}$ and the following reproducing property holds $ \langle f(x), c\rangle_{\mathbb{R}^T} = \langle f,\Gamma(x,\cdot)c\rangle_{\mathcal{H}}. $ \end{definition} In analogy to the scalar setting, it can be proved (see~\cite{micchelli04}) that the Representer Theorem also holds for regularization in RKHSvv. In particular, any solution of the learning problem introduced in Eq.~\eqref{eq:learning_problem} can be written in the form \begin{equation} f(x) = \sum_{t = 1}^T \sum_{i=1}^{n_t} \Gamma(x,x_{i}^{(t)}) c_{i}^{(t)} \end{equation} with $c_i^{(t)} \in \mathbb{R}^T$ coefficient vectors.\\ The choice of the kernel $\Gamma$ induces a joint representation of the inputs as well as a structure among the output components~\cite{alvarez12}; in the rest of the paper we focus on so-called separable kernels, where these two aspects are factorized. In Section~\ref{sec:learn_joint}, we will see how separable kernels provide a natural way to learn the tasks structure as well as the tasks themselves. \subsection{Separable Kernels} Separable (reproducing) kernels are functions of the form $\Gamma(x,x')=k(x,x')A$ \ $\forall x,x'\in\mathcal{X}$, where $k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ is a scalar reproducing kernel and $A\in S_+^T$ is a positive semi-definite (PSD) matrix. In this case, the representer theorem allows us to rewrite problem~\eqref{eq:learning_problem} in the more compact matrix notation \begin{equation}\label{eq:learning_problem_matrix}\tag{$\mathcal{P}$} \underset{C\in\mathbb{R}^{n \times T}}{\text{minimize}} \ \ V(Y,KCA) + \lambda \ tr(AC^\top K C). 
\end{equation} Here $Y\in\mathbb{R}^{n \times T}$ is a matrix with $n = \sum_{t=1}^T n_t$ rows containing the output points; $K\in S_+^n$ is the empirical kernel matrix associated to $k$ and $V:\mathbb{R}^{n \times T} \times \mathbb{R}^{n \times T} \to \mathbb{R}_+$ generalizes the loss in~\eqref{eq:learning_problem} and consists of a linear combination of the entry-wise application of $\mathcal{L}$. Notice that this formulation also accounts for the situation where not all training outputs $y^{(t)}$ are observed when a given input $x \in \mathcal{X}$ is provided: in this case the functional $V$ assigns zero weight to the loss values of those entries of $Y$ (and the associated entries of $KCA$) that are not available in training.\\ Finally, the second term in~\eqref{eq:learning_problem_matrix} follows by observing that, for all $f\in\mathcal{H}$ of the form $f(\cdot) =\sum_{i=1}^n k(x_i,\cdot)A c_i$, the squared norm can be written as $\|f\|_\mathcal{H}^2=\sum_{i,j}^n k(x_i,x_j) c_i^\top A c_j = tr(AC^\top KC)$, where $C\in\mathbb{R}^{n \times T}$ is the matrix whose $i$-th row is the coefficient vector $c_i\in\mathbb{R}^T$ of $f$. Notice that we have re-indexed $i$ to run over $\{1,\dots,n\}$ to ease the notation. \subsection{Incorporating Known Tasks Structure}\label{sec:unified_perspective} Separable kernels provide a natural way to incorporate the task structure when the latter is known a priori. This strategy is quite general, and in the following we comment on how the matrix $A$ can be chosen to recover several previously proposed multi-task methods in contexts such as regularization, coding/embeddings or output metric learning, postponing a more detailed discussion to the supplementary material. These observations motivate the extension, in Sec.~\ref{sec:learn_joint}, of the learning problem~\eqref{eq:learning_problem_matrix} to a setting where it is possible to infer $A$ from the data. 
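The norm identity used above, $\|f\|_{\mathcal{H}}^2 = tr(AC^\top KC)$, is easy to check numerically. The following sketch (a toy example with a Gaussian scalar kernel and random data; all sizes and parameters are illustrative assumptions) compares the trace expression against the explicit double sum $\sum_{i,j} k(x_i,x_j)\, c_i^\top A c_j$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 4                      # number of inputs and of tasks (toy sizes)

# Scalar Gaussian kernel (an illustrative choice; any scalar kernel works).
X = rng.standard_normal((n, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)             # empirical kernel matrix K_ij = k(x_i, x_j)

# Random PSD structure matrix A and coefficient matrix C.
B = rng.standard_normal((T, T))
A = B @ B.T
C = rng.standard_normal((n, T))

# ||f||_H^2 via the compact trace expression ...
norm_trace = np.trace(A @ C.T @ K @ C)
# ... and via the explicit double sum over input pairs.
norm_sum = sum(K[i, j] * C[i] @ A @ C[j] for i in range(n) for j in range(n))

assert np.isclose(norm_trace, norm_sum)
```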
\paragraph{Regularizers.} Task relations can be enforced by devising suitable regularizers~\cite{evgeniou05}. Interestingly, for a large class of such methods it can be shown that this is equivalent to a choice of the matrix $A$ (or rather its pseudoinverse) \cite{micchelli04}. If we consider the squared norm of a function $f = \sum_{i=1}^n k(x_i,\cdot)Ac_i \in\mathcal{H}$ we have (see~\cite{evgeniou05}) \begin{equation} \|f\|_\mathcal{H}^2 = \sum_{t,s=1}^T A^\dagger_{ts} \langle f_t, f_s \rangle_{\mathcal{H}_k} \end{equation} where $A_t$ is the $t$-th column of $A$, $\mathcal{H}_k$ is the RKHS associated to the scalar kernel $k$ and $f_t= \sum_{i=1}^n k(x_i,\cdot) A_t^\top c_i \in\mathcal{H}_k$ is the $t$-th component of $f$. The above equation suggests interpreting $A^\dagger$ as the matrix that models the structural relations between tasks by directly coupling different predictors. For instance, by setting $A^\dagger= I_T + \gamma (\mathbf{1}\mathbf{1}^\top)/T$, with $\mathbf{1}\in\mathbb{R}^T$ the vector of all $1$s, the parameter $\gamma$ controls the variance $\sum_{t=1}^T \|\bar{f} - f_t\|_{\mathcal{H}_k}^2$ of the tasks with respect to their mean $\bar{f}=\frac{1}{T} \sum_{t=1}^T f_t$. If we have access to some notion of similarity among tasks in the form of a graph with adjacency matrix $W\in S^T$, we can consider the regularizer $\sum_{t,s=1}^T W_{t,s} \|f_t - f_s\|_{\mathcal{H}_k}^2 + \gamma \sum_{t=1}^T \|f_t\|_{\mathcal{H}_k}^2$, which corresponds to $A^\dagger=L + \gamma I_T$ with $L$ the graph Laplacian induced by $W$. 
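The correspondence between the graph regularizer and $A^\dagger = L + \gamma I_T$ can be checked in a finite-dimensional stand-in for $\mathcal{H}_k$ (a linear kernel, so that $\langle f_t, f_s\rangle_{\mathcal{H}_k} = \theta_t^\top \theta_s$ for task weight vectors $\theta_t$). Note the factor $1/2$ below, which absorbs the double counting of each edge in the symmetric sum; this convention depends on how the Laplacian is defined:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 5, 7                       # number of tasks, finite-dim feature space

# Symmetric adjacency W (a toy task-similarity graph) and its Laplacian.
W = rng.random((T, T)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W
gamma = 0.3

# Linear-kernel stand-in for H_k: each task predictor f_t is a vector theta_t.
Theta = rng.standard_normal((T, d))

# Graph regularizer, with the 1/2 factor absorbing edge double counting.
graph_pen = 0.5 * sum(W[t, s] * np.sum((Theta[t] - Theta[s]) ** 2)
                      for t in range(T) for s in range(T))
ridge_pen = gamma * np.sum(Theta ** 2)

# Same quantity through the structure matrix A^dagger = L + gamma * I_T.
A_dag = L + gamma * np.eye(T)
quad_form = np.trace(Theta.T @ A_dag @ Theta)

assert np.isclose(graph_pen + ridge_pen, quad_form)
```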
Clearly, replacing the standard inner product $\langle y,y' \rangle_{\mathbb{R}^T}$ between two output points $y,y'\in\mathcal{Y}^T$ amounts to choosing a different inner product $\langle y,y'\rangle_\Theta = \langle y, \Theta y' \rangle_{\mathbb{R}^T}$ for some positive definite matrix $\Theta \in S_{++}^T$. This can be directly related to the choice of a suitable separable kernel. In particular, for the least squares loss function a direct equivalence holds between choosing the metric deformation associated to a $\Theta\in S_{++}^T$ together with the separable kernel $k(\cdot,\cdot)I_T$, and using the canonical metric (i.e. with $\Theta=I_T$ the identity) together with the kernel $k(\cdot,\cdot)\Theta$. The details of this equivalence can be found in the supplementary material. \paragraph{Output Representation.} The tasks structure can also be modeled by designing an ad-hoc embedding for the output space. This approach is particularly useful in multi-label scenarios, where output embeddings can be designed to encode complex structures such as trees, strings, or graphs~\cite{fergus10,joachims09,crammer00}. Interestingly, in these cases, or more generally whenever the embedding map $L:\mathcal{Y}^T\to\widetilde{\mathcal{Y}}$ from the original to the new output space is linear, it is possible to show that the learning problem with the new code is equivalent to~\eqref{eq:learning_problem} for a suitable choice of separable kernel with $A=L^\top L$. We refer again to the supplementary material for the details of this equivalence. \section{Learning the Tasks and their Structure}\label{sec:learn_joint} Clearly, an interesting setting occurs when knowledge of the tasks structure is not available and therefore it is not possible to design a suitable separable kernel. In this case a favorable approach is to infer the tasks relations directly from the data. 
To this end we propose to consider the following extension of problem~\eqref{eq:learning_problem_matrix} \begin{equation}\label{eq:nonconvex}\tag{$\mathcal{Q}$} \begin{aligned} \underset{C\in\mathbb{R}^{n \times T}, A \in S_+^T}{\text{minimize}} & \ \ V(Y,KCA) + \lambda tr(AC^\top KC) + F(A), \end{aligned} \end{equation} where the penalty $F:S_+^T\to\mathbb{R}_+$ is designed to learn specific task structures encoded in the matrix $A$. The above regularization is general enough to encompass a large number of previously proposed approaches by simply specifying a choice of the scalar kernel and the penalty $F$. A detailed discussion of these connections is postponed to Section~\ref{sec:examples}; in this section, we focus on computational aspects. Throughout, we restrict ourselves to convex loss functions $V$ and convex (and coercive) penalties $F$. In this case, the objective function in~\eqref{eq:nonconvex} is separately convex in $C$ and $A$ but not jointly convex. Hence, block coordinate methods, which are often used in practice, e.g. alternating minimization over $C$ and $A$, are not guaranteed to converge to a global minimum. Our study provides a general framework to provably compute a solution to problem~\eqref{eq:nonconvex}. First, in Section \ref{CMBM}, we prove our main results, providing a characterization of the solutions of Problem~\eqref{eq:nonconvex} and studying a barrier method to cast their computation as a convex optimization problem. Second, in Section \ref{BCM}, we discuss how block coordinate methods can be naturally used to solve such a problem, analyze their convergence properties and discuss some general cases of interest. 
\subsection{Characterization of Minima and A Barrier Method}\label{CMBM} We begin, in Section \ref{QR}, by providing a characterization of the solutions to Problem \eqref{eq:nonconvex}, showing that it has an equivalent formulation in terms of the minimization of a convex objective function, namely Problem \eqref{eq:convex_equivalence}. Depending on the behavior of the objective function on the boundary of the optimization domain, Problem \eqref{eq:convex_equivalence} might not be solvable using standard optimization techniques. This possible issue motivates the introduction, in Section \ref{RSd}, of a barrier method; a family of ``perturbed'' convex programs is introduced whose solutions are shown to converge to those of Problem \eqref{eq:convex_equivalence} (and hence of the original \eqref{eq:nonconvex}). \subsubsection{An Equivalent Formulation for~\eqref{eq:nonconvex}}\label{QR} The objective functional in~\eqref{eq:nonconvex} is not convex; therefore, in principle, it is hard to find a global minimizer. As it turns out, however, it is possible to circumvent this issue and efficiently find a global solution to~\eqref{eq:nonconvex}. The following result represents a first step in this direction. \begin{theorem}\label{teo:convex_equivalence} Let $K\in S_+^n$ and consider the convex set $$ \mathcal{C}=\left\{(C,A) \in \mathbb{R}^{n \times T} \times S^T_+ \ | \ \ensuremath{\text{\rm Ran}}(C^\top KC) \subseteq \ensuremath{\text{\rm Ran}}(A) \right\}. $$ Then, for any $F:S_+^T\to \mathbb{R}_+$ convex and coercive, the problem \begin{equation}\label{eq:convex_equivalence}\tag{$\mathcal{R}$} \begin{aligned} \underset{(C,A) \ \in \ \mathcal{C}}{\mbox{\emph{minimize}}} &V(Y,KC)+\lambda tr\left(A^{\dagger}C^\top K C\right) + F(A) \end{aligned} \end{equation} has a convex objective function and is equivalent to \eqref{eq:nonconvex}. 
In particular, the two problems achieve the same minimum value and, given a solution $(C_R,A_R)$ for \eqref{eq:convex_equivalence}, the couple $(C_RA_R^\dagger,A_R)$ is a minimizer for \eqref{eq:nonconvex}. Vice versa, given a solution $(C_Q,A_Q)$ for \eqref{eq:nonconvex}, the couple $(C_QA_Q,A_Q)$ is a minimizer for \eqref{eq:convex_equivalence}. \end{theorem} The above result highlights a remarkable connection between the problems \eqref{eq:nonconvex} (non-convex) and \eqref{eq:convex_equivalence} (convex). In particular, we have the following Corollary, which provides us with a useful characterization of the local minimizers of problem~\eqref{eq:nonconvex}. \begin{corollary}\label{cor:invexity} Let $Q:\mathbb{R}^{n \times T} \times S_+^T \to \mathbb{R}$ be the objective function of problem~\eqref{eq:nonconvex}. Then, every local minimizer for $Q$ on the open set $\mathbb{R}^{n \times T} \times S_{++}^T$ is also a global minimizer. \end{corollary} Corollary~\ref{cor:invexity} follows from Theorem~\ref{teo:convex_equivalence} and the fact that, on the restricted domain $\mathbb{R}^{n \times T} \times S_{++}^T$, the map $Q$ is the combination of the objective functional of~\eqref{eq:convex_equivalence} and the invertible function $(C,A)\longmapsto(CA,A)$. Moreover, if $Q$ is differentiable, i.e. $V$ and the penalty $F$ are differentiable, this is exactly the definition of a \textit{convexifiable} function, which in particular implies {\em invexity}~\cite{craven95}. The latter property ensures that, in the differentiable case, all the {\em stationary} points (rather than only the local minimizers) are global minimizers. This result was originally proved in~\cite{dinuzzo11} for the special case of $V$ the least squares loss and $F(\cdot)=\|\cdot\|_F^2$ the squared Frobenius norm; here we have proved its generalization to all convex losses $V$ and penalties $F$.\\ We end this section with two comments. 
First, we note that, while the objective function in Problem \eqref{eq:convex_equivalence} is convex, the corresponding minimization problem might not be a convex program (in the sense that the feasible set~$\mathcal{C}$ is not identified by a set of linear equalities and non-linear convex inequalities~\cite{boyd04}). Second, Corollary~\ref{cor:invexity} holds only on the interior of the minimization domain $\mathbb{R}^{n \times T} \times S_+^T$ and does not characterize the behavior of the target functional on its boundary. In fact, one can see that both issues can be tackled by defining a {\em perturbed} objective functional with a suitable behavior on the boundary of the minimization domain. This is the key motivation for the barrier method we discuss in the next section. \subsubsection{A Barrier Method to Optimize~\eqref{eq:convex_equivalence}}\label{RSd} Here we propose a barrier approach, inspired by the work in~\cite{argyriou08}, introducing a perturbation of problem~\eqref{eq:convex_equivalence} that forces the objective function to equal $+\infty$ on the boundary of $\mathbb{R}^{n \times T} \times S_+^T$. As a consequence, each perturbed problem can be solved as a convex optimization problem constrained to a closed cone. This comment is made more precise in the following result, which we prove in the supplementary material. \begin{theorem}\label{teo:perturbation} Consider the family of optimization problems \begin{equation}\label{eq:perturbation}\tag{$\mathcal{S}^\delta$} \begin{aligned} \underset{\substack{C\in\mathbb{R}^{n \times T}, \\ A\in S_{+}^T}}{\mbox{\emph{minimize}}} & V(Y,KC) +\lambda tr(A^{-1}(C^\top KC + \delta^2 I_T) ) + F(A) \end{aligned} \end{equation} with $I_T \in S_{++}^T$ the identity matrix. Then, for each $\delta>0$ the problem~\eqref{eq:perturbation} admits a minimum. Furthermore, the set of minimizers for~\eqref{eq:perturbation} converges to the set of minimizers for~\eqref{eq:convex_equivalence} as $\delta$ tends to zero. 
More precisely, given any sequence $\delta_m>0$ such that $\delta_m\to0$ and a sequence of minimizers $(C_m,A_m)\in\mathbb{R}^{n \times T} \times S_{+}^T$ for~\eqref{eq:perturbation}, there exists a sequence $(C^*_m,A^*_m)\in\mathbb{R}^{n \times T} \times S_{+}^T$ of minimizers for~\eqref{eq:convex_equivalence} such that $\|C_m-C^*_m\|_F + \|A_m-A^*_m\|_F \to0$ as $m\to+\infty$. \end{theorem} The barrier $\delta^2 tr(A^{-1})$ is fairly natural and can be seen as a preconditioning of the problem, leading to favorable computations. The proposed barrier method is similar in spirit to the approach developed in~\cite{argyriou08}, and indeed Theorem~\ref{teo:perturbation} and the subsequent Corollary~\ref{cor:bcd} generalize the two main results in~\cite{argyriou08} to any convex penalty $F$ on the cone of PSD matrices. Notice, however, that since we are considering a much wider family of penalties (than the trace norm as in~\cite{argyriou08}), our results cannot be directly derived from those in~\cite{argyriou08}. In the next section we discuss how to compute the solution of Problem \eqref{eq:perturbation} using a block coordinate approach. \subsection{Block Coordinate Descent Methods}\label{BCM} The characteristic block variable structure of the objective function in problem \eqref{eq:perturbation} suggests that it might be beneficial to use block coordinate methods (BCM) (see~\cite{beck11}) to solve it. Here by BCM we identify a large class of methods that, in our setting, iterate steps of an optimization over $C$, with $A$ fixed, followed by an optimization over $A$, with $C$ fixed.\\ A {\em meta} block coordinate algorithm to solve~\eqref{eq:perturbation} is reported in Algorithm~\ref{alg:bcd}. Here we interpret each optimization step over $C$ as a supervised step, and each optimization step over $A$ as an unsupervised step (in the sense that it involves the inputs but not the outputs). 
Indeed, when the structure matrix $A$ is fixed, problem~\eqref{eq:convex_equivalence} boils down to the standard supervised multi-task learning framework where a priori knowledge of the tasks structure is available. Instead, when the coefficient matrix $C$ is fixed, the problem of learning $A$ can be interpreted as an unsupervised setting in which the goal is to actually find the underlying task structure \cite{tenenbaum10}.\\ Several optimization methods can be used as procedures for both \textsc{SupervisedStep} and \textsc{UnsupervisedStep} in Algorithm~\ref{alg:bcd}. A first class of methods, called Block Coordinate Descent (BCD), identifies a wide class of iterative methods that perform (typically inexact) minimization of the objective function one block of variables at a time. Different strategies to choose which block to minimize over at each step have been proposed: a pre-fixed cyclic order, greedy search~\cite{razaviyayn13}, or random selection according to a predetermined distribution~\cite{nesterov12}. For a review of several BCD algorithms we refer the reader to~\cite{razaviyayn13} and references therein.\\ A second class of methods, called alternating minimization, corresponds to the situation where at each step in Algorithm~\ref{alg:bcd} an exact minimization is performed. This latter approach is favorable when a closed form solution exists for at least one block of variables (see Section \ref{CF}) and has been studied extensively in \cite{tseng01} in the abstract setting where an oracle provides a block-wise minimizer at each iteration. The following Corollary describes the convergence properties of the BCD and alternating minimization sequences obtained by applying Algorithm~\ref{alg:bcd} to~\eqref{eq:perturbation}. 
\begin{algorithm}[t] \caption{\textsc{Convex Multi-task Learning}} \label{alg:bcd} \begin{algorithmic} \State {\bfseries Input:} $K, Y,\epsilon$ tolerance, $\delta$ perturbation parameter, $S$ objective functional of~\eqref{eq:perturbation}, $V$ loss, $F$ structure penalty. \State {\bfseries Initialize:} $(C,A)=(C_0,A_0), t=0$ \Repeat \State $C_{t+1} \gets$ \textsc{SupervisedStep} $(V,K,Y,C_{t},A_{t})$ \State $A_{t+1} \gets$ \textsc{UnsupervisedStep}$(F,K,\delta,C_{t+1},A_{t})$ \State $t \gets t+1$ \Until{$| S(C_{t+1},A_{t+1})-S(C_{t},A_{t}) | < \epsilon$} \end{algorithmic} \end{algorithm} \begin{corollary}\label{cor:bcd} Let Problem~\eqref{eq:perturbation} be defined as in Theorem~\ref{teo:perturbation}; then: \begin{itemize} \item[(a)] \textbf{Alternating Minimization:} Let the two procedures in Algorithm~\ref{alg:bcd} each provide a block-wise minimizer of the functional with the other block held fixed. Then every limiting point of a minimizing sequence provided by Algorithm~\ref{alg:bcd} is a global minimizer for~\eqref{eq:perturbation}. \item[(b)] \textbf{Block Coordinate Descent:} Let the two procedures in Algorithm~\ref{alg:bcd} each consist of a single step of a first order optimization method (e.g. Projected Gradient Descent, Proximal methods, etc.). Then every limiting point of a minimizing sequence provided by Algorithm~\ref{alg:bcd} is a global minimizer for~\eqref{eq:perturbation}. \end{itemize} \end{corollary} Corollary~\ref{cor:bcd} follows by applying previous results on BCD and alternating minimization. In particular, for the proof of part $(a)$ we refer to Theorem $4.1$ in~\cite{tseng01}, while for part $(b)$ we refer to Theorem $2$ in~\cite{razaviyayn13}.\\ In the following we discuss the actual implementation of both the \textsc{Supervised} and \textsc{Unsupervised} procedures in the case where $V$ is chosen to be the least squares loss and the penalty $F$ to be a spectral $p$-Schatten norm. 
This should provide the reader with a practical example of how the meta-algorithm introduced in this section can be specialized to a specific multi-task learning setting. \begin{remark}[Convergence of Block Coordinate Methods] Several works in multi-task learning have proposed some form of BCM strategy to solve the learning problem. However, to our knowledge, so far only the authors in~\cite{argyriou08} have considered the issue of convergence to a global optimum. Their results were proved for a specific choice of structure penalty in a framework similar to that of problem~\eqref{eq:convex_equivalence} (see Section \ref{sec:examples}) but do not extend straightforwardly to other settings. Corollary~\ref{cor:bcd} aims to fill this gap, providing convergence guarantees for block coordinate methods for a large class of multi-task learning problems. \end{remark} \subsubsection{Closed Form Solutions for Alternating Minimization: Examples}\label{CF} Here we focus on the alternating minimization case and discuss some settings in which it is possible to obtain a closed form solution for the procedures \textsc{SupervisedStep} and \textsc{UnsupervisedStep}. \paragraph{(\textsc{SupervisedStep}) Least Squares Loss.} When the loss function $V$ is chosen to be least squares (i.e. $V(Y,Z) = \|Y-Z\|_F^2$ for any two matrices $Y,Z \in \mathbb{R}^{n \times m}$) and the structure matrix $A$ is fixed, a closed form solution for the coefficient matrix $C$ returned by the \textsc{SupervisedStep} procedure can be easily derived (see for instance~\cite{alvarez12}): $$ vec(C)=(I_T \otimes K+\lambda A^{-1} \otimes I_n)^{-1}vec(Y). $$ Here, the symbol $\otimes$ denotes the Kronecker product, while the notation $vec(M) \in \mathbb{R}^{nm}$ for a matrix $M\in\mathbb{R}^{n \times m}$ identifies the concatenation of its columns in a single vector. In~\cite{minh11} the authors proposed a faster approach to solve this problem in closed form based on Sylvester's method. 
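A minimal numerical sketch of this supervised step (toy random data; note that the Kronecker system is $nT \times nT$, which is why a Sylvester-based solver is preferable at scale) is:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, lam = 15, 3, 0.1

# Toy positive definite kernel matrix K, structure matrix A, and outputs Y.
B = rng.standard_normal((n, n)); K = B @ B.T + n * np.eye(n)
G = rng.standard_normal((T, T)); A = G @ G.T + np.eye(T)
Y = rng.standard_normal((n, T))

# Closed-form step: vec(C) = (I_T (x) K + lam A^{-1} (x) I_n)^{-1} vec(Y).
# Column-stacking vec corresponds to flatten/reshape with order='F' in numpy.
A_inv = np.linalg.inv(A)
M = np.kron(np.eye(T), K) + lam * np.kron(A_inv, np.eye(n))
C = np.linalg.solve(M, Y.flatten('F')).reshape((n, T), order='F')

# Sanity check: for K positive definite, stationarity of
# ||Y - KC||_F^2 + lam tr(A^{-1} C^T K C) reads K C + lam C A^{-1} = Y.
assert np.allclose(K @ C + lam * C @ A_inv, Y)
```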
\paragraph{(\textsc{UnsupervisedStep}) $p$-Schatten penalties.}\label{sec:p-schatten_penalties} We consider the case in which $F$ is chosen to be a spectral penalty of the form $F(\cdot) = \|\cdot\|_p^p$ with $p\geq1$. Also in this setting the optimization problem has a closed form solution, as shown in the following. \begin{proposition}\label{prop:p_solution} Let the penalty of problem~\eqref{eq:perturbation} be $F = \|\cdot\|_p^p$ with $p\geq1$. Then, for any fixed $C\in\mathbb{R}^{n \times T}$, the optimization problem~\eqref{eq:perturbation} in the block variable $A$ has a minimizer of the form \begin{equation}\label{eq:p_solution} A_C^{\delta} = \sqrt[p+1]{(C^\top K C + \delta^2 I_T)/\lambda}. \end{equation} \end{proposition} Proposition~\ref{prop:p_solution} generalizes a similar result originally proved in~\cite{argyriou08} for the special case $p=1$ and provides an explicit formula for the \textsc{UnsupervisedStep} of Algorithm~\ref{alg:bcd}. We report the proof in the supplementary material. \section{Previous Work: Comparison and Discussion}\label{sec:examples} The framework introduced in problem~\eqref{eq:nonconvex} is quite general and accounts for several choices of loss function and task-structural priors. Section~\ref{sec:learn_joint} was mainly devoted to deriving efficient and generic optimization procedures; in this section we focus on the modeling aspects, investigating the impact of different structure penalties on the multi-task learning problem. In particular, we briefly review some previously proposed multi-task learning methods, discussing how they can be formulated as special cases of problem~\eqref{eq:nonconvex} (or, equivalently, \eqref{eq:convex_equivalence}). \paragraph{Spectral Penalties.} The penalty $F = \|\cdot\|_F^2$ was considered in~\cite{dinuzzo11}, together with a least squares loss function; there the non-convex problem~\eqref{eq:nonconvex} is solved directly by alternating minimization. 
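For spectral penalties of this kind, the unsupervised step thus reduces to a matrix $(p+1)$-th root, computable via an eigendecomposition of the symmetric PSD argument. A sketch (toy data; $p=2$, corresponding to the squared Frobenius penalty, is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, lam, delta, p = 15, 3, 0.1, 1e-3, 2   # p = 2: squared Frobenius penalty

B = rng.standard_normal((n, n)); K = B @ B.T   # toy PSD kernel matrix
C = rng.standard_normal((n, T))

# Unsupervised step: A = ((C^T K C + delta^2 I)/lambda)^(1/(p+1)), computed
# as a matrix root through the eigendecomposition of the PSD argument.
Mmat = (C.T @ K @ C + delta**2 * np.eye(T)) / lam
w, V = np.linalg.eigh(Mmat)
A = (V * w ** (1.0 / (p + 1))) @ V.T

# The update is symmetric positive definite (thanks to the delta^2 barrier)
# and satisfies lam * A^(p+1) = C^T K C + delta^2 I by construction.
assert np.allclose(A, A.T)
assert np.all(np.linalg.eigvalsh(A) > 0)
assert np.allclose(lam * np.linalg.matrix_power(A, p + 1),
                   C.T @ K @ C + delta**2 * np.eye(T))
```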
However, as pointed out in~Sec.~\ref{sec:learn_joint}, solving the non convex problem (although invex, see the discussion on Corollary~\ref{cor:invexity}) directly could in principle become problematic when the alternating minimization sequence gets close to the boundary of $\mathbb{R}^{n \times T} \times S_{++}^T$. A related idea is that of considering $F(A) = tr(A)$ (i.e. the $1$-Schatten norm). This latter approach can be shown to be equivalent to the Multi-Task Feature Learning setting of~\cite{argyriou08} (see supplementary material). \begin{figure*} \begin{center} \includegraphics[width=.24\textwidth]{MTCL_d.eps} \includegraphics[width=.24\textwidth]{MTCL_T.eps} \includegraphics[width=.24\textwidth]{MTFL_d.eps} \includegraphics[width=.24\textwidth]{MTFL_T.eps} \end{center} \caption{Comparison of the computational performance of the alternating minimization strategy studied in this paper with respect to the optimization methods proposed for MTCL in~\cite{jacob08} and MTFL~\cite{argyriou08} in the original papers. Experiments are repeated for different numbers of tasks and input-space dimensions as described in Sec.~\ref{sec:speed}.}\label{fig:speed}% \end{figure*} \paragraph{Cluster Tasks Learning.} In \cite{jacob08}, the authors studied a multi-task setting where tasks are assumed to be organized in a fixed number $r$ of unknown disjoint clusters. While the original formulation was conceived for the linear setting, it can be easily extended to non-linear kernels and cast in our framework. Let $E\in\{0,1\}^{T \times r}$ be the binary matrix whose entry $E_{st}$ has value $1$ or $0$ depending on whether task $s$ is in cluster $t$ or not. Set $M=E E^\dagger$, the orthogonal projection onto cluster-wise constant vectors, and $U=\frac{1}{T}11^{\top}$. In~\cite{jacob08} the authors considered a regularization setting of the form of~\eqref{eq:convex_equivalence} where the structure matrix $A$ is parametrized by the matrix $M$ in order to reflect the cluster structure of the tasks.
More precisely: $$A^{-1}(M)=\epsilon_{M}U+\epsilon_{B}(M-U)+\epsilon_{W}(I-M)$$ where the first term characterizes a global penalty on the average of all tasks predictors, the second term penalizes the between-clusters variance, and the third term controls the tasks variance within each cluster. Clearly, it would be ideal to identify an optimal matrix $A(M)$ minimizing problem~\eqref{eq:convex_equivalence}. However, $M$ belongs to a discrete non convex set, therefore the authors propose a convex relaxation by constraining $M$ to be in a convex set $\mathcal{S}_c=\{ M \in S^{T}_{+} , 0\preceq M \preceq I, tr(M)=r \}$. In our notation $F(A)$ is therefore the indicator function over the set of all matrices $A = A(M)$ such that $M \in \mathcal{S}_c$. The authors propose a pseudo gradient descent method to solve the problem jointly. \paragraph{Convex Multi-task Relation Learning.} Starting from a multi-task Gaussian Process setting, in~\cite{zhang10} the authors propose a model where the covariance among the coefficient vectors of the $T$ individual tasks is controlled by a matrix $A\in S_{++}^T$ in the form of a prior. The initial maximum likelihood estimation problem is relaxed to a convex optimization with target functional of the form \begin{equation} \|Y - KC \|_F^2 + \lambda_1 \ tr(C^\top K C) + \lambda_2 \ tr(A^{-1} C^\top K C) \end{equation} constrained to the set $\mathcal{A}=\{A \ | \ A\in S_{++}^T, tr(A)=1\}$. This setting is equivalent to problem~\eqref{eq:convex_equivalence} (by choosing $F$ to be the indicator function of $\mathcal{A}$) with the addition of the term $tr(C^\top K C)$. \paragraph{Non-Convex Penalties.} Oftentimes, interesting structural assumptions cannot be cast in a convex form, and indeed several works have proposed non-convex penalties to recover interpretable relations among multiple tasks. For instance~\cite{argyriou13} requires $A$ to be a graph Laplacian, or~\cite{dinuzzo13} imposes a low-rank factorization of $A$ in two smaller matrices.
In~\cite{mroueh11,kumar12} different sparsity models are proposed.\\ Interestingly, most of these methods can be naturally cast in the form of problem~\eqref{eq:nonconvex} or \eqref{eq:convex_equivalence}. Unfortunately, our analysis of the barrier method does not necessarily carry over to these settings, and therefore alternating minimization is not guaranteed to lead to a stationary point. \section{Experiments}\label{sec:experiments} We empirically evaluated the efficacy of the block coordinate optimization strategy proposed in this paper on both artificial and real datasets. Synthetic experiments were performed to assess the computational aspects of the approach, while we evaluated the quality of the solutions found by the system in realistic settings. \subsection{Computational Times}\label{sec:speed} As discussed in Sec.~\ref{sec:examples}, several methods previously proposed in the literature, such as Multi-task Cluster Learning (MTCL)~\cite{jacob08} and Multi-task Feature Learning (MTFL)~\cite{argyriou08}, can be formulated as special cases of problem~\eqref{eq:nonconvex} or \eqref{eq:convex_equivalence}. It is natural to compare the proposed alternating minimization strategy with the optimization solution originally proposed for each method. To assess the system's performance with respect to varying dimensions of the feature space and an increasing number of tasks, we chose to perform this comparison in an artificial setting.\\ We considered a linear setting where the input data lie in $\mathbb{R}^d$ and are distributed according to a normal distribution with zero mean and identity covariance matrix. $T$ linear models $w_t \in \mathbb{R}^d$ for $t=1,\dots,T$ were then generated according to a normal distribution in order to sample $T$ distinct training sets, each comprising $30$ examples $(x_i^{(t)},y_i^{(t)})$ such that $y_i^{(t)} = \langle w_t, x_i^{(t)}\rangle + \epsilon$ with $\epsilon$ Gaussian noise with zero mean and $0.1$ standard deviation.
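For reference, the synthetic data generation just described can be sketched as follows (a minimal illustration with hypothetical function and variable names, not the code used in the experiments):

```python
import numpy as np

def make_synthetic_tasks(T, d, n=30, noise_std=0.1, seed=0):
    """Generate the synthetic multi-task data described above:
    Gaussian inputs x ~ N(0, I_d), Gaussian task vectors w_t, and
    noisy linear outputs y = <w_t, x> + eps with eps ~ N(0, noise_std^2)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((T, d))        # one linear model per task
    X = rng.standard_normal((T, n, d))     # n inputs per task
    Y = np.einsum('tnd,td->tn', X, W) + noise_std * rng.standard_normal((T, n))
    return X, Y, W

X, Y, W = make_synthetic_tasks(T=5, d=10)
assert X.shape == (5, 30, 10) and Y.shape == (5, 30)
```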
On these learning problems we compared the computational performance of our alternating minimization strategy and the optimization algorithms originally proposed for MTCL and MTFL, for which the code has been made available by the authors. In our algorithm we used the identity matrix $A_0 = I$ as initialization for the alternating minimization procedure. We used a least-squares loss for all experiments.\\ Figure~\ref{fig:speed} reports the comparison of the computational times required by alternating minimization and by the original methods to converge to the same minima (of the functionals of MTCL and MTFL respectively). We considered two settings: one where the number of tasks was fixed to $T=100$ and $d$ increased from $5$ to $150$, and a second one where $d$ was fixed to $100$ and $T$ varied between $5$ and $150$. To account for statistical stability we repeated the experiments for each pair $(T,d)$ and different choices of hyperparameters, generating new random datasets each time. We can make two observations from these results: 1) in the setting where $T$ is kept fixed we observe a linear increase in the computational times for both the original MTCL and MTFL methods, while alternating minimization is almost constant with respect to the input space dimension. 2) When $d$ is fixed and the number of tasks increases, all optimization strategies require more time to converge. This shows that in general alternating minimization is a viable option to solve these problems; in particular, when $T \ll \min(d,n)$ -- which is often the case in non-linear settings -- this method is particularly efficient. \begin{table*}[t] \scriptsize \begin{center} \hspace*{-2cm} \begin{tabular}{l c >{\columncolor{gray!35}}c c >{\columncolor{gray!35}}c c >{\columncolor{gray!35}}c c >{\columncolor{gray!35}}c c} & \multicolumn{2}{c}{50 tr. samples per class} & \multicolumn{2}{c}{100 tr. samples per class} & \multicolumn{2}{c}{150 tr. samples per class} & \multicolumn{2}{c}{200 tr.
samples per class} & \tstrut \bstrut \\ & {\bf nMSE ($\pm$ std)} & \cellcolor{white} {\bf nI} & {\bf nMSE ($\pm$ std)} & \cellcolor{white} {\bf nI} & {\bf nMSE ($\pm$ std)} & \cellcolor{white} {\bf nI} & {\bf nMSE ($\pm$ std)} & \cellcolor{white} {\bf nI} & \tstrut \bstrut \\ \specialrule{.1em}{.05em}{.0em} {\bf STL} & $0.2436 \pm 0.0268$ & $0$ & $0.1723 \pm 0.0116$ & $0$ & $0.1483 \pm 0.0077$ & $0$ & $0.1312 \pm 0.0021$ & $0$ & \tstrut \bstrut \\ {\bf MTFL} & $0.2333 \pm 0.0213$ & $0.0416$ & $0.1658 \pm 0.0107$ & $0.0379$ & $0.1428 \pm 0.0083$ & $0.0281$ & $0.1311 \pm 0.0055$ & $0.0003$ & \tstrut \bstrut \\ {\bf MTRL} & $0.2314 \pm 0.0217$ & $0.0404$ & $0.1653 \pm 0.0112$ & $0.0401$ & $0.1421 \pm 0.0081$ & $0.0288$ & $0.1303 \pm 0.0058$ & $0.0071$ & \tstrut \bstrut \\ {\bf OKL} & $0.2284 \pm 0.0232$ & $0.0630$ & $0.1604 \pm 0.0123$ & $0.0641$ & $\mathbf{0.1410 \pm 0.0087}$ & $0.0350$ & $0.1301 \pm 0.0073$ & $0.0087$ & \tstrut \bstrut \\ \end{tabular} \end{center} \caption{Comparison of Multi-task learning methods on the Sarcos dataset. The advantage of learning the tasks jointly decreases as more training examples become available.} \label{tab:sarcos} \end{table*} \subsection{Real datasets} We assessed the benefit of adopting multi-task learning approaches on two real datasets. In particular we considered the following algorithms: Single Task Learning (STL) as a baseline, Multi-task Feature Learning (MTFL)~\cite{argyriou08}, Multi-task Relation Learning (MTRL)~\cite{zhang10}, and Output Kernel Learning (OKL)~\cite{dinuzzo11}. We used a least squares loss for all experiments. \paragraph{Sarcos.} Sarcos\footnote{\url{http://www.gaussianprocess.org/gpml/data/}} is a regression dataset designed to evaluate machine learning solutions for inverse dynamics problems in robotics. It consists of a collection of $21$-dimensional inputs, i.e.
the joint positions, velocities and acceleration of a robotic arm with $7$ degrees of freedom and $7$ outputs (the tasks), which report the corresponding torques measured at each joint.\\ For each task, we randomly sampled $50,100,150$ and $200$ training examples while we kept a test set of $5000$ examples in common for all tasks. We used a linear kernel and performed $5$-fold cross-validation to find the best regularization parameter according to the normalized mean squared error (nMSE) of predicted torques. We averaged the results over $10$ repetitions of these experiments. The results, reported in Table~\ref{tab:sarcos}, show clearly that adopting a multi-task approach in this setting is favorable; however, in order to quantify this improvement more clearly, we also report in Table~\ref{tab:sarcos} the {\it normalized improvement} (\textit{nI}) over single-task learning (STL). For each multi-task method MTL, the normalized improvement nI(MTL) is computed as the average $$ \mbox{nI(MTL)} = \frac{1}{n_{exp}} \sum_{i=1}^{n_{exp}} \frac{\mbox{nMSE}_i(\mbox{STL})-\mbox{nMSE}_i(\mbox{MTL})}{\sqrt{\mbox{nMSE}_i(\mbox{STL})\cdot\mbox{nMSE}_i(\mbox{MTL})}} $$ over all the $n_{exp} = 10$ experiments of the normalized differences between the nMSE achieved by the STL approach and by the given multi-task method MTL. \begin{table} \scriptsize \begin{center} \rowcolors{3}{}{gray!35} \begin{tabular}{lcccccc} & \multicolumn{6}{c}{\bf Accuracy (\%) per \# tr.
samples per class} \tstrut \bstrut \\ & \multicolumn{2}{c}{$50$} & \multicolumn{2}{c}{$100$} & \multicolumn{2}{c}{$150$} \tstrut \bstrut \\ \specialrule{.1em}{.05em}{.0em} {\bf STL} & $72.23$ & $\pm 0.04$ & $76.61$ & $\pm 0.02$ & $79.23$ & $\pm 0.01$ \tstrut \bstrut \\ {\bf MTFL} & $73.23$ & $\pm .08$ & $77.24$ & $\pm .05$ & $80.11$ & $\pm .03$ \tstrut \bstrut \\ {\bf MTRL} & $73.13$ & $\pm 0.08$ & $77.53$ & $\pm 0.04$ & $80.21$ & $\pm 0.05$ \tstrut \bstrut \\ {\bf OKL} & $72.25$ & $\pm 0.03$ & $77.06$ & $\pm 0.01$ & $80.03$ & $\pm 0.01$ \tstrut \bstrut \\ \end{tabular} \end{center} \caption{Classification results on the $15$-scene dataset. Three multi-task methods and the single-task baseline are compared.} \label{tab:15scenes} \end{table} \paragraph{$15$-Scenes.} $15$-Scenes\footnote{http://www-cvr.ai.uiuc.edu/ponce\_grp/data/} is a dataset designed for scene recognition, consisting of a $15$-class classification problem. We represented images using LLC coding~\cite{wang10} and trained the system on training sets comprising $50$, $100$ and $150$ examples per class. The test set consisted of $7500$ images evenly divided with respect to the $15$ scenes. Table~\ref{tab:15scenes} reports the mean classification accuracy over $20$ repetitions of the experiments. It can be noticed that while all multi-task approaches achieve approximately similar performance, they consistently outperform the STL baseline. \section{Conclusions} We have studied a general multi-task learning framework where the task structure can be modeled compactly in a matrix. For a wide family of models, the problem of jointly learning the tasks and their relations can be cast as a convex program, generalizing previous results for special cases~\cite{argyriou08,dinuzzo11}.
Such an optimization can be naturally approached by block coordinate minimization, which can be seen as alternating between supervised and unsupervised learning steps, optimizing respectively the tasks and their structure. We evaluated our method on real data, confirming the benefit of multi-task learning when tasks share similar properties.\\ From an optimization perspective, future work will focus on studying the theoretical properties of block coordinate methods, in particular regarding convergence rates. Indeed, the empirical evidence we report suggests that similar strategies can be remarkably efficient in the multi-task setting. From a modeling perspective, future work will focus on studying wider families of matrix-valued kernels, overcoming the limitations of separable ones. Indeed, this would also allow accounting for structure in the interaction space between the input and output domains jointly, which is not the case for separable models. {\small \bibliographystyle{ieee}
\section{Introduction}\label{sec:introduction} The growth of the Default-Free Zone (DFZ) routing tables~\cite{rfc4984} and associated churn observed in recent years has led to much debate as to whether the current Internet infrastructure is architecturally unable to scale. Sources of the problem were found to be partly organic, generated by the ongoing growth of the topology, but also related to operational practices, which seemed to be the main drivers behind prefix deaggregation within the Internet's core. Diverging opinions as to how the latter could be solved triggered a significant amount of research that finally materialized in several competing solutions (see~\cite{rfc6115} and the references therein). In this paper we focus on location/identity separation approaches in general, and consider the Locator/ID Separation Protocol (LISP)~\cite{saucez:lisp} as their particular instantiation. LISP semantically decouples identity from location, two roles currently overloaded onto IP addresses, by creating two separate namespaces that unambiguously address end-hosts (identifiers) and their Internet attachment points (locators). This new indirection level has the advantage that it supports the implementation of complex traffic engineering mechanisms but at the same time enables the locator space to remain quasi-static and highly aggregatable~\cite{rfc7215}. Although it is generally accepted that location/identity solutions alleviate the scalability limitations of the DFZ, they also push part of the forwarding complexity to the edge domains. On the one hand, they require mechanisms to register, distribute and retrieve bindings that link elements of the two new namespaces. On the other, LISP routers must store in-use mappings to speed up packet forwarding and to avoid generating floods of resolution requests.
This raises the question: \emph{does the newly introduced LISP edge cache scale?} This paper provides an analytical answer by analyzing the scalability of the LISP cache with respect to the growth of the Internet and the growth of the LISP site. To this end we leverage the working-set theory~\cite{denning:ws_model} and previous results that characterize the temporal locality of reference strings~\cite{breslau:web_and_zipf, jin:web_tloc} to develop a model that relates the LISP cache size to the miss rate. We find that the relation between cache size and miss rate depends only on the popularity distribution of destination prefixes. Additionally, for a given miss rate, as long as the popularity follows a Generalized-Zipf distribution, the LISP cache size scales constantly, O(1), with respect to the growth of the Internet and the number of users, provided that the latter two do not influence the popularity distribution. If this does not hold, the cache scales linearly, O(N). To support our results, we also analyze the popularity distribution of destination prefixes in several one-day real-world packet traces, from two different networks and spanning a period of $3.5$ years. The rest of the paper is structured as follows. We provide a brief overview of LISP in Section~\ref{sec:background}. In Section~\ref{sec:cache_model} we derive the cache model under a set of assumptions and thereafter discuss its predictions and implications for LISP. In Section~\ref{sec:evaluation} we present empirical evidence that supports our assumptions and evaluate the model, while in Section~\ref{sec:rw} we discuss the related work. Finally, we conclude the paper in Section~\ref{sec:conclusions}. \section{LISP Background}\label{sec:background} LISP~\cite{saucez:lisp} belongs to the family of proposals that implement a location/identity split in order to address the scalability concerns of the current Internet architecture.
The protocol specification has recently undergone IETF standardization~\cite{rfc6830}, however development and deployment efforts are still ongoing. They are supported by a sizable community spanning both academia and industry and rely for testing on a large experimental network, the LISP-beta network~\cite{lisp:testbed}. The goal of splitting location and identity is to insulate core network routing that should ideally only be aware of location information (locators), from the dynamics of edge networks, which should be concerned with the delivery of information based on identity (identifiers). To facilitate the transition from the current infrastructure, LISP numbers both namespaces using the existing IP addressing scheme, thus ensuring that routing within both core and stub networks stays unaltered. However, as locators and identifiers bear relevance only within their respective namespaces, a form of conversion from one to the other must be performed. LISP makes use of encapsulation~\cite{rfc1955} and a directory service to perform such translation. \begin{figure}[t] \centering \includegraphics[width=80mm,keepaspectratio=true]{lisp-arch.pdf} \caption{Example packet exchange between $EID_{SRC}$ and $EID_{DST}$ with LISP. Following intra-domain routing, packets reach $xTR_{A}$ which obtains a mapping binding $EID_{DST}$ to $RLOC_{B1}$ and $RLOC_{B2}$ from the mapping-system (steps 1-3). From the mapping, $xTR_{A}$ chooses $RLOC_{B1}$ as destination and then forwards towards it the encapsulated packets over the Internet's core (step 4). $xTR_{B}$ decapsulates the packets and forwards them to their intended destination. } \label{fig:lisp-arch} \end{figure} Prior to forwarding a host generated packet, a LISP router maps the destination address, or Endpoint IDentifier (EID), to a corresponding destination Routing LOCator (RLOC) by means of a LISP specific mapping system~\cite{draft:lisp-ddt,jakab:lisp-tree}. 
Once a mapping is obtained, the border router tunnels the packet from the source edge to the corresponding destination edge network by means of an encapsulation with a LISP-UDP-IP header. The outer IP header addresses are the RLOCs pertaining to the corresponding border routers (see Fig.~\ref{fig:lisp-arch}). At the receiving router, the packet is decapsulated and forwarded to its intended destination. In LISP parlance, the source router, which performs the encapsulation, is called an Ingress Tunnel Router (ITR), whereas the one performing the decapsulation is named the Egress Tunnel Router (ETR). One that performs both functions is referred to as an xTR. Since the packet throughput of an ITR is highly dependent on the time needed to obtain a mapping, but also to avoid overloading the mapping-system, ITRs are provisioned with map-caches that store recently used EID-prefix-to-RLOC mappings. Stale entries are avoided with the help of timeouts, called \emph{time to live} (TTL), that mappings carry as attributes. Consistency, in turn, is ensured by proactive LISP mechanisms through which the xTR owner of an updated mapping informs its peers of the change. Intuitively, the map-cache is most efficient in situations when destination EIDs present high temporal and/or spatial locality, and its size depends on the diversity of the visited destinations. As a result, performance depends entirely on the provisioned map-cache size, traffic characteristics and the eviction policy set in place. \section{Cache Model}\label{sec:cache_model} We start this section by discussing some of the fundamental properties of network traffic that may be exploited to gain a better understanding of cache performance. Then, assuming these properties are characteristic of real network traces, we devise a cache model. Finally, we analyze and discuss the predictions of the model.
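Before turning to the model, the map-cache behavior described above (recently used EID-prefix-to-RLOC bindings, discarded once their TTL expires) can be sketched as follows; this is a toy illustration with hypothetical names and example prefixes, not an implementation of the protocol:

```python
import time

class MapCache:
    """Toy ITR map-cache: stores EID-prefix -> (RLOCs, expiry) bindings
    and drops stale entries based on their TTL (illustrative sketch)."""
    def __init__(self):
        self._entries = {}

    def insert(self, eid_prefix, rlocs, ttl):
        self._entries[eid_prefix] = (rlocs, time.monotonic() + ttl)

    def lookup(self, eid_prefix):
        entry = self._entries.get(eid_prefix)
        if entry is None:
            return None                      # miss: query the mapping-system
        rlocs, expiry = entry
        if time.monotonic() >= expiry:
            del self._entries[eid_prefix]    # stale: treat as a miss
            return None
        return rlocs

cache = MapCache()
cache.insert("192.0.2.0/24", ["203.0.113.1", "203.0.113.2"], ttl=60.0)
assert cache.lookup("192.0.2.0/24") == ["203.0.113.1", "203.0.113.2"]
assert cache.lookup("198.51.100.0/24") is None
```

A miss on lookup is precisely the event whose rate the remainder of this section models.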
\subsection{Sources of Temporal Locality in Network Traffic}\label{sec:temp_locality} We consider the following formalization of traffic, either at Web page or packet level, throughout the rest of the paper. Let $D$ be a set of objects (Web pages, destination IP-prefixes, program pages, etc.). Then, we consider traffic to be a string of references $r_1, r_2, \dots, r_i\dots$ where $r_i = o \in D$ is a reference at the $i$th unit of time that has as destination, or requests, object $o$. Generally, we consider the length of the reference string to be $N$. Also, note that we use object and destination interchangeably. Two of the defining properties of reference strings, important in characterizing cache performance, are the heavy tailed \emph{popularity distribution} of destinations and the \emph{temporal locality} exhibited by the request pattern. We discuss both in what follows. \subsubsection{Popularity Distribution} Copious studies in fields as varied as linguistics~\cite{zipf:principle_least_effort, montemurro:gzipf}, Web traffic~\cite{breslau:web_and_zipf,mahanti:web_proxy_hierarchy}, video-on-demand~\cite{cha:itube}, p2p overlays~\cite{dan:plaw} and flow level traffic~\cite{sarrar:leverage_zipf} have found the probability distribution of objects to have a positive skew. Generally, such distributions are coined Zipf-like, i.e., they follow a power law whereby the probability of reference is inversely proportional to a power of the rank of an object. Generally, the relation is summarized as: $\nu(k) = \dfrac{\Omega}{k^\alpha}$ \noindent where $\nu$ is the frequency, or number of requests observed for an object, $k$ is the rank, $\Omega=1/H(n,\alpha)$ is a normalizing constant and $H(n,\alpha)$ is the $n^{th}$ generalized harmonic number. It is interesting to note that although Zipf's law has its origins in linguistics, it was found to be a poor fit for the statistical behavior of word frequencies with low or mid-to-high values of the rank variable.
That is, it does not fit the head and tail of the distribution. Furthermore, its extension due to Mandelbrot (often called the Zipf-Mandelbrot law) only improves the fit for the head of the distribution. Such discrepancies were also observed for Web based and p2p reference strings. Often the head of the distribution is flattened, i.e., the frequency is less than the one predicted by the law, or the tail has an exponential cutoff or a faster power law decay~\cite{montemurro:gzipf, dan:plaw}. But these differences are usually dismissed on the basis of poor statistics in the high ranks region corresponding to objects with a very low frequency. Nevertheless, Montemurro recently solved the problem in linguistics by extending the Zipf-Mandelbrot law such that for high ranks the tail undergoes a crossover to an exponential or larger exponent power-law decay. Surprisingly, he found these features, i.e., deviations from Zipf-like behavior, to hold especially well when very large corpora~\cite{montemurro:gzipf} are considered. We further refer to this model as the Generalized Zipf law or GZipf and, in light of these observations, we assume the following: \begin{assumption}\label{prop:gzipf} The popularity distribution of destination IP-prefix reference strings can be approximated by a GZipf distribution. \end{assumption} \subsubsection{Temporal Locality} Temporal locality can be informally defined as the property that a recently referenced object has an increased probability of being re-referenced. One of the well established ways of measuring the degree of locality of reference strings is the inter-reference distance distribution. Breslau et al. found in~\cite{breslau:web_and_zipf} that strings generated from a popularity distribution according to the Independent Reference Model (IRM), that is, assuming that references are independent and identically distributed random variables, have an inter-reference distribution similar to that of the original string.
Additionally, they inferred that the probability of an object being re-referenced after $t$ units of time is proportional to $1/t$. Later, Jin and Bestavros~\cite{jin:web_tloc} proved that in fact temporal locality emerges from both long-term popularity and short-term correlations. However, they found that the inter-reference distance distribution is mainly induced through long-term popularity and is therefore insensitive to the latter. Additionally, they showed that by ignoring temporal correlations and assuming a Zipf-like popularity distribution, an object's re-reference probability after $t$ units of time is proportional to $1/t^{(2-1/\alpha)}$. These observations then lead to our second assumption: \begin{assumption}\label{prop:tloc} Temporal locality in destination IP-prefix reference strings is mainly due to the prefix popularity distribution. \end{assumption} We contrast the two assumptions with the properties of several packet-level traces in Section~\ref{sec:evaluation}. In what follows we are interested in characterizing the inter-reference distribution of a GZipf distribution and further on the cache miss rate, using the two statements as support. \subsection{GZipf generated inter-reference distribution} In this section we compute the inter-reference distance distribution for a GZipf popularity. The result is an extension of the one due to Jin and Bestavros for a Zipf-like popularity. As a first step we compute the inter-reference distribution for a single object and then by integration obtain the average for the whole reference string, which we denote by $f(t)$.
If $\nu$ is the normalized frequency, namely, the number of references to an object divided by the length of the reference string $N$, then, as shown in~\cite{montemurro:gzipf}, the probability of observing objects with frequency $\nu$ in the reference string is: \begin{equation} p_{\nu}(\nu)\propto\dfrac{1}{\mu\nu^r+(\lambda-\mu)\nu^q} \label{eq:gzipf_pdf} \end{equation} \noindent where $1 \le r<q$ are the exponents that control the slope of the power laws in the two regimes and $\mu$ and $\lambda$ are two constants that control the frequency for which the tail undergoes the crossover. From Assumption~\ref{prop:tloc} it follows that references to an object are independent, whereby the inter-reference distance $t$ is distributed exponentially with expected value $1/\nu$. Then, if we denote by $d(t, \nu)$ the number of times the inter-reference distance for an object with frequency $\nu$ is $t$, we can write: \begin{equation} d(t, \nu) \sim (\nu N-1) \nu e^{-\nu t} \label{eq:dtf} \end{equation} If $\nu_{min}$ and $\nu_{max}$ are the minimum and, respectively, the maximum normalized frequency observed for the reference string, we can compute the inter-reference distance distribution for the whole string as: \begin{eqnarray} f(t) &\sim& \int_{\nu_{min}}^{\nu_{max}} p_{\nu}(\nu)\,d(t,\nu) \mathrm{d}\nu \nonumber \\ &=& \int_0^1 \dfrac{(\nu N-1)\nu e^{-\nu t} }{\mu\nu^r+(\lambda-\mu)\nu^q} \mathrm{d}\nu \label{eq:ir} \end{eqnarray} Unfortunately, the integral has no closed-form solution; nevertheless, we can still characterize the properties of $f(t)$ in the two regimes of the GZipf distribution. In the high frequency region, where the term having $q$ as exponent dominates the denominator, we can write: \begin{eqnarray} f_q(t) &\sim& \int_{\nu_k}^1 \dfrac{\nu^2\ e^{-\nu t} }{\nu^q} \mathrm{d}\nu \nonumber \\ &=& \dfrac{\Gamma(3-q, \nu_k t)}{t^{3-q}} \label{eq:ir_q} \end{eqnarray} \noindent where $\Gamma(n,z) =\int_z^\infty x^{n-1} e^{-x}\mathrm{d}x$ is the incomplete Gamma function.
$\nu_k=(\mu/(\lambda-\mu))^{1/(q-r)}$ is the frequency for which the two terms that make up the denominator are equal. It is useful to note that for low $t$ values, which correspond to high frequencies, the numerator presents a constant plateau that quickly decreases, or bends, at the edge as $t\to 1/\nu_{k}$. Therefore, we can approximate: \begin{equation} f_q(t)\sim \dfrac{1}{t^{3-q}} \label{eq:fq_asym} \end{equation} Similarly, it may be shown that for low frequencies, that is, in the region where the term with $r$ as exponent dominates: \begin{equation} f_r(t)\sim \dfrac{1}{t^{3-r}} \label{eq:fr_asym} \end{equation} Finally, we conclude that the inter-reference distance distribution can be approximated by a piece-wise power-law. Our result is similar to the single sloped power-law obtained by Jin and Bestavros under the assumption of a Zipf distributed popularity, and to the empirical observations of Breslau et al. in~\cite{breslau:web_and_zipf} for Web reference strings. However, due to its general form it should be able to capture the properties of more varied workloads. In the following section we use the inter-reference distance distribution together with the working-set theory to deduce the miss rate of an LRU cache. \subsection{A Cache Model} Denning proposed the use of the working-set as a tool to capture the set of pages a program must store (cache) in memory such that it may operate at a desired level of efficiency~\cite{denning:ws_model}. The idea is to estimate a program's locality, or in-use pages, with the help of a sliding window of variable length looking into the past of the reference string. In their seminal work characterizing the properties of the working-set~\cite{denning:ws_properties}, Denning and Schwartz showed that the average inter-reference distance distribution is the slope of the average miss rate, which in turn is the slope of the average working-set size, both taken as functions of the window size.
The result is of particular interest as it provides a straightforward link between the properties of the reference string and the performance of a cache that uses the least recently used (LRU) eviction policy but whose size varies. To understand the latter, consider that the size of the working-set for a given window depends on the number of unique destinations within the window, which may vary. Still, under the condition that the reference string is obtained with IRM, the working-set size will be normally distributed with a low variance. We can approximate it as being constant and as a result the cache modeled by the working-set becomes an LRU of fixed size. We leverage in what follows the result above to deduce the miss rate of an LRU cache when fed by a reference string obtained using IRM and a GZipf popularity distribution. The miss rate for the upper part of $f(t)$ is: \begin{equation} m_q(t) = - \int \dfrac{C}{t^{3-q}} \mathrm{d}t = - C \dfrac{t^{q-2}}{q-2} \label{eq:mtq} \end{equation} \noindent where $t < 1/\nu_k$, $1<q<2$ and $C$ is a normalizing constant which ensures that $\sum\limits_{t=1}^{N-1} C f(t) = 1$. We can further compute the average working-set size as: \begin{equation} s_q(t) = -\int C\dfrac{t^{q-2}}{q-2} \mathrm{d}t = -C \dfrac{t^{q-1}}{(q-1)(q-2)} \label{eq:stq} \end{equation} To obtain the miss rate as a function of the cache size, rather than of the inter-reference distance, we take the inverse of $s_q$ and replace it in (\ref{eq:mtq}). For $s < s_q(1/\nu_k)$ we get: \begin{eqnarray} m_q(s) &=& C^{\dfrac{1}{q-1}} (2-q)^{-\dfrac{1}{q-1}} (q-1)^{\dfrac{q-2}{q-1}} s^{\dfrac{q-2}{q-1}} \nonumber \\ &\propto& s^{1-\dfrac{1}{q-1}} \label{eq:msq} \end{eqnarray} This suggests that the asymptotic miss rate as a function of cache size is a power law of the cache size, with an exponent dependent on the slope of the popularity distribution.
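This qualitative prediction, a miss rate that decays as a power of the cache size with an exponent set by the popularity distribution, is easy to probe by simulation. The sketch below feeds a fixed-size LRU cache with an IRM reference string drawn from a single-slope Zipf popularity (a special case of GZipf); parameter values are illustrative, and the check only verifies the monotone decay, not the exact exponent.

```python
import numpy as np
from collections import OrderedDict

def lru_miss_rate(refs, cache_size):
    """Miss rate of a fixed-size LRU cache fed with the reference string."""
    cache, misses = OrderedDict(), 0
    for r in refs:
        if r in cache:
            cache.move_to_end(r)             # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)    # evict least recently used
        cache[r] = True
    return misses / len(refs)

# IRM reference string with Zipf popularity over 5000 destinations.
rng = np.random.default_rng(2)
k = np.arange(1, 5001, dtype=float)
p = k**-1.2 / np.sum(k**-1.2)
refs = rng.choice(5000, size=200_000, p=p)

m = [lru_miss_rate(refs, s) for s in (50, 200, 800)]
assert m[0] > m[1] > m[2]                    # miss rate decays with cache size
```

The `OrderedDict` gives O(1) recency updates and evictions, so the whole string is processed in a single pass per cache size.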
Similarly, for large inter-reference distances, when $s>s_r(1/\nu_k)$: \begin{equation} m_r(s) \propto s^{1-\dfrac{1}{r-1}} \label{eq:msr} \end{equation} Then, for a reference string whose destinations have a GZipf popularity distribution and where the references to objects are independent, we find that the miss rate, as a function of the cache size, presents two power-law regimes with exponents that depend only on the exponents of the popularity distribution. We test the ability of the equations to fit empirical observations in Section~\ref{sec:cache_res}. \subsection{Cache Performance Analysis}\label{sec:cache_bounds} We now investigate how cache size varies with respect to the parameters of the model if the miss rate is held constant. By inverting (\ref{eq:msq}) and (\ref{eq:msr}) we obtain the cache size as a function of the miss rate: \begin{equation} s(m)= \begin{cases} g(q)\,m^{1-\dfrac{1}{2-q}}, & \quad m \le m_k \\ g(r)\,m^{1-\dfrac{1}{2-r}}, & \quad m > m_k \\ \end{cases} \label{eq:sm_both} \end{equation} \noindent with $g(x)=-C^{\frac{1}{2-x}}\dfrac{ (2-x)^{\frac{x-1}{x-2}} }{2-3x+x^2}$, $m_k = \dfrac{C}{\nu_k^{r-2} (2-r)}$, $\nu_k=\left(\dfrac{\mu}{\lambda-\mu}\right)^{\frac{1}{q-r}}$ and $0<m<1$. We see that $s(m)$ is \emph{independent} of both the number of packets $N$ and the number of destinations $D$ and is sensitive only to changes of the slopes of the popularity distribution, $q$ and $r$, and the frequency at which the two slopes intersect, $\nu_k$. We do note that $C$ depends analytically on $N$, as can be seen from $C$'s defining expression (see discussion of~(\ref{eq:mtq})): $1/C=H(1/\nu_k,3-q)-\zeta(3-r,N)+\zeta(3-r, 1/\nu_k)$, where $H(n,m)=\sum\limits_{k=1}^n 1/k^m$ is the $n$th generalized harmonic number of order $m$ and $\zeta(s,a)=\sum\limits_{k=0}^\infty 1/(k+a)^s$ the Hurwitz Zeta function. However, the first and last terms of the expression depend only on popularity parameters while the middle one quickly converges to a constant as $N$ grows.
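The claimed insensitivity of $C$ to $N$ can be verified numerically from its defining expression. The sketch below evaluates $1/C$ for growing $N$ using truncated sums with a first-order integral tail correction for the Hurwitz Zeta function; the values of $q$, $r$ and $\nu_k$ are illustrative assumptions, not parameters fitted to any trace:

```python
def gen_harmonic(n, m):
    """H(n, m) = sum_{k=1}^{n} 1/k^m, the n-th generalized harmonic number of order m."""
    return sum(1.0 / k ** m for k in range(1, n + 1))

def hurwitz_zeta(s, a, terms=100_000):
    """zeta(s, a) = sum_{k>=0} 1/(k+a)^s, truncated, with an integral tail estimate."""
    head = sum(1.0 / (k + a) ** s for k in range(terms))
    tail = (terms + a) ** (1.0 - s) / (s - 1.0)  # integral approximation of the remainder
    return head + tail

def inv_C(N, q=1.7, r=1.3, nu_k=0.01):
    """1/C = H(1/nu_k, 3-q) + zeta(3-r, 1/nu_k) - zeta(3-r, N)."""
    t_k = int(round(1.0 / nu_k))
    return gen_harmonic(t_k, 3 - q) + hurwitz_zeta(3 - r, t_k) - hurwitz_zeta(3 - r, N)

print(inv_C(10_000), inv_C(1_000_000))
```

For these parameters, $1/C$ changes by well under one percent as $N$ grows from $10^4$ to $10^6$, which supports treating $C$ as constant in $s(m)$.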
It is therefore safe to assume $C$ constant with respect to $N$ and, consequently, that the number of packets does not influence $s(m)$. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth,keepaspectratio=true]{cs_vs_q.pdf} \caption{Cache size as a function of a GZipf exponent for a fixed miss rate} \label{fig:cs_vs_q} \end{figure} On the other hand, if the parameters of the popularity distribution are modified, some interesting dependencies can be uncovered. For brevity, we explore only the case when $q$ and $r$ vary but still respect the constraint that $1<r<q<2$. When both exponents jointly change, the cache size required to maintain the miss rate varies qualitatively as depicted in Fig.~\ref{fig:cs_vs_q}. Specifically, as their values approach $1$, that is, when the popularity distribution is strongly skewed, the cache size asymptotically approaches a small constant, whereas when the exponents approach $2$, the required cache size grows very fast; notice the superlinear growth in the log-log scale. Although not indicated by (\ref{eq:sm_both}), $s(m)$ is defined when $q$ or $r$ equal $2$, that is, it does not grow unbounded. The expression can be obtained if we replace $q$ by $2$ in~(\ref{eq:mtq}) and recompute all equations: \begin{equation} s(m) = (C+m)\, e^{-\dfrac{m}{C}} \label{sm_log} \end{equation} \subsection{Discussion of Asymptotic Cache Performance and Impact} Using the results of the analysis performed in the previous section, we now characterize the asymptotic scalability of the LISP cache size with respect to (i) the number of users in a LISP site, (ii) the size of the EID space, and (iii) the parameters of the popularity distribution. To simplify the discussion, we assume there are no interactions between the first two and the third: \begin{assumption} The destination prefix popularity distribution is independent of the number of users in a LISP site and the size of the EID space.
\end{assumption} Here, (i) contemplates the variation of the number of packets $N$, (ii) the variation of the number of destinations $D$, and (iii) the variation of the GZipf parameters $q$, $r$, $\mu$ and $\lambda$, independently. We acknowledge that the popularity distribution may be influenced by a multitude of factors, and in particular by the growth of the user population generating the reference string. Nonetheless, we argue that our assumption makes practical sense. For instance, a typical LISP router is expected to serve hundreds to thousands of clients, so fluctuations proportional to the size of the user set should not affect the overall homogeneity of the traffic or the popularity distribution. Additionally, although user interest in content changes quickly, the same is not necessarily true for the content sources, i.e., the prefixes from where the content is served, which the user cannot typically select. This split between content and its location can result in a relatively stable popularity distribution of the prefixes despite the dynamic popularity of the actual content. We show an example network where this assumption holds in Section~\ref{sec:pop_assumption}. In the previous section we found that when the parameters of the popularity distribution are held constant, the cache size is independent of both the number of packets and destinations. As a result, cache size scales constantly, O(1), with the number of users within a LISP site and the size of the EID-prefix space for a fixed miss rate. This observation has several fundamental implications for LISP's deployment. First, caches for LISP networks can be designed and deployed for a desired performance level which subsequently does not degrade with the growth of the site and the growth of the Internet address space. Second, splitting traffic between multiple caches (i.e., routers) for operational purposes, within a large LISP site, does not affect cache performance.
Finally, signaling, i.e., the number of Map-Request exchanges, grows linearly with the number of users if no hierarchies or cascades of caches are used. This is because the number of resolution requests is $m(s)\,N$. If the previous assumption does not hold, then, in the worst case, the cache size scales linearly with $|D|$. This follows if we consider that, as the growth of $N$ and $D$ flattens the distribution, leading to a uniform popularity, the cache size for a desired miss rate becomes proportional to $|D|$. \section{Empirical Evidence of Temporal Locality}\label{sec:evaluation} In this section we verify the accuracy of our assumptions regarding the popularity distribution of destination prefixes and the sources of locality in network traffic. We also empirically verify the accuracy of the predictions regarding the performance of the LISP cache. But first, we present our datasets and experimental methodology. \subsection{Packet Traces and Cache Emulator} For our experiments we use four one-day packet traces consisting only of egress traffic. Three were captured at the 2Gbps link that connects our University's campus network to the Catalan Research Network (CESCA) and span a period of 3.5 years, from 2009 to 2012. The fourth was captured at the 10Gbps link connecting CESCA to the Spanish academic network (RedIris) in 2013. The UPC campus has about 36k users, generally consisting of students, academic staff and auxiliary personnel, while CESCA provides transit services for 89 institutions that include the public Catalan schools, hospitals and universities. The important properties of the datasets are summarized in Table~\ref{tab:traces}.
\begin{table*}[t] \centering \caption{Datasets Statistics} \label{tab:traces} \begin{tabular}{lcccc} \toprule & \textbf{upc 2009} & \textbf{upc 2011} & \textbf{upc 2012} & \textbf{cesca 2013} \\ \midrule[0.09em] Date & 2009-05-26 & 2011-10-19 & 2012-11-21 & 2013-01-24\\ \midrule Packets & 6.5B & 4.05B & 5.57B & 20B \\ \midrule Av. pkt/s & 75.3k & 46.9k & 64.4k & 232k \\ \midrule Prefixes & 92.8k & 94.9k & 109.4k & 143.7k \\ \midrule Av. pref/s & 2.3k & 1.95k & 2.1k & 2.56k \\ \bottomrule \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{Routing Tables Statistics} \label{tab:rt} \begin{tabular}{lcccc} \toprule & \textbf{upc 2009} & \textbf{upc 2011} & \textbf{upc 2012} & \textbf{cesca 2013} \\ \midrule[0.09em] $\texttt{BGP}_{RT}$ & 288k & 400k & 450k & 455k \\ \midrule $\texttt{BGP}_{\phi}$ & 142k & 170k & 213k & 216k \\ \midrule $\rho$ & 0.65 & 0.55 & 0.51 & 0.66 \\ \bottomrule \end{tabular} \end{table*} At the time of this writing there exists no policy as to how EID-prefixes are to be allocated. However, it is expected, and is also the practice today in the LISP-beta network, that EIDs are allocated in IP-prefix-like blocks. Consequently we performed our analysis considering EID-prefixes to be of BGP-prefix granularity. For each packet within a trace we find the associated prefix using BGP routing tables downloaded from the RouteViews archive~\cite{routeviews} that match the trace's capture date. We filtered out the more specific prefixes from the routing tables as they are generally used for traffic engineering and LISP offers a more efficient management of these operational needs. Table~\ref{tab:rt} gives an overview of the original ($\texttt{BGP}_{RT}$) and filtered ($\texttt{BGP}_{\phi}$) routing table sizes as well as the ratio ($\rho$) between the number of prefixes observed within each trace and the filtered routing table size. Both UPC and CESCA daily visit more than half of the prefixes within $\texttt{BGP}_{\phi}$.
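The prefix-association step above is a longest-prefix match of each packet's destination address against the filtered routing table. A minimal, standard-library sketch of this lookup (the prefixes below are made-up examples, not entries from $\texttt{BGP}_{\phi}$):

```python
import ipaddress

def build_table(prefixes):
    """Index networks by (network address, prefix length) for exact-match probing."""
    table = {}
    for p in prefixes:
        net = ipaddress.ip_network(p)
        table[(int(net.network_address), net.prefixlen)] = net
    return table

def longest_prefix_match(table, address):
    """Probe prefix lengths from most specific (/32) to least specific (/0)."""
    addr = int(ipaddress.ip_address(address))
    for plen in range(32, -1, -1):
        mask = ((1 << plen) - 1) << (32 - plen) if plen else 0
        hit = table.get((addr & mask, plen))
        if hit is not None:
            return hit
    return None

table = build_table(["10.0.0.0/8", "10.1.0.0/16", "192.0.2.0/24"])
print(longest_prefix_match(table, "10.1.2.3"))  # matches 10.1.0.0/16
```

A production emulator would use a trie or hardware TCAM, but for trace post-processing this dictionary-probing scheme is adequate, with at most 33 lookups per packet.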
Apart from the popularity and temporal locality analysis, we also implemented a LISP ITR emulator to estimate LRU map-cache performance using the traces and the routing tables as input. We compare the predictions of our cache model with the empirical results in Section~\ref{sec:cache_res}. \subsection{Popularity Distribution}\label{sec:pop_assumption} Figure~\ref{fig:pop} presents the frequency-rank distributions of our datasets for both absolute and normalized frequency. A few observations are in order. First, although clearly not accurately described by Zipf's law, they also slightly deviate from a GZipf. Namely, the head of the distribution presents two power-law regimes followed by a third that describes the tail, as can be seen in Fig.~\ref{fig:pop} (down). This may be either because a one-day sample is not enough to obtain accurate statistics in the Zipf-Mandelbrot head region, or because popularity for low ranks follows a more complex law. Still, we find that for all traces the frequencies of higher ranks (above 2000) are accurately characterized by two power-law regimes (see Fig.~\ref{fig:pop_fit}). Secondly, the frequency-rank curves for the UPC datasets are remarkably similar. Despite the $50\%$ increase of $\texttt{BGP}_{\phi}$ (i.e., $D$), changes in the Internet content provider infrastructure over a $3.5$-year period, and perhaps even changes in the local user set, the popularity distributions are roughly the same. Finally, the normalized frequency plots for all traces are similar, in spite of the large difference in number of packets between the CESCA and UPC datasets. These observations confirm our assumption that growth of the number of users within the site or of the destination space does not necessarily result in a change of the popularity distribution.
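The power-law regimes of such frequency-rank curves are characterized by fitting straight lines in log-log space; the popularity exponent of a segment with slope $s_i$ is then recovered as $\alpha_i = 1 + 1/s_i$ (taking the slope magnitude). A self-contained sketch of this fitting step on synthetic data (the exponent $0.8$ is an arbitrary illustration, not a value measured on our traces):

```python
import math

def loglog_slope(ranks, freqs):
    """Least-squares slope of log(freq) versus log(rank)."""
    xs = [math.log(r) for r in ranks]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic frequency-rank segment: freq ~ rank^{-0.8}
ranks = range(1, 1001)
freqs = [r ** -0.8 for r in ranks]
s = loglog_slope(ranks, freqs)
alpha = 1 + 1 / abs(s)   # popularity exponent, alpha_i = 1 + 1/s_i
print(s, alpha)          # s ≈ -0.8, alpha ≈ 2.25
```

On real traces the fit is applied separately to each of the power-law segments of the curve.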
To confirm that these results are not due to a popularity bias toward larger prefixes, that is, that larger prefixes are more likely to receive larger volumes of traffic because they contain more hosts, we checked the correlation between prefix length and frequency. However, we did not find any evidence in support of this (not shown here). \subsection{Prefix Inter-Reference Distance Distribution} \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth, keepaspectratio=true]{popularity.pdf} \caption{Destination Prefix Popularity} \label{fig:pop} \end{figure} We now check if knowledge about the popularity distribution suffices to accurately characterize the inter-reference distance distribution or if short-term correlations must also be taken into account. To achieve this, we use a methodology similar to the one used in~\cite{jin:web_tloc} for Web page traffic. We first generate random versions of our traces according to the IRM model, i.e., by considering only the popularity distribution and geometric inter-reference times, and then compare the resulting inter-reference distance distributions to the originals. Results are shown in Fig.~\ref{fig:ir_comparison}. We find that for all traces, popularity alone is able to account for the greater part of the inter-reference distance distribution, as in the case of Web requests. The only disagreement is in the region with distances lower than $100$, where short-term correlations are important and IRM traces underestimate the probability by a significant margin. A rather interesting finding is that the short-term correlations in all traces are such that the power-law behavior observed for higher distances ($t>100$) is extended down to distance $1$. In this region, the exact inter-reference distance equation (\ref{eq:ir_q}) is a poor fit to reality as it follows the IRM curve.
However, the empirical results are appropriately described by our approximate inter-reference model (\ref{eq:fq_asym}), which avoids the IRM bend by assuming (\ref{eq:ir_q})'s numerator constant. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth,keepaspectratio=true]{inter_reference.pdf} \caption{Empirical and IRM generated inter-reference for the four traces} \label{fig:ir_comparison} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth, keepaspectratio=true]{popularity_fit.pdf} \caption{Frequency-rank distribution of destination prefixes and a linear least squares fit of the three power-law regimes. $\alpha_i=1+1/s_i$, where $s_i$ is the slope of the $i$th segment.} \label{fig:pop_fit} \end{figure} \subsection{Cache Performance}\label{sec:cache_res} Having found that our assumptions regarding network traffic properties hold in our datasets, we now verify whether the cache model (see~(\ref{eq:msq}) and~(\ref{eq:msr})) is able to predict real-world LRU cache performance. As mentioned in Section~\ref{sec:pop_assumption}, and as may be seen in Fig.~\ref{fig:pop_fit}, the head of the popularity distribution exhibits two power-law regimes instead of one. Two options then arise: we can either use the model disregarding the discrepancies or adapt it to consider the low-rank region behavior. For completeness, we choose the latter in our evaluation. This consists only of approximating $p_\nu(\nu)$ (see (\ref{eq:gzipf_pdf})) as having three regions, each dominated by an exponent $\alpha_i$. Recomputing (\ref{eq:msr}) we get that the miss rate has three regions, each characterized by an $\alpha_i$. Choosing the first option would only result in an overestimation of cache miss rates for low cache sizes. To contrast the model with the empirical observations, we performed a linear least squares fit of the three regions of the popularity distribution.
This allowed us to determine the exponents $\alpha_i$, computed as $1 + 1/s_i$ where $s_i$ is the slope of the $i$th segment, and to roughly approximate the frequencies $\nu_{k1}$ and $\nu_{k2}$ at which the segments intersect. Using them as input to (\ref{eq:msq}) we get a cache miss rate estimate as shown in Fig.~\ref{fig:mr_model}. Generally, we see that the model is a remarkably good fit for large cache sizes but consistently underestimates the miss rate for sizes lower than 1000. This may be due to the poor fit of the popularity for low ranks. Nevertheless, a more elaborate fitting of $\nu_{k1}$ and $\nu_{k2}$ should provide better results, as may be seen in Fig.~\ref{fig:mr_fit}, where we performed a linear least squares fit of the three power-law regions of the cache miss rate. Knowing that the slope of the cache miss rate is $s_i=1-1/(\alpha_i-1)$ (see (\ref{eq:mtq})), we computed the exponents as depicted in the figure. Comparison with those computed in Fig.~\ref{fig:pop_fit} shows they are very similar. Overall, we can conclude that the model accurately predicts cache performance. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth, keepaspectratio=true]{miss_rate_fit.pdf} \caption{Empirical miss rate with cache size and a linear least-squares fit of the exponent for the three power-law regions. Notice the similarity with the exponents of the three regions of the popularity distribution in Fig.~\ref{fig:pop_fit}.
} \label{fig:mr_fit} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth,keepaspectratio=true]{miss_rate_model.pdf} \caption{Empirical miss rate with cache size together with a fit by (\ref{eq:msq}) and (\ref{eq:msr})} \label{fig:mr_model} \end{figure} \section{Related Work}\label{sec:rw} Denning was the first to recognize the phenomenon of temporal locality in his definition of the working-set~\cite{denning:ws_model} and, together with Schwartz, established the fundamental properties that characterize it~\cite{denning:ws_properties}. Although initially designed for the analysis of page caching in operating systems, the ideas were later reused in other fields including Web page and route caching. In~\cite{breslau:web_and_zipf} Breslau et al. argued that empirical evidence indicates that the Web request popularity distribution is Zipf-like with exponent $\alpha < 1$. Using this finding and the assumption that temporal locality is mainly induced through long-term popularity, they showed that the asymptotic miss rate of an LFU cache, as a function of the cache size, is a power law of exponent $1-\alpha$. In this paper we argue that a GZipf with exponents greater than $1$ is a closer fit to real popularity distributions and obtain a more general LRU cache model. We further use the model to determine the scaling properties of the cache. Jin and Bestavros showed in~\cite{jin:web_tloc} that the inter-reference distribution is mainly determined by the long-term popularity and only marginally by short-term correlations. They also proved that the inter-reference distribution of a reference string with a Zipf-like popularity distribution is proportional to $1/t^{2-1/\alpha}$. We build upon their work but also extend their results by both considering a GZipf popularity distribution and by using them to deduce an LRU cache model.
In the field of route caching, Feldmeier~\cite{feldmeier:rt_cache} and Jain~\cite{jain:dst_locality} were among the first to evaluate the possibility of performing destination address caching by leveraging the locality of traffic in network environments. Feldmeier found that locality could be exploited to reduce routing table lookup times on a gateway router, while Jain discovered that deterministic protocol behavior limits the benefits of locality for small caches. The works, though fundamental, bear no practical relevance today as they were carried out more than two decades ago, a time when the Internet was still in its infancy. Recently, Kim et al.~\cite{kim:rcaching} performed a measurement study within the operational confines of an ISP's network and showed the feasibility of route caching. They showed by means of an experimental evaluation that the LRU eviction policy performs close to optimal and better than LFU. Also, they found that the prefix popularity distribution is very skewed and that the working-set size is generally stable with time. These findings are in line with our empirical results and provide practical confirmation for our assumption that the popularity distribution can be described as a GZipf. Several works have previously looked at cache performance in loc/id split scenarios considering LISP as a reference implementation. Iannone et al.~\cite{iannone:lcache} performed an initial trace-driven study of the LISP map-cache performance, while Kim et al.~\cite{jkim:lcache} have both extended and confirmed the previous results with the help of a larger, ISP trace. Zhang et al.~\cite{zhang:lcache} performed a trace-based Loc/ID mapping cache performance analysis assuming an LRU eviction policy and using traffic captured at two egress links of the China Education and Research Network backbone. Although methodologies differ between the different papers, in all cases the observed LISP cache miss rates were found to be relatively small.
This, again, indirectly confirms the skewness of the popularity distribution and its stability, at least for short time scales. Finally, in~\cite{coras:lcache_n} we devised an analytical model for the LISP cache size starting from empirical average working-set curves, using the working-set theory. Our goal there was to model the influence of locality on cache miss rates, whereas here we look to understand how cache performance scales with respect to its defining parameters, that is, the popularity distribution of network traffic, the size of the LISP site and the size of the EID space. \section{Conclusions}\label{sec:conclusions} LISP offers a viable solution to scaling the core routing infrastructure of the Internet by means of a location/identity split. However, this forces edge domain routers to cache location-to-identity bindings for timely operations. In this paper we answer the following question: does the newly introduced LISP edge cache scale? Our findings show that, for a fixed miss rate, the cache size scales constantly, O(1), with the number of users as well as with the number of destinations. For this, we start from two assumptions: (i) the popularity of destination prefixes is described by a GZipf distribution and (ii) temporal locality is predominantly determined by long-term popularity. Fundamentally, these assumptions are often observed to hold in the Internet~\cite{sarrar:leverage_zipf, kim:rcaching} but also in other fields such as web traffic~\cite{breslau:web_and_zipf}, on-demand video~\cite{cha:itube} or even linguistics~\cite{zipf:principle_least_effort}. Arguably, they are inherent to human nature and, as such, are expected to hold in the foreseeable future. Nevertheless, in the paper we also show that if the assumptions do not hold, the cache size scales, in the worst case, linearly, O(D), with the number of destinations.
At the time of this writing there is an open debate on what the Internet should look like in the near future and, in this context, it is important to analyze the scalability of the various future Internet architecture proposals. This paper fills this gap, particularly for the Locator/ID split architecture. Furthermore, our results show that edge networks willing to deploy LISP will not face scalability issues in the size of their map-cache, as long as both assumptions hold, even if the edge network itself becomes larger (i.e., more users) or the Internet grows (i.e., more prefixes). \section*{Acknowledgements} The authors would like to express their gratitude to Damien Saucez and Chadi Barakat for their insightful comments. This work has been partially supported by the Spanish Ministry of Education under scholarship AP2009-3790, research project TEC2011-29700-C02, the Catalan Government under project 2009SGR-1140 and a Cisco URP Grant. \small
\section{Introduction and statement of the main results} \subsection{Introduction} The study of the existence of invariant sets, particularly periodic solutions, is very important for understanding the dynamics of a differential system. One of the most important tools to detect such sets is the averaging theory. A classical introduction to this tool can be found in \cite{V,SVM}. \smallskip On the other hand, the study of discontinuous differential systems has its importance and motivation in several fields of the applied sciences. Many problems of physics, engineering, economics, and biology are modeled using differential equations with discontinuous right--hand sides. For instance we may cite problems in control systems \cite{Bar}, impact and friction mechanics \cite{Br}, nonlinear oscillations \cite{AVK,M}, economics \cite{H,I}, and biology \cite{Ba,Kr}. Recent reviews appeared in \cite{physDspecial, ML}. \smallskip Despite the importance of the discontinuous differential systems mentioned above, there still exist only a few analytical techniques to study the invariant sets of discontinuous differential systems. In \cite{LNT1} the averaging theory has been extended to the following class of discontinuous differential systems \begin{equation}\label{intro1} x'(t)= \begin{cases} \e F_1(t,x)+\e^2R_1(t,x,\e)\quad \mbox{if}\quad h(t,x)>0,\\ \e F_2(t,x)+\e^2R_2(t,x,\e)\quad \mbox{if}\quad h(t,x)<0, \end{cases} \end{equation} where $F_1,F_2,R_1,R_2$ and $h$ are continuous functions, locally Lipschitz in the variable $x$, $T$--periodic in the variable $t$, and $h$ is a $\CC^1$ function having $0$ as a regular value. The results stated in \cite{LNT1} have been extensively used; see for instance the works \cite{LM1,LM2,N,LLM,LZ}. \smallskip In this paper we focus on the development and improvement of the averaging theory for studying periodic solutions of a much larger class of discontinuous differential systems than \eqref{intro1}.
Regarding the averaging theory for finding periodic solutions there are essentially three main theorems, which we describe in what follows. \smallskip The first one concerns the study of the periodic solutions of periodic differential systems of the form \[ x'=\e F_1(t,x)+\e^2 F_2(t,x)+\cdots+\e^m F_m(t,x)+\e^{m+1} R(t,x,\e), \] with $x\in\R^d$. For continuous differential systems, even for the non--differentiable ones, this theory is already completely developed (see for instance \cite{V,SVM,BL,GGL,LNT2}), and for discontinuous differential systems this theory is developed up to order $2$ in $\e$ (see \cite{LNT1,LMN}). \smallskip The other two theorems go back to the works of Malkin \cite{Ma} and Roseau \cite{Ro}. They studied the periodic solutions of periodic differential systems of the form \[ x'=F_0(t,x)+\e F_1(t,x)+\e^2 F_2(t,x)+\cdots+\e^m F_m(t,x)+\e^{m+1} R(t,x,\e), \] with $x\in\R^d$, distinguishing when the manifold $\CZ$ of all periodic solutions of the unperturbed system $x'=F_0(t,x)$ has dimension $d$ or smaller than $d$. These theories are well developed for continuous differential systems (see for instance \cite{BFL,BGL,GGL,LNT2}). Nevertheless there is no theory for studying such problems in discontinuous differential systems. Thus our main objective in this paper is to develop these last theorems for a large class of discontinuous differential systems. \smallskip In subsections \ref{Prel} and \ref{stat} we describe the class of discontinuous differential systems that we shall consider in this paper together with our main results, and we also present an application. In section \ref{s2} we prove our main results, and in section \ref{PP} we describe carefully the application of our results.
\subsection{Preliminaries}\label{Prel} We consider the ODEs \begin{equation}\label{A1} x'(t)=F^n(t,x), \quad (t,x)\in \s^1 \times D \quad \textrm{for} \quad n=1,2,\ldots,N, \end{equation} where $D\subset\R^d$ is an open subset and $\s^1=\R/T$ for some positive real number $T$. Here $F^n: \s^1\times D \to \R^d$ for $n=1,2,\ldots,N$ are continuous functions and the prime denotes the derivative with respect to the time $t$. For $n=1,2,\ldots,N$ let $S_n$ be open, connected and pairwise disjoint subsets of $\s^1\times D$. The boundary of $S_n$ for $n=1,2,\ldots,N$ is assumed to be a piecewise $\CC^m$ embedded hypersurface with $m\geq1$, and the union of all these boundaries is denoted by $\Sigma$. Moreover we assume that $\Sigma$ together with all the $S_n$ covers $\s^1\times D$. We call the following differential system an {\it $N$--Discontinuous Piecewise Differential System}, or simply a {\it DPDS} when the context is clear: \begin{equation}\label{s1} x'(t)=\left\{\begin{array}{LC} F^1(t,x) & (t,x)\in \ov{S_1},\\ F^2(t,x) &(t,x)\in \ov{S_2},\\ &\vdots\\ F^N(t,x) & (t,x)\in \ov{S_N}. \end{array}\right. \end{equation} Here $\ov{S_n}$ denotes the closure of $S_n$ in $\s^1\times D$. \smallskip Instead of working with system \eqref{s1} we can work with the following associated system: \begin{equation}\label{ss1} x'(t)=F(t,x)=\sum_{n=1}^N \chi_{\ov {S_n}}(t,x) F^n(t,x), \quad (t,x)\in \s^1\times D, \end{equation} where for a given subset $A$ of $\s^1\times D$ the {\it characteristic function} $\chi_A(t,x)$ is defined as \[ \chi_A(t,x)= \begin{cases} 1 \quad\textrm{if}\quad (t,x)\in A,\\ 0 \quad\textrm{if}\quad (t,x)\notin A. \end{cases} \] Systems \eqref{s1} and \eqref{ss1} do not coincide on $\Sigma$. Indeed system \eqref{s1} is multivalued on $\Sigma$ whereas system \eqref{ss1} is single-valued on $\Sigma$.
Using Filippov's convention for the solutions of the systems \eqref{s1} or \eqref{ss1} (see \cite{Fi}) passing through a point $(t,x)\in\Sigma$, we have that these solutions do not depend on the value $F(t,x)$. So the solutions of systems \eqref{s1} and \eqref{ss1} are the same. \smallskip When $F^n$ for $n=1,2,\ldots,N$ are $\CC^1$ functions we define the ``derivative'' of the discontinuous piecewise differentiable function $F(t,x)$ with respect to $x$ as \begin{equation}\label{nota} D_x F(t,x)=\sum_{n=1}^N \chi_{\ov {S_n}}(t,x) D_x F^n(t,x). \end{equation} We note that when the function $F(t,x)$ is differentiable with respect to the variable $x$ the above definition coincides with the usual derivative. \smallskip We say that a point $p\in\Sigma$ is a {\it generic point of discontinuity} if there exists a neighborhood $G_p$ of $p$ such that $\CS_p=G_p\cap\Sigma$ is a $\CC^m$ embedded hypersurface in $\s^1\times D$ with $m\geq 1$, the hypersurface $\CS_p$ splits $G_p\backslash\CS_p$ into two disconnected regions, namely $G_p^+$ and $G_p^-$, and the vector fields $F_p^+=F|_{G_p^+}$ and $F_p^-=F|_{G_p^-}$ are continuous. We define $l(p)$ as the segment connecting the vectors $F_p^+(p)$ and $F_p^-(p)$ when these have the same origin (see Figures \ref{fig1} and \ref{fig2}). \smallskip Let $\CS\subset \Sigma$ be an embedded hypersurface in $\s^1\times D$ and let $T_p \CS$ denote the tangent space of $\CS$ at the point $p$. In what follows we define the {\it crossing region} $\Sigma^c(\CS)$ (see Figure \ref{fig1}) and the {\it sliding region} $\Sigma^s(\CS)$ (see Figure \ref{fig2}) of the hypersurface $\CS$: \[ \begin{array}{CCC} \Sigma^c(\CS)=\left\{p\in \CS:\, l(p)\cap T_p \CS=\emptyset \right\} &\textrm{and}& \Sigma^s(\CS)=\left\{p\in \CS:\, l(p)\cap T_p \CS\neq\emptyset \right\}. \end{array} \] These definitions only make sense when the linear space $T_p\CS$ is based at the origin of the vectors $F^+_p(p)$ and $F^-_p(p)$.
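As a concrete illustration of these two regions (an example we add here for intuition; it is not part of the original definitions), take $d=1$ and let $\CS=\{(t,x)\in\s^1\times\R:\, x=0\}$, so that $T_p\CS$ is spanned by $\partial_t$ and the vectors based at $p\in\CS$ are $(1,F_p^{\pm}(p))$ in the extended $(t,x)$--space:

```latex
\begin{itemize}
\item If $F_p^+(p)=1$ and $F_p^-(p)=\tfrac{1}{2}$, then $l(p)$ joins $(1,1)$
      and $(1,\tfrac{1}{2})$, so $l(p)\cap T_p\CS=\emptyset$ and
      $p\in\Sigma^c(\CS)$: the orbits cross $\CS$.
\item If $F_p^+(p)=-1$ and $F_p^-(p)=1$, then $l(p)$ joins $(1,-1)$ and
      $(1,1)$ and meets $T_p\CS$ at the point $(1,0)$, so
      $p\in\Sigma^s(\CS)$: the orbits slide on $\CS$.
\end{itemize}
```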
\smallskip The hypersurface $\CS\subset\Sigma$ can be decomposed as the union of the closure of its {\it crossing region} with its {\it sliding region}. \smallskip When the hypersurface $\CS\subset\Sigma$ is given by $\CS=h^{-1}(0)$ for some $\CC^1$ function $h:\s^1\times D\rightarrow\R$ having $0$ as a regular value, the above definitions become \[ \begin{array}{L} \Sigma^c(\CS)=\left\{p\in \CS: \langle \nabla h(p),(1,F^+(p))\rangle\langle \nabla h(p),(1,F^-(p))\rangle>0 \right\}\quad \textrm{and}\\ \Sigma^s(\CS)=\left\{p\in \CS:\, \langle \nabla h(p),(1,F^+(p))\rangle\langle \nabla h(p),(1,F^-(p))\rangle<0 \right\}. \end{array} \] \begin{figure}[h] \centering \psfrag{T}{$T_p\Sigma$} \psfrag{M}{$\CS_p$} \psfrag{p}{$p$} \psfrag{vj}{$F_p^-(p)$} \psfrag{vi}{$F_p^+(p)$} \psfrag{l}{$l(p)$} \psfrag{Aj}{$G_p^-$} \psfrag{Ai}{$G_p^+$} \includegraphics[width=14cm]{Crossing.eps} \vskip 0cm \centerline{} \caption{\small \label{fig1} Crossing region of $\CS$: $\Sigma^c(\CS)$. } \end{figure} \begin{figure}[h] \centering \psfrag{T}{$T_p\Sigma$} \psfrag{M}{$\CS_p$} \psfrag{p}{$p$} \psfrag{vj}{$F_p^-(p)$} \psfrag{vi}{$F_p^+(p)$} \psfrag{l}{$l(p)$} \psfrag{Aj}{$G_p^-$} \psfrag{Ai}{$G_p^+$} \includegraphics[width=14cm]{Sliding.eps} \vskip 0cm \centerline{} \caption{\small \label{fig2} Sliding region of $\CS$: $\Sigma^s(\CS)$. } \end{figure} \smallskip Globally, we define the {\it crossing region} $\Sigma^c$ as the set of generic points of discontinuity $p$ such that $p\in\Sigma^c(\CS_p)$. The {\it sliding region} $\Sigma^s$ is defined analogously. Later on in this paper, for a point $p\in \Sigma^c$ we shall denote $T_p\Sigma=T_p\CS_p$. \smallskip Let $\f_{F^n}(t,q)$ be the solution of system \eqref{A1} passing through the point $q\in S_n$ at time $t=0$, i.e. $\f_{F^n}(0,q)= q$. The local solution $\f_{F}(t,q)$ of system \eqref{ss1} passing through a point $p\in\Sigma^c$ at time $t=0$ is given by the Filippov convention, i.e.
for $p\in\Sigma^c$ such that $l(p)\subset G_p^+$ and taking the origin of time at $p$, the trajectory through $p$ is defined as $\f_F(t,p)=\f_{F_p^-}(t,p)$ for $t\in I_p\cap\{t<0\}$, and $\f_F(t,p)=\f_{F_p^+}(t,p)$ for $t\in I_p\cap\{t>0\}$. Here $I_p$ is an open interval containing $0$ in its interior. For the case $l(p)\subset G_p^-$ the definition is the same, reversing the time. \smallskip Assuming that the functions $F^n(t,x)$ are Lipschitz in the variable $x$ for $n=1,2,\ldots,N$, the results on Filippov systems (see \cite{Fi}) guarantee the uniqueness of the solutions reaching the set of discontinuity only at points of $\Sigma^c$. \subsection{Statements of the main results}\label{stat} Let $D$ be an open subset of $\R^d$ and for $n=1,2,\ldots,N$ let $F_0^n:\s^1\times D\rightarrow\R^d$ be a $\C^m$ function with $m\geq1$, and $F_1^n:\s^1\times D\rightarrow\R^d$, and $R^n:\s^1\times D\times[0,1]\rightarrow\R^d$ be continuous functions which are Lipschitz in the second variable. All these functions can be seen as $T$--periodic functions in the variable $t$ when $t\in\R$. Later on in this paper we shall assume more conditions on these functions. \smallskip Now taking \[ \begin{aligned} &F_i(t,x)=\sum_{n=1}^N \chi_{\ov {S_n}}(t,x) F_i^n(t,x), \quad \textrm{for $i=0,1$, and}\\ &R(t,x,\e)=\sum_{n=1}^N \chi_{\ov {S_n}}(t,x) R^n(t,x,\e), \end{aligned} \] we consider the following DPDS, \begin{equation}\label{MRs1} x'(t)=F_0(t,x)+\e F_1(t,x)+\e^2R(t,x,\e). \end{equation} The parameter $\e$ is assumed to be small. We recall that $\Sigma$ denotes the union of the boundaries of $S_n$ for $n=1,2,\ldots,N$. \smallskip In order to present our main results we have to introduce more definitions and notation. \smallskip For $z\in D$ and $\e>0$ sufficiently small we denote by $x(\cdot,z,\e):[0,t_{(z,\e)})\rightarrow \R^d$ the solution of system \eqref{MRs1} such that $x(0,z,\e)=z$.
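The sign characterization of the crossing and sliding regions given above, for a discontinuity hypersurface written as $\CS=h^{-1}(0)$, is straightforward to apply in concrete examples. The following minimal Python sketch is purely illustrative (the function name and the planar example are our own, not part of the paper): it evaluates the products $\langle\nabla h(p),(1,F^{\pm}(p))\rangle$ at a point of $\CS$ and reports the corresponding region.

```python
def classify(grad_h, Fp, Fm):
    """Classify a point p of S = h^{-1}(0) by the sign test:
    crossing iff <grad h(p),(1,F^+(p))> * <grad h(p),(1,F^-(p))> > 0,
    sliding  iff the same product is negative."""
    sp = sum(g * w for g, w in zip(grad_h, (1.0,) + tuple(Fp)))
    sm = sum(g * w for g, w in zip(grad_h, (1.0,) + tuple(Fm)))
    prod = sp * sm
    if prod > 0:
        return "crossing"
    if prod < 0:
        return "sliding"
    return "tangency"  # degenerate case, excluded at generic points

# Planar example with S = {x2 = 0}, i.e. h(t,x1,x2) = x2, grad h = (0,0,1).
# Both fields cross S upward: a crossing point.
print(classify((0.0, 0.0, 1.0), (1.0, 1.0), (1.0, 2.0)))   # crossing
# The two fields point toward S from opposite sides: a sliding point.
print(classify((0.0, 0.0, 1.0), (1.0, -1.0), (1.0, 2.0)))  # sliding
```

The tangency branch corresponds to $l(p)$ meeting $T_p\CS$ at an endpoint, which the genericity assumptions rule out.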
Given a subset $B$ of $D$ we define $\widetilde{B}^{\e}=\ov{\{(t,x(t,z,\e)):\,z\in B, t\in [0,t_{(z,\e)})\}}$. \smallskip We denote by $\Sigma_0$ the set of points $x\in D$ such that the function $F(0,x)$ is discontinuous; clearly $\{0\}\times\Sigma_0\subset\Sigma$. \smallskip One of the main hypotheses of this paper is that the unperturbed system \begin{equation}\label{ups} x'(t)=F_0(t,x), \end{equation} has a manifold $\CZ$ embedded in $D\backslash \p \Sigma_0$ such that the solutions starting in $\CZ$ are all $T$--periodic functions and reach the set of discontinuity $\Sigma$ only at its crossing region $\Sigma^c$. Here $\p \Sigma_0$ denotes the boundary of $\Sigma_0$ with respect to the topology of $D$. Precisely, \begin{itemize} \item[($H$)] let $\CZ=\{z_{\al}=(\al,\be_0(\al)):\,\al\in\ov V\}$, where $V$ is an open and bounded subset of $\R^k$, and $\beta_0:\ov V\rightarrow\R^{d-k}$ is a $\C^m$ function with $m\geq 1$. We shall assume that $\CZ\subset D$, $\CZ\cap \p\Sigma_0=\emptyset$, $\widetilde{\CZ}^0\cap\Sigma\subset\Sigma^c$ and for each $z_{\al}\in \CZ$ the unique solution $x_{\al}(t)=x(t,z_{\al},0)$ is $T$--periodic. \end{itemize} \begin{remark} Suppose that the solution $x_{\al}(t)$ reaches the set $\Sigma^c$ $\kappa_{\al}$ times. The assumption $\CZ\cap \p\Sigma_0=\emptyset$ in hypothesis $(H)$ implies that for each $z_{\al}\in \CZ$ there exists a small neighborhood $U_{\al}\subset D$ of $z_{\al}$ such that for $\e>0$ sufficiently small every solution of the perturbed system \eqref{MRs1} starting in $U_{\al}$ reaches the crossing region $\Sigma^c$ of the set of discontinuity also $\kappa_{\al}$ times.
This fact will be justified in the proofs of Lemmas \ref{l2} and \ref{l3} in Section \ref{s2}. \end{remark} For $z\in D$ we take the following discontinuous piecewise linear differential system \begin{equation}\label{lin} y'=D_xF_0(t,x(t,z,0))\,y, \end{equation} which can be seen as the linearization of the unperturbed system \eqref{ups} along the solution $x(t,z,0)$. We note that for each $z\in D$ the function $t\mapsto D_xF_0(t,x(t,z,0))$ is piecewise $\C^m$ with $m\geq 1$, so we can consider a fundamental matrix $Y(t,z)$ of the differential system \eqref{lin}. Clearly $t\mapsto Y(t,z)$ is a continuous piecewise $\C^m$ function. We define \begin{equation}\label{y1} y_1(t,z)=Y(t,z)\int_0^t Y(s,z)^{-1}F_1(s,x(s,z,0))ds. \end{equation} Now for $z_{\al}\in \CZ$ we denote $Y_{\al}(t)=Y(t,z_{\al})$. Let $\pi:\R^k\times \R^{d-k}\rightarrow \R^k$ and $\pi^{\perp}:\R^k\times \R^{d-k}\rightarrow \R^{d-k}$ be the projections onto the first $k$ coordinates and onto the last $d-k$ coordinates, respectively. Thus we define the averaged function $f_1: \ov V\rightarrow\R^k$ as \begin{equation}\label{f1} f_1(\al)=\pi y_1(T,z_{\al}). \end{equation} In what follows $\dis(x,A)$ denotes the distance between a point $x\in D$ and a set $A\subset D$, and as usual $d_B(f_1,W,0)$ denotes the Brouwer degree (see for instance \cite{B} for details on the Brouwer degree). Our main result on the periodic solutions of DPDS \eqref{MRs1} is the following.
\begin{mtheorem}\label{MRt1} In addition to the hypothesis $(H)$ we assume that \begin{itemize} \item[$(H1)$] for $n=1,2,\ldots,N$, the functions $F_0^n$ and $\beta_0$ are of class $\CC^1$; the continuous functions $D_x F_0^n$, $F_1^n$ and $R$ are locally Lipschitz with respect to $x$; and the boundaries of the sets $S_n$ are piecewise $\CC^1$ embedded hypersurfaces in $\R\times D$; \item[$(H2)$] for any $\al\in\ov V$ there exists a fundamental matrix solution $Y(t,z)$ of \eqref{lin} such that the matrix $Y_{\al}(T)Y_{\al}(0)^{-1}-Id$ has in the upper right corner the null $k\times(d-k)$ matrix, and in the lower right corner has the $(d-k)\times(d-k)$ matrix $\Delta_{\al}$ with $\det(\Delta_{\al})\neq0$; \item[$(H3)$] for an open subset $U$ of $D$ such that $\CZ\subset U$ we have that $(0,y_1(s,z))\in T_{(s,x(s,z,0))}\Sigma$ whenever $(s,x(s,z,0))\in\Sigma^c$ for $(s,z)\in\s^1\times U$; \item[$(H4)$] there exists an open subset $W$ of $V$ such that $f_1(\al)\neq 0$ for $\al\in\p W$ and $d_B(f_1,W,0)\neq 0$. \end{itemize} Then for $\e>0$ sufficiently small, there exists a $T$--periodic solution $\f(t,\e)$ of system \eqref{MRs1} such that $\dis(\f(0,\e),\CZ)\to 0$ as $\e\to 0$. \end{mtheorem} Theorem \ref{MRt1} is proved in Section \ref{s2}. \begin{remark}\label{bc} When $f_1$ is a $\C^1$ function the assumption \begin{itemize} \item[(h4)] there exists $a\in V$ such that $f_1(a)=0$ and $\det(f_1'(a))\neq0$, \end{itemize} is a sufficient condition to guarantee the validity of the hypothesis $(H4)$. \end{remark} \begin{mtheorem}\label{MRt2} We suppose that the hypotheses $(H)$, $(H2)$ and $(H3)$ of Theorem \ref{MRt1} hold. If we assume that \begin{itemize} \item[$(h1)$] for $n=1,2,\ldots,N$, $F_0^n$, $D_x F_0^n$, $F_1^n$, $R^n$, and $\beta_0$ are $\CC^2$ functions and the boundaries of the sets $S_n$ are piecewise $\CC^2$ embedded hypersurfaces in $\R\times D$, \end{itemize} then $f_1$ is a $\CC^1$ function on $\ov V$.
Moreover, if we assume in addition that hypothesis $(h4)$ holds, then for $\e>0$ sufficiently small, there exists a $T$--periodic solution $\f(t,\e)$ of system \eqref{MRs1} such that $\f(0,\e)\to z_a$ as $\e\to 0$. \end{mtheorem} \smallskip In what follows we provide an application of Theorems \ref{MRt1} and \ref{MRt2}. We study the existence of limit cycles which bifurcate from the periodic solutions of the linear differential system $(\dot{u},\dot{v},\dot{w})=(-v,u,w)$ perturbed inside the class of all discontinuous piecewise linear differential systems with two zones separated by the plane $\Sigma=\{v=0\}\subset\R^3$, i.e. \begin{equation}\label{lep} \left(\begin{array}{C} \dot{u}\\ \dot{v}\\ \dot{w}\\ \end{array}\right)=\left\{\begin{array}{L} \left(\begin{array}{L} -v+\e(a_1^++b_1^+ u+c_1^+ v+d_1^+ w)\\ u+\e(a_2^++b_2^+ u+c_2^+ v+d_2^+ w)\\ w+\e(a_3^++b_3^+ u+c_3^+ v+d_3^+ w) \end{array}\right) \quad \textrm{if} \quad v>0,\vspace{0.2cm}\\ \left(\begin{array}{L} -v+\e(a_1^-+b_1^- u+c_1^- v+d_1^- w)\\ u+\e(a_2^-+b_2^- u+c_2^- v+d_2^- w)\\ w+\e(a_3^-+b_3^- u+c_3^- v+d_3^- w) \end{array}\right) \quad \textrm{if} \quad v<0. \end{array}\right. \end{equation} Our result on the existence of a limit cycle of system \eqref{lep} is the following. \begin{proposition}\label{p1} If $(a_2^--a_2^+)(b_1^- +b_1^+ +c_2^- +c_2^+)>0$, then for $|\e|>0$ sufficiently small there exists a periodic solution $(u(t,\e),v(t,\e),w(t,\e))$ of system \eqref{lep} such that $w(0,\e)\to 0$ when $\e\to 0$. Moreover, we can find $(u^*,v^*)\in\R^2$ such that \[ ||(u^*,v^*)||=\dfrac{4(a_2^- - a_2^+)}{\pi(b_1^-+b_1^++c_2^-+c_2^+)}, \] and $(u(0,\e),v(0,\e))\to(u^*,v^*)$ when $\e\to 0$. \end{proposition} Proposition \ref{p1} is proved in Section \ref{PP}. \section{Proof of Theorem \ref{MRt1}}\label{s2} Before proving our main result we state some preliminary lemmas.
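Before entering the proof, we illustrate how the averaged function \eqref{f1} detects the limit cycle of Proposition \ref{p1}. For the unperturbed system $(\dot u,\dot v,\dot w)=(-v,u,w)$ the fundamental matrix $Y(t,z)$ acts on the $(u,v)$-block as the rotation by angle $t$, and $Y(2\pi,z)$ is the identity there, so the $(u,v)$-components of $y_1(2\pi,z_\al)$ reduce to an integral of the rotated perturbation along the circular orbit. The following Python sketch computes $f_1$ by a Riemann sum for one illustrative choice of the coefficients of system \eqref{lep}; all numerical values below are our own assumptions, chosen so that the hypothesis of Proposition \ref{p1} holds.

```python
import math

# Illustrative coefficients for system (lep); this particular choice is an
# assumption of the sketch.  It satisfies the hypothesis of Proposition 1:
# (a2m - a2p)*(b1m + b1p + c2m + c2p) = 1 * 4 > 0.
a2p, a2m = 0.0, 1.0            # a_2^+ and a_2^-
b1p = b1m = c2p = c2m = 1.0    # b_1^+, b_1^-, c_2^+, c_2^-
# every remaining coefficient is taken to be zero

def F1_uv(u, v):
    """(u,v)-components of the perturbation of (lep) on the plane w = 0."""
    if v > 0:
        return (b1p * u, a2p + c2p * v)
    return (b1m * u, a2m + c2m * v)

def f1(alpha1, alpha2, n=100000):
    """Averaged function f_1(alpha) of (f1), computed by a midpoint Riemann
    sum: Y(s,z)^{-1} acts on the (u,v)-block as the rotation by angle -s."""
    Iu = Iv = 0.0
    h = 2.0 * math.pi / n
    for k in range(n):
        s = (k + 0.5) * h
        u = alpha1 * math.cos(s) - alpha2 * math.sin(s)
        v = alpha1 * math.sin(s) + alpha2 * math.cos(s)
        g1, g2 = F1_uv(u, v)
        Iu += (math.cos(s) * g1 + math.sin(s) * g2) * h
        Iv += (-math.sin(s) * g1 + math.cos(s) * g2) * h
    return Iu, Iv

# Radius of the zero of f_1 predicted by Proposition 1: here 1/pi.
r_star = 4.0 * (a2m - a2p) / (math.pi * (b1m + b1p + c2m + c2p))
fu, fv = f1(r_star, 0.0)
print(abs(fu) < 1e-3, abs(fv) < 1e-3)
```

With this choice of coefficients the zero of $f_1$ lies at radius $1/\pi$, in agreement with the formula $||(u^*,v^*)||=4(a_2^--a_2^+)/(\pi(b_1^-+b_1^++c_2^-+c_2^+))$ of Proposition \ref{p1}.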
Given a function $\xi:[0,1]\rightarrow\R^d$ we say that $\xi(\e)=\CO(\e^{\ell})$ for some positive integer $\ell$ if there exist constants $\e_1>0$ and $k>0$ such that $||\xi(\e)||\leq k|\e^{\ell}|$ for $0\leq\e\leq\e_1$, and that $\xi(\e)=o(\e^{\ell})$ for some positive integer $\ell$ if \[ \lim_{\e\to 0}\dfrac{||\xi(\e)||}{\e^{\ell}}=0. \] Here $||\cdot||$ denotes the usual Euclidean norm of $\R^d$. The symbols $\CO$ and $o$ are called the {\it Landau symbols} (see for instance \cite{SVM}). \begin{lemma}\label{l1} Under the hypotheses $(H)$, $(H1)$, and $(H3)$ of Theorem \ref{MRt1} there exist an open and bounded subset $C$ of $U\backslash\p\Sigma_0$, a compact subset $Z\subset C$ with $\CZ\subset Z^{\circ}$, and a small parameter $\e_0>0$ such that $t_{(z,\e)}>T$ for every $z\in C$ and $\e\in[0,\e_0]$. Moreover $x(t,z,\e)=x(t,z,0)+\e y_1(t,z)+o(\e)$ for every $z \in Z$, $\e\in[0,\e_0]$, and $t\in[0,T]$. Here $Z^{\circ}$ denotes the interior of the set $Z$ with respect to the topology of $D$, and the function $y_1$ is given in \eqref{y1}. \end{lemma} \begin{proof} We note that $\CZ$ is a compact subset of $D$, $\p\Sigma_0$ is a closed subset of $D$ and, from hypothesis $(H)$, $\CZ\cap\p\Sigma_0=\emptyset$. So there exists an open subset $A$ of $D$ such that $\CZ\subset A$ and $\ov A\cap \p\Sigma_0=\emptyset$. \smallskip Also from hypothesis $(H)$ we have that for $\al\in \ov V$ the continuous function $x_{\al}(t)$ reaches the set $\Sigma$ only at points of $\Sigma^c$.
Since this function is $T$--periodic we can find a finite sequence $(t^i_{\al})$ for $i=0,1,\ldots,\kappa_{\al}$ with $t_{\al}^0=0$ and $t_{\al}^{\kappa_{\al}}=T$ such that \[ x_{\al}(t)= \begin{cases} \begin{array}{CCRL} x^1_{\al}(t) &\textrm{if}&0=&t^0_{\al}\leq t\leq t^1_{\al},\\ x^2_{\al}(t) &\textrm{if}&&t^1_{\al}\leq t\leq t_{\al}^2,\\ \vdots\\ x^i_{\al}(t) &\textrm{if}&&t^{i-1}_{\al}\leq t\leq t^i_{\al},\\ \vdots\\ x^{\kappa_{\al}}_{\al}(t) &\textrm{if}&&t^{\kappa_{\al}-1}_{\al}\leq t\leq t_{\al}^{\kappa_{\al}}=T,\\ \end{array} \end{cases} \] where each curve $t\mapsto x_{\al}^i(t)=x^i(t,z_{\al},0)$ reaches the set $\Sigma^c$ only at $t=t^{i-1}_{\al}$ and $t=t^{i}_{\al}$ for $i=2,3,\ldots,\kappa_{\al}-1$, the curve $x_{\al}^1$ reaches the set $\Sigma^c$ only at $t=0$ and $t=t^{1}_{\al}$ if $(0,z_{\al})\in\Sigma$, and only at $t=t^{1}_{\al}$ if $(0,z_{\al})\notin\Sigma$, the curve $x_{\al}^{\kappa_{\al}}$ reaches the set $\Sigma^c$ only at $t=t^{\kappa_{\al}-1}_{\al}$ and $t=T$ if $(T,x(T,z_{\al},0))\in\Sigma$, and only at $t=t^{\kappa_{\al}-1}_{\al}$ if $(T,x(T,z_{\al},0))\notin\Sigma$. From the definition of the crossing region $\Sigma^c$ these intersections are transversal. \smallskip Since $x_{\al}^i$ for $i=1,2,\ldots,\kappa_{\al}$ are solutions of Lipschitz differential equations we can use the results on continuous dependence of the solutions on initial conditions and parameters to ensure the existence of a small parameter $\e_{\al}$ and a small neighborhood $C_{\al}\subset A\cap U$ of $z_{\al}$ such that $\widetilde{C_{\al}}^{\e}\cap\Sigma\subset\Sigma^c$ for every $\e\in[0,\e_{\al}]$. From the compactness of $\CZ$ we can choose $\e_1$ as a minimum of the $\e_{\al}$'s for $\al\in\ov V$. Now taking $C=\cup_{\al\in\ov V} C_{\al}$ it follows that $\widetilde{C}^{\e}\cap\Sigma\subset\Sigma^c$ for every $\e\in[0,\e_1]$.
Moreover, we can take $\e_1>0$ and $C$ smaller in order that the function $t\mapsto x(t,z,\e)$ is defined for all $(t,z,\e)\in \s^1\times \ov C\times[0,\e_1]$. This is again a simple consequence of the theorem on continuous dependence on initial conditions and parameters. \smallskip Thus for $z\in \ov C$ and $\e\in[0,\e_1]$ the function $t\mapsto x(t,z,\e)$ is continuous and piecewise $\C^1$. So we can find a finite sequence $(t^i(z,\e))$ for $i=0,1,\ldots,\kappa_z$ with $t^0(z,\e)=0$ and $t^{\kappa_{z}}(z,\e)=T$ such that \begin{equation}\label{xx} x(t,z,\e)= \begin{cases} \begin{array}{CCRL} x^1(t,z,\e) &\textrm{if}&0=&t^0(z,\e)\leq t\leq t^1(z,\e),\\ x^2(t,z,\e) &\textrm{if}&&t^1(z,\e)\leq t\leq t^2(z,\e),\\ \vdots\\ x^i(t,z,\e) &\textrm{if}&&t^{i-1}(z,\e)\leq t\leq t^i(z,\e),\\ \vdots\\ x^{\kappa_z}(t,z,\e) &\textrm{if}&&t^{\kappa_z-1}(z,\e)\leq t\leq t^{\kappa_z}(z,\e)=T,\\ \end{array} \end{cases} \end{equation} for which we have the following recurrence \begin{equation}\label{rec} \begin{array}{CCC} x^1(0,z,\e)=z &\textrm{and}& x^{i}(t^{i-1}(z,\e),z,\e)=x^{i-1}(t^{i-1}(z,\e),z,\e), \end{array} \end{equation} for $i=2,3,\ldots,\kappa_z$. The crossing region $\Sigma^c$ is an open subset of $\Sigma$, so for each $z\in \ov C$ we can find $0<\e_z\leq \e_1$ sufficiently small such that the number $\kappa_z$ of intersections of the curve $t\mapsto x(t,z,\e)$ with the set $\Sigma^c$ for $0\leq t\leq T$ and for $\e\in[0,\e_z]$ does not depend on $\e$. Since $\ov C$ is compact we can choose $\e_2$ as a minimum of the $\e_z$'s for $z\in\ov C$ such that the above statement holds for every $z\in\ov C$ and $\e\in[0,\e_2]$.
\smallskip Here again for every $z\in\ov C$ and $\e\in[0,\e_2]$ each curve $t\mapsto x^i(t,z,\e)$ reaches the set $\Sigma^c$ only at $t=t^{i-1}(z,\e)$ and $t=t^{i}(z,\e)$ for $i=2,3,\ldots,\kappa_{z}-1$, the curve $x^1(t,z,\e)$ reaches the set $\Sigma^c$ only at $t=0$ and $t=t^{1}(z,\e)$ if $(0,z)\in\Sigma$, and only at $t=t^{1}(z,\e)$ if $(0,z)\notin\Sigma$, the curve $x^{\kappa_{z}}(t,z,\e)$ reaches the set $\Sigma^c$ only at $t=t^{\kappa_{z}-1}(z,\e)$ and $t=T$ if $(T,x(T,z,0))\in\Sigma$, and only at $t=t^{\kappa_{z}-1}(z,\e)$ if $(T,x(T,z,0))\notin\Sigma$. \smallskip The functions $t\mapsto x^i(t,z,\e)$ for $i=1,2,\ldots,\kappa_z$ are $\C^1$ and satisfy the DPDS \eqref{MRs1}, so there exists a finite sequence $(n_i)$ for $i=1,\ldots,\kappa_z$ with $n_i\in\{1,2,\ldots,N\}$ such that \begin{equation}\label{xi} \dfrac{\p}{\p t}x^i(t,z,\e)=F_0^{n_i}(t,x^i(t,z,\e))+\e F_1^{n_i}(t,x^i(t,z,\e))+\e^2R^{n_i}(t,x^i(t,z,\e),\e). \end{equation} Therefore the function $x^i(t,z,\e)$ is the solution of the {\it Cauchy Problem} defined by the differential system \eqref{xi} together with the corresponding initial condition given in \eqref{rec}. Moreover $x^i(t,z_{\al},0)=x^i_{\al}(t)$ and $t^i(z_{\al},0)=t_{\al}^i$ for $i=1,2,\ldots,\kappa_z$. \smallskip From the continuity of the function $x(t,z,\e)$ we can choose a compact subset $K$ of $D$ such that $x(t,z,\e)\in K$ for all $(t,z,\e)\in\s^1\times \ov{C}\times[0,\e_2]$. From the continuity of the functions $F_i^n$ and $R^n$ for $i=0,1$ and $n=1,2,\ldots,N$ we have that these functions are bounded on the compact set $\s^1\times K\times[0,\e_2]$, so let $M$ be an upper bound for all these functions. Let $L$ be the maximum Lipschitz constant of the functions $F_i^n$, $DF_0^n$, and $R^n$ for $i=0,1$ and $n=1,2,\ldots,N$ on the compact set $\s^1\times K\times[0,\e_2]$.
\smallskip We compute \[ \left|\left|\int_0^t R(s,x(s,z,\e),\e)ds\right|\right|\leq\int_0^T||R(s,x(s,z,\e),\e)||ds\leq TM, \] which implies that $\dint_0^t R(s,x(s,z,\e),\e)ds=\CO(1)$ in the parameter $\e$. \smallskip For $z\in\ov C$ and $t\in(0,T)$ we can find $\ov{\kappa}\in\{1,2,\ldots,\kappa_z-1\}$ such that $t\in[t^{\ov{\kappa}-1}(z,\e),t^{\ov{\kappa}}(z,\e))$ and \[ \begin{array}{RL} x(t,z,\e)=&x^{\ov{\kappa}}(t,z,\e)\vspace{0.3mm}\\ =&x^{\ov{\kappa}-1}(t^{\ov{\kappa}-1}(z,\e),z,\e)+\int_{ t^{\ov{\kappa}-1}(z,\e)}^t F_0(s,x(s,z,\e))ds\vspace{0.2cm}\\ &+\e\int_{t^{\ov{\kappa}-1}(z,\e)}^t F_1(s,x(s,z,\e))ds+\CO(\e^{2}). \end{array} \] Since \[ \begin{array}{RL} x^{i}(t^{i}(z,\e),z,\e)=&x^{i-1}(t^{i-1}(z,\e),z,\e)+\int_{t^{i-1}(z,\e)}^ {t^{i}(z,\e)} F_0(t,x(t,z,\e))dt\vspace{0.2cm}\\ &+\e \int_{t^{i-1}(z,\e)}^ {t^{i}(z,\e)} F_1(t,x(t,z,\e))dt+\CO(\e^2), \end{array} \] for $i=1,2,\ldots,\kappa_z$, we obtain, proceeding by induction on $i$, that \begin{equation}\label{x} x(t,z,\e)=z+\int_0^tF_0(s,x(s,z,\e))ds+\e\int_0^tF_1(s,x(s,z,\e))ds+\CO(\e^2). \end{equation} From here the proof of the lemma follows by proving several claims. \begin{claim}\label{c1} There exists a small parameter $\e_0>0$ such that for any $z\in Z$ and for $i=0,1,2,\ldots,\kappa_z$ the function $(\zeta,\e)\mapsto t^i(\zeta,\e)$ is of class $\C^1$ on a neighborhood $U_z\subset C$ of $z$ for $\e\in[0,\e_0]$, and $(\p\, t^i/\p\e)(z,0)=0$. Moreover for $t^{i-1}(z,0)\leq t\leq t^i(z,0)$ we have that $y_1(t,z)=(\p\,x^i/\p\e)(t,z,0)$ for $i=1,2,\ldots,\kappa_{z}$. \end{claim} First of all we note that $t^0(z,\e)=0$ and $t^{\kappa_z}(z,\e)=T$. So the first part of Claim \ref{c1} is clearly true for $i=0$ and $i=\kappa_z$. \smallskip We have concluded above that for each $z\in Z$ the curve $t\mapsto x(t,z,0)$ reaches the discontinuity set only at points of $\Sigma^c$.
Let $z^i=x^i(t^i(z,0),z,0)$ and $p^i_z=(t^i(z,0),z^i)$; then $p^i_z\in\Sigma^c$ for every $i=1,2,\ldots,\kappa_{z}$ if $(T,x(T,z,0))\in\Sigma$, and for every $i=1,2,\ldots,\kappa_{z}-1$ if $(T,x(T,z,0))\notin\Sigma$. In particular $p^i_z$ is a generic point of $\Sigma$, so there exists a neighborhood $G_{p^i_z}$ of $p^i_z$ such that $\CS_{p^i_z}=G_{p^i_z}\cap\Sigma$ is a $\CC^m$ embedded hypersurface of $\s^1\times D$ with $m\geq1$. It is well known that $\CS_{p^i_z}$ can be locally described as the inverse image of a regular value of a $\CC^m$ function. Thus there exists a small neighborhood $\breve G_{p^i_z}$ of $p^i_z$ with $\breve G_{p^i_z}\subset G_{p^i_z}$ and a $\CC^m$ function $h_i:\breve G_{p^i_z}\rightarrow \R $ such that $\breve G_{p^i_z}\cap\CS_{p^i_z} =h_i^{-1}(0)\cap\Sigma$. \smallskip For $(t,x)\in \breve G_{p^i_z}$ system \eqref{MRs1} can be written as the autonomous system \[ \begin{pmatrix}\tau'\\x'\end{pmatrix}= \begin{cases} X(\tau,x,\e)\quad \textrm{if}\quad h_i(\tau,x)>0,\vspace{0.2cm}\\ Y(\tau,x,\e)\quad \textrm{if}\quad h_i(\tau,x)<0, \end{cases} \] where \[ \begin{array}{C} X(\tau,x,\e)=\left(\begin{array}{C}1 \\ F_0^{n_{i+1}}(\tau,x)+\e F_1^{n_{i+1}}(\tau,x)+ \e^2R^{n_{i+1}}(\tau,x,\e)\end{array} \right),\vspace{0.2mm}\\ Y(\tau,x,\e)=\left(\begin{array}{C}1 \\ F_0^{n_i}(\tau,x)+\e F_1^{n_i}(\tau,x)+\e^2R^{n_i}(\tau,x,\e)\end{array}\right). \end{array} \] From the definition of the crossing region we also have $X h_i(p^i_z,0)\, Y h_i(p^i_z,0)>0$, where $Xh_i$ and $Yh_i$ denote the derivatives of $h_i$ along the vector fields $X$ and $Y$; then \begin{equation}\label{Yh} \begin{array}{RL} 0\neq Y h_i(p^i_z,0)&=\left\langle \left(\dfrac{\partial h_i}{\partial t}(p^i_z), \dfrac{\partial h_i}{\partial x}(p^i_z)\right),\left(1,F_0^{n_{i}}(p^i_z)\right)\right\rangle\vspace{0.2cm}\\ &=\dfrac{\partial h_i}{\partial t}(p^i_z)+\dfrac{\partial h_i}{\partial x}(p^i_z)F_0^{n_{i}}(p^i_z). \end{array} \end{equation} Now defining $H_i(t,\zeta,\e)= h_i(t,x^i(t,\zeta,\e))$ we have that $H_i(t^i(z,0),z,0)=0$.
Since \[ \begin{array}{RL} \dfrac{\partial H_i}{\partial t}(t^i(z,0),z,0)=&\left.\dfrac{\partial} {\partial t}h_i(t,x^i(t,\zeta,\e))\right|_{(t,\zeta,\e)=(t^i(z,0),z,0)}\vspace{0.3cm}\\ =& \dfrac{\partial h_i}{\partial t}(t^i(z,0),x^i(t^i(z,0),z,0))\vspace{0.2cm}\\ &+\dfrac{\partial h_i} {\partial x}(t^i(z,0),x^i(t^i(z,0),z,0))\dfrac{\partial x^i}{\partial t}(t^i(z,0),z,0)\vspace{0.3cm}\\ =& \dfrac{\partial h_i}{\partial t}(p^i_z)+\dfrac{\partial h_i} {\partial x}(p^i_z)\dfrac{\partial x^i}{\partial t}(t^i(z,0),z,0)\vspace{0.3cm}\\ =&\dfrac{\partial h_i}{\partial t}(p^i_z)+\dfrac{\partial h_i} {\partial x}(p^i_z)F_0^{n_{i}}(p^i_z)\vspace{0.3cm}\\ =&Y h_i(p^i_z,0)\neq0, \end{array} \] from the Implicit Function Theorem we conclude that there exist a small neighborhood $U_z\subset C$ of $z$ and a small parameter $\tilde{\e}_z>0$ such that $t^i(\zeta,\e)$ is the unique $\CC^m$ function with $H_i(t^i(\zeta,\e),\zeta,\e)=0$ for every $\zeta\in U_z$ and $\e\in[0,\tilde{\e}_z]$. So \begin{equation}\label{ti} t^i(\zeta,\e)=t^i(\zeta,0)+\e \dfrac{\p\,t^i}{\p\e}(\zeta,0)+o(\e) \end{equation} for every $i=1,2,\ldots,\kappa_{z}-1$. Since $Z$ is compact we can take $\e_0$ as a minimum of the $\tilde{\e}_z$'s for $z\in Z$. \smallskip Now we shall use finite induction to conclude the proof of Claim \ref{c1}.
We note that $h_i(t^i(z,\e), x^i(t^i(z,\e),z,\e))=0$ for $\e\in[0,\e_0]$, so \begin{equation}\label{ind1} \begin{array}{RL} 0=&\dfrac{\partial}{\partial \e}h_i(t^i(z,\e), x^i(t^i(z,\e),z,\e))\Big|_{\e=0}\vspace{0.3cm}\\ =&\dfrac{\partial h_i}{\partial\, t}(p^i_z)\dfrac{\p\,t^i}{\p\e}(z,0)+\dfrac{\partial h_i}{\partial x} (p^i_z)\left(\dfrac{\partial x^i}{\partial t}(t^i(z,0),z,0)\dfrac{\p\, t^i}{\p\e}(z,0)\right.\vspace{0.2cm}\\ &\left.+\dfrac{\partial x^i}{\partial \e}(t^i(z,0),z,0)\right)\vspace{0.3cm}\\ =&\dfrac{\partial h_i}{\partial\, t}(p^i_z)\dfrac{\p\,t^i}{\p\e}(z,0)+\dfrac{\partial h_i}{\partial x} (p^i_z)\left(F_0^{n_i}(p^i_z)\dfrac{\p\, t^i}{\p\e}(z,0)+\dfrac{\partial x^i}{\partial \e}(t^i(z,0),z,0)\right)\vspace{0.3cm}\\ =&\left\langle \nabla h_i(p^i_z),\Big(\dfrac{\p\,t^i}{\p\e}(z,0), F_0^{n_i}(p^i_z)\dfrac{\p\, t^i}{\p\e}(z,0)+\dfrac{\partial x^i}{\partial \e}(t^i(z,0),z,0)\Big)\right\rangle, \end{array} \end{equation} for $i=1,2,\ldots,\kappa_z$. \smallskip Taking $i=1$, from \eqref{xi} we obtain that \begin{equation}\label{ind2} \dfrac{d}{dt}\left(\dfrac{\p x^1}{\p \e}(t,z,0)\right)=DF_0^{n_1}(t,x^1(t,z,0))\left(\dfrac{\p x^1}{\p \e}(t,z,0)\right)+F_1^{n_1}(t,x^1(t,z,0)). \end{equation} So for $0\leq t\leq t^1(z,0)$ the differential system \eqref{ind2} becomes \begin{equation}\label{ind3} \dfrac{d}{dt}\left(\dfrac{\p x^1}{\p\e}(t,z,0)\right)=DF_0(t,x(t,z,0))\left(\dfrac{\p x^1}{\p\e}(t,z,0)\right)+F_1(t,x(t,z,0)). \end{equation} Since $\dfrac{\p x^1}{\p \e}(0,z,0)=0$ the solution of the linear differential system \eqref{ind3} is \begin{equation}\label{lab1} \dfrac{\p x^1}{\p\e}(t,z,0)=Y(t,z)\int_0^tY(s,z)^{-1} F_1(s,x(s,z,0))ds=y_1(t,z), \end{equation} for $0\leq t\leq t^1(z,0)$. Now from hypothesis $(H3)$ and from equality \eqref{ind1}, for $i=1$, we have that \begin{equation}\label{proc1} \Big(\la \dfrac{\p\,t^1}{\p\e}(z,0), \la F_0^{n_1}(p^1_z)\dfrac{\p\, t^1}{\p\e}(z,0)+y_1(t^1(z,0),z)\Big)\in T_{p^1_z}\Sigma \end{equation} for every $\la\in[0,1]$.
Thus \begin{equation}\label{proc2} \begin{array}{RL} 0=&\left\langle \nabla h_1(p^1_z),\Big(\la \dfrac{\p\,t^1}{\p\e}(z,0), \la F_0^{n_1}(p^1_z)\dfrac{\p\, t^1}{\p\e}(z,0)+y_1(t^1(z,0),z)\Big)\right\rangle\vspace{0.2cm}\\ =&\la\left(\dfrac{\partial h_1}{\partial\, t}(p^1_z)\dfrac{\p\,t^1}{\p\e}(z,0)+\dfrac{\partial h_1}{\partial x} (p^1_z)F_0^{n_1}(p^1_z)\dfrac{\p\, t^1}{\p\e}(z,0)\right)+\dfrac{\partial h_1}{\partial x} (p^1_z)y_1(t^1(z,0),z)\vspace{0.2cm}\\ =&\la Yh_1(p^1_z,0)\dfrac{\p\,t^1}{\p\e}(z,0)+\dfrac{\partial h_1}{\partial x} (p^1_z)y_1(t^1(z,0),z), \end{array} \end{equation} for every $\la\in[0,1]$. Computing the derivative with respect to $\la$ in \eqref{proc2} it follows that $Yh_1(p^1_z,0)\dfrac{\p\,t^1}{\p\e}(z,0)=0$. So from \eqref{Yh} we obtain that \begin{equation}\label{lab2} \dfrac{\p\,t^1}{\p\e}(z,0)=0. \end{equation} Hence from \eqref{lab1} and \eqref{lab2} the claim is proved for $i=1$. \smallskip Given a positive integer $\ell>1$, we assume by the induction hypothesis that Claim \ref{c1} is true for $i=\ell-1$. Taking $i=\ell$ from \eqref{xi} we have that \begin{equation}\label{ind4} \dfrac{d}{dt}\left(\dfrac{\p x^{\ell}}{\p \e}(t,z,0)\right)=DF_0^{n_{\ell}}(t,x^{\ell}(t,z,0))\left(\dfrac{\p x^{\ell}}{\p \e}(t,z,0)\right)+F_1^{n_{\ell}}(t,x^{\ell}(t,z,0)). \end{equation} So for $t^{\ell-1}(z,0)\leq t\leq t^{\ell}(z,0)$ the differential system \eqref{ind4} becomes \begin{equation}\label{ind5} \dfrac{d}{dt}\left(\dfrac{\p x^{\ell}}{\p\e}(t,z,0)\right)=DF_0(t,x(t,z,0))\left(\dfrac{\p x^{\ell}}{\p\e}(t,z,0)\right)+F_1(t,x(t,z,0)). \end{equation} From \eqref{rec} we have that $x^{\ell}(t^{\ell-1}(z,\e),z,\e)=x^{\ell-1}(t^{\ell-1}(z,\e),z,\e)$ for every $\e\in[0,\e_0]$.
Computing its derivative with respect to $\e$ at $\e=0$ we obtain that \begin{equation}\label{ine6} \begin{array}{L} \dfrac{\p x^{\ell}}{\p\,t}(t^{\ell-1}(z,0),z,0)\dfrac{\p\, t^{\ell-1}}{\p \e}(z,0)+ \dfrac{\p x^{\ell}}{\p \e}(t^{\ell-1}(z,0),z,0)=\vspace{0.2cm}\\ \dfrac{\p x^{\ell-1}}{\p t}(t^{\ell-1}(z,0),z,0)\dfrac{\p \, t^{\ell-1}}{\p \e}(z,0)+ \dfrac{\p x^{\ell-1}}{\p \e}(t^{\ell-1}(z,0),z,0). \end{array} \end{equation} So from the induction hypothesis it follows that \begin{equation}\label{ine7} \dfrac{\p x^{\ell}}{\p \e}(t^{\ell-1}(z,0),z,0)=\dfrac{\p x^{\ell-1}}{\p \e}(t^{\ell-1}(z,0),z,0)=y_1(t^{\ell-1}(z,0),z). \end{equation} We note that \eqref{ine7} is the initial condition for system \eqref{ind5}. Thus for $t^{\ell-1}(z,0)\leq t\leq t^{\ell}(z,0)$, solving the linear differential equation \eqref{ind5} we get that \begin{equation}\label{ine8} \dfrac{\p x^{\ell}}{\p\e}(t,z,0)=\widetilde Y(t,z)y_1(t^{\ell-1}(z,0),z)+\widetilde Y(t,z)\int_{t^{\ell-1}(z,0)}^t\widetilde Y(s,z)^{-1} F_1(s,x(s,z,0))ds, \end{equation} where $\widetilde Y(t,z)$ is the fundamental matrix of the linear differential system \eqref{lin} such that $\widetilde Y(t^{\ell-1}(z,0),z)$ is the identity matrix. Clearly $\widetilde Y(t,z)=Y(t,z)Y(t^{\ell-1}(z,0),z)^{-1}$. So substituting \eqref{y1} in \eqref{ine8} we get \begin{equation}\label{ine9} \begin{array}{RL} \dfrac{\p x^{\ell}}{\p\e}(t,z,0)=&Y(t,z)\int_0^{t^{\ell-1}(z,0)}Y(s,z)^{-1} F_1(s,x(s,z,0))ds\vspace{0.2cm}\\ &+Y(t,z)\int_{t^{\ell-1}(z,0)}^t Y(s,z)^{-1} F_1(s,x(s,z,0))ds\vspace{0.3cm}\\ =&Y(t,z)\int_{0}^{t} Y(s,z)^{-1} F_1(s,x(s,z,0))ds=y_1(t,z), \end{array} \end{equation} for $t^{\ell-1}(z,0)\leq t\leq t^{\ell}(z,0)$. Now repeating the procedure of \eqref{proc1} and \eqref{proc2} for $i=\ell$ we conclude that $\dfrac{\p\,t^{\ell}}{\p\e}(z,0)=0$. So we have proved Claim \ref{c1}. \bigskip \begin{claim}\label{c2} The equality $x(t,z,\e)=x(t,z,0)+\CO(\e)$ holds for every $z\in Z$ and $\e\in[0,\e_0]$.
\end{claim} If $t\in[t^{\ov{\kappa}-1}(z,\e),t^{\ov{\kappa}}(z,\e))$ then we compute \[ \begin{array}{RL} \int_0^t F_0(s,x(s,z,\e))ds =& \sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,\e)}^{t^{i}(z,\e)}F_0^{n_i}(s,x(s,z,\e))ds\right)\vspace{0.2cm}\\ &+\int_{t^{\ov{\kappa}-1}(z,\e)}^{t}F_0^{n_{\ov{\kappa}}}(s,x(s,z,\e))ds\vspace{0.3cm}\\ =& \sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,0)}^{t^{i}(z,0)}F_0^{n_i}(s,x(s,z,\e))ds\right)\vspace{0.2cm}\\ &+\int_{t^{\ov{\kappa}-1}(z,0)}^{t}F_0^{n_{\ov{\kappa}}}(s,x(s,z,\e))ds+E_0(\e),\\ \end{array} \] where \[ \begin{array}{RL} E_0(\e)=&\sum_{i=1}^{\ov{\kappa}-1}\left(\int_{t^{i-1}(z,\e)}^{t^{i-1}(z,0)} F_0^{n_i}(s,x(s,z,\e))ds-\int_{t^{i}(z,\e)}^{t^{i}(z,0)}F_0^{n_i} (s,x(s,z,\e))ds \right)\vspace{0.2cm}\\ &+ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t^{\ov{\kappa}-1}(z,0)}F_0^{n_{\ov{\kappa}}} (s,x(s,z,\e))ds. \end{array} \] It is easy to see that there exists a constant $\ov E$ such that \begin{equation}\label{E} ||E_0(\e)||\leq \ov E \sum_{i=0}^{\ov{\kappa}-1}|t^i(z,0) - t^i(z,\e)|. \end{equation} Indeed the functions $F_0^{n_i}(t,x)$ are bounded on the set $\s^1\times K$, so \[ \begin{array}{RL} \left|\left|\int_{t^{i}(z,\e)}^{t^{i}(z,0)}F_0^{n_i} (s,x(s,z,\e))ds\right|\right|&\leq\int_{t^{i}(z,\e)}^{t^{i}(z,0)}\left|\left|F_0^{n_i} (s,x(s,z,\e))\right|\right|ds\vspace{0.2cm}\\ &\leq M|t^i(z,0) - t^i(z,\e)|, \end{array} \] for $i=0,1,2,\ldots,\ov{\kappa}$. \smallskip From Claim \ref{c1} we conclude that $E_0(\e)=o(\e)$, in particular $E_0(\e)=\CO(\e)$. Thus \begin{equation}\label{FF0} \begin{array}{RL} \int_0^t F_0(s,x(s,z,\e))ds=&\sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,0)}^{t^{i}(z,0)}F_0^{n_i}(s,x(s,z,\e))ds\right)\vspace{0.2cm}\\ &+\int_{t^{\ov{\kappa}-1}(z,0)}^{t}F_0^{n_{\ov{\kappa}}}(s,x(s,z,\e))ds+\CO(\e).
\end{array} \end{equation} Using the fact that the functions $F_0^{n_i}$ for $i=1,2,\ldots,\kappa_z$ are locally Lipschitz in the second variable together with \eqref{FF0} we obtain \[ \begin{array}{RL} &\left|\left|\int_0^t F_0(s,x(s,z,\e))-F_0(s,x(s,z,0))ds\right|\right|\vspace{0.2cm}\\ \leq&\sum_{i=1}^{\ov{\kappa}-1} \int_{t^{i-1}(z,0)}^{t^{i}(z,0)}\big|\big|F_0^{n_i}(s,x(s,z,\e))-F_0^{n_i}(s,x(s,z,0))\big|\big|ds\vspace{0.2cm}\\ &+\int_{t^{\ov{\kappa}-1}(z,0)}^{t}\big|\big|F_0^{n_{\ov{\kappa}}}(s,x (s,z,\e))-F_0^{n_{\ov{\kappa}}}(s,x(s,z,0))\big|\big|ds+\CO(\e)\vspace{0.2cm}\\ \leq& L\sum_{i=1}^{\ov{\kappa}-1} \int_{t^{i-1}(z,0)}^{t^{i}(z,0)}||x(s,z,\e)-x(s,z,0)||ds\vspace{0.2cm}\\ &+L\int_{t^{\ov{\kappa}-1}(z,0)}^{t}||x(s,z,\e)-x(s,z,0)||ds+\CO(\e)\vspace{0.2cm}\\ =&L\int_{0}^{t}||x(s,z,\e)-x(s,z,0)||ds + \CO(\e). \end{array} \] From \eqref{x} this implies that \begin{equation}\label{gron} \begin{array}{RL} ||x(t,z,\e)-x(t,z,0)||\leq&\int_0^t||F_0(s,x(s,z,\e))-F_0(s,x(s,z,0))||ds\\ &+|\e|\int_0^t||F_1(s,x(s,z,\e))||ds+\CO(\e^2)\vspace{0.2cm}\\ \leq&|\e|MT+L\int_{0}^{t}||x(s,z,\e)-x(s,z,0)||ds+\CO(\e)\vspace{0.2cm}\\ \leq&|\e|M T e^{TL}+\CO(\e).\\ \end{array} \end{equation} The last inequality is a consequence of the Gronwall Lemma (see, for example, Lemma 1.3.1 of \cite{SVM}). \smallskip From \eqref{gron} we conclude that $x(t,z,\e)=x(t,z,0)+\CO(\e)$. So we have proved Claim \ref{c2}. \bigskip \begin{claim}\label{c3} The equality $x(t,z,\e)=x(t,z,0)+\e y_1(t,z)+o(\e)$ holds for every $z\in Z$ and $\e\in[0,\e_0]$. \end{claim} In the proof of Lemma 1 of \cite{LNT2} it has been proved that \begin{equation}\label{F0F1} \begin{array}{RL} F_0^{n_i}(t,x^i(t,z,\e))=&F_0^{n_i}(t,x^i(t,z,0))+D_x F^{n_i}_0(t,x^i(t,z,0))\vspace{0.2cm}\\ &\cdot(x^i(t,z,\e)-x^i(t,z,0))+\CO(\e^2), \vspace{0.3cm}\\ F_1^{n_i}(t,x^i(t,z,\e))=&F_1^{n_i}(t,x^i(t,z,0))+\CO(\e), \end{array} \end{equation} for all $t^{i-1}(z,\e)\leq t\leq t^i(z,\e)$ and for every $i=1,2,\ldots,\kappa_z$.
In what follows we give a sketch of the proof. \smallskip For $\la\in[0,1]$ let $\ell_{\la}(t)=\la\, x^i(t,z,\e)+(1-\la)\,x^i(t,z,0)$ and $\CL_1(\la)=F_0^{n_i}\big(t,\ell_{\la}(t)\big)$. So \begin{equation}\label{lab5} \begin{array}{RL} F_0^{n_i}(t,x^i(t,z,\e))=&F_0^{n_i}(t,x^i(t,z,0))+\CL_1(1)-\CL_1(0)\vspace{0.3cm}\\ =&F_0^{n_i}(t,x^i(t,z,0))+\int_0^1\CL_1'(\la)d\la\vspace{0.3cm}\\ =&F_0^{n_i}(t,x^i(t,z,0))+\int_0^1D_x F_0^{n_i}(t,\ell_{\la}(t))d\la\vspace{0.2cm}\\ &\cdot(x^i(t,z,\e)-x^i(t,z,0))\vspace{0.3cm}\\ =&\int_0^1\Big[D_x F_0^{n_i}(t,\ell_{\la}(t))-D_x F_0^{n_i}(t,x^i(t,z,0))\Big]d\la\vspace{0.2cm}\\ &\cdot(x^i(t,z,\e)-x^i(t,z,0))+F_0^{n_i}(t,x^i(t,z,0))\vspace{0.2cm}\\ &+D_x F_0^{n_i}(t,x^i(t,z,0))\cdot(x^i(t,z,\e)-x^i(t,z,0)). \end{array} \end{equation} So observing that the function $D_x F_0^{n_i}(t,x)$ is locally Lipschitz in the variable $x$ and using Claim \ref{c2} in \eqref{lab5} we obtain the equality for $F_0^{n_i}$ of \eqref{F0F1}. The equality for $F_1^{n_i}(t,x)$ of \eqref{F0F1} is obtained directly by using Claim \ref{c2} together with the fact that this function is Lipschitz in the variable $x$. \smallskip From \eqref{F0F1} we obtain that \begin{equation}\label{F0} \begin{array}{RL} F_0^{n_i}(t,x^i(t,z,\e))=&F_0^{n_i}(t,x^i(t,z,0))+\e D_x F^{n_i}_0(t,x^i(t,z,0))\vspace{0.2cm}\\ &\cdot\dfrac{\p x^i}{\p\e}(t,z,0)+\CO(\e^2), \end{array} \end{equation} for all $t^{i-1}(z,\e)\leq t\leq t^i(z,\e)$ and for every $i=1,2,\ldots,\kappa_z$. For the moment we cannot use Claim \ref{c1} to ensure that $\dfrac{\p x^i}{\p\e}(t,z,0)=y_1(t,z)$ because it is only true when $t^{i-1}(z,0)\leq t\leq t^i(z,0)$. \smallskip Given $z\in C$ we have that, for every $t^{i-1}(z,\e)\leq t\leq t^i(z,\e)$, $x^i(t,z,\e)=x(t,z,\e)$ for $i=1,2,\ldots,\kappa_{z}$. Moreover if $t^{i-1}(z,\e)\leq s<t^i(z,\e)$ and $\e\in[0,\e_0]$, then $F_j^{n_i}(s,x^i(s,z,\e))=F_j(s,x(s,z,\e))$ for $j=0,1$ and for every $i=1,2,\ldots,\ov{\kappa}$.
\smallskip If $t^{\ov{\kappa}-1}(z,\e)\leq t\leq t^{\ov{\kappa}}(z,\e)$, then from \eqref{F0F1} we compute \begin{equation}\label{F1} \begin{array}{L} \int_0^t F_1(s,x(s,z,\e))ds=\vspace{0.3cm}\\ \left(\sum_{i=1}^{\ov{\kappa}-1} \int_{t^{i-1}(z,\e)}^{t^{i}(z,\e)}F_1^{n_i}(s,x^i(s,z,\e))ds\right)+ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t}F_1^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,\e))ds=\vspace{0.3cm}\\ \left(\sum_{i=1}^{\ov{\kappa}-1} \int_{t^{i-1}(z,\e)}^{t^{i}(z,\e)}F_1^{n_i}(s,x^i(s,z,0))ds\right)+ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t}F_1^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))ds+\CO(\e)=\vspace{0.3cm}\\ \left(\sum_{i=1}^{\ov{\kappa}-1}\int_{t^{i-1}(z,0)}^{t^{i}(z,0)} F_1^{n_i}(s,x^i(s,z,0))ds\right)+ \int_{t^{\ov{\kappa}-1}(z,0)}^{t} F_1^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))ds+E_1(\e)\vspace{0.2cm}\\ +\CO(\e)=\vspace{0.3cm}\\ \left(\sum_{i=1}^{\ov{\kappa}-1}\int_{t^{i-1}(z,0)}^{t^{i}(z,0)} F_1(s,x(s,z,0))ds\right)+ \int_{t^{\ov{\kappa}-1}(z,0)}^{t} F_1(s,x(s,z,0))ds+E_1(\e)+\CO(\e)=\vspace{0.3cm}\\ \int_{0}^{t}F_1(s,x(s,z,0))ds+E_1(\e)+\CO(\e), \end{array} \end{equation} where \[ \begin{array}{RL} E_1(\e)=&\sum_{i=1}^{\ov{\kappa}-1}\left(\int_{t^{i-1}(z,\e)}^{t^{i-1}(z,0)} F_1^{n_i}(s,x^i(s,z,0))ds-\int_{t^{i}(z,\e)}^{t^{i}(z,0)}F_1^{n_i} (s,x^i(s,z,0))ds \right)\vspace{0.2cm}\\ &+ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t^{\ov{\kappa}-1}(z,0)}F_1^{n_{\ov{\kappa}}} (s,x^{\ov{\kappa}}(s,z,0))ds. \end{array} \] Now, as in the case $E_0(\e)$ of the proof of Claim \ref{c2}, it is easy to see that there exists a constant $\widetilde E$ such that \begin{equation}\label{E} ||E_1(\e)||\leq \widetilde E \sum_{i=0}^{\ov{\kappa}-1}|t^i(z,0) - t^i(z,\e)|. \end{equation} So from Claim \ref{c1} we conclude that $E_1(\e)=o(\e)$ and consequently $E_1(\e)=\CO(\e)$. Going back to \eqref{F1} we obtain \begin{equation}\label{FF1} \int_0^t F_1(s,x(s,z,\e))ds=\int_0^t F_1(s,x(s,z,0))ds+\CO(\e).
\end{equation} From Claim \ref{c1}, $\dfrac{\p x^i}{\p\e}(t,z,0)=y_1(t,z)$ for $t^{i-1}(z,0)\leq t\leq t^i(z,0)$, so from \eqref{F0} we compute \begin{equation}\label{iine1} \begin{array}{L} \int_0^t F_0(s,x(s,z,\e))ds =\vspace{0.3cm}\\ \sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,\e)}^{t^{i}(z,\e)}F_0^{n_i}(s,x^i(s,z,\e))ds\right)+ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t}F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,\e))ds=\vspace{0.3cm}\\ \sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,\e)}^{t^{i}(z,\e)}\big[F_0^{n_i}(s,x^i(s,z,0))+\e D_x F_0^{n_i}(s,x^i(s,z,0))\dfrac{\p x^i}{\p\e}(s,z,0)\big]ds\right)+\vspace{0.2cm}\\ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t}\big[F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))+\e D_x F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))\dfrac{\p x^{\ov{\kappa}}}{\p\e}(s,z,0)\big]ds+\CO(\e^2)=\vspace{0.3cm}\\ \sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,0)}^{t^{i}(z,0)}\big[F_0^{n_i}(s,x^i(s,z,0))+\e D_x F_0^{n_i}(s,x^i(s,z,0))\dfrac{\p x^i}{\p\e}(s,z,0)\big]ds\right)+\vspace{0.2cm}\\ \int_{t^{\ov{\kappa}-1}(z,0)}^{t}\big[F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))+\e D_x F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))\dfrac{\p x^{\ov{\kappa}}}{\p\e}(s,z,0)\big]ds+E_2(\e)\vspace{0.2cm}\\ +\CO(\e^2)=\vspace{0.3cm}\\ \sum_{i=1}^{\ov{\kappa}-1}\left( \int_{t^{i-1}(z,0)}^{t^{i}(z,0)}\big[F_0(s,x(s,z,0))+\e D_x F_0^{n_i}(s,x(s,z,0))y_1(s,z)\big]ds\right)+\vspace{0.2cm}\\ \int_{t^{\ov{\kappa}-1}(z,0)}^{t}\big[F_0(s,x(s,z,0))+\e D_x F_0^{n_{\ov{\kappa}}}(s,x(s,z,0))y_1(s,z)\big]ds+E_2(\e)+\CO(\e^2). \end{array} \end{equation} The last equality comes from observing that $F_0^{n_i}(s,x^i(s,z,0))=F_0(s,x(s,z,0))$ for every $s\in[t^{i-1}(z,0),t^{i}(z,0))$ and $i=1,2,\ldots,\ov{\kappa}$. From definition \eqref{nota} the equality \eqref{iine1} becomes \begin{equation}\label{ine1} \begin{array}{RL} \int_0^t F_0(s,x(s,z,\e))ds=&\int_{0}^{t}\big[F_0(s,x(s,z,0))+\e D_x F_0(s,x(s,z,0))\vspace{0.2cm}\\ &\cdot y_1(s,z)\big]ds+E_2(\e)+\CO(\e^2).
\end{array} \end{equation} Here \[ \begin{array}{RL} E_2(\e)=&\sum_{i=1}^{\ov{\kappa}-1}\left(\int_{t^{i-1}(z,\e)}^{t^{i-1}(z,0)} \big[F_0^{n_i}(s,x^i(s,z,0))+\e D_x F_0^{n_i}(s,x^i(s,z,0))\dfrac{\p x^i}{\p\e}(s,z,0)\big]ds\right.\vspace{0.2cm}\\ &\left.-\int_{t^{i}(z,\e)}^{t^{i}(z,0)}\big[F_0^{n_i}(s,x^i(s,z,0))+\e D_x F_0^{n_i}(s,x^i(s,z,0))\dfrac{\p x^i}{\p\e}(s,z,0)\big]ds \right)\vspace{0.2cm}\\ &+ \int_{t^{\ov{\kappa}-1}(z,\e)}^{t^{\ov{\kappa}-1}(z,0)}\big[F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))+\e D_x F_0^{n_{\ov{\kappa}}}(s,x^{\ov{\kappa}}(s,z,0))\dfrac{\p x^{\ov{\kappa}}}{\p\e}(s,z,0)\big]ds. \end{array} \] It is easy to see that there exists a constant $\widehat E$ such that \begin{equation}\label{E2} ||E_2(\e)||\leq \widehat E \sum_{i=0}^{\ov{\kappa}-1}|t^i(z,0) - t^i(z,\e)|. \end{equation} So from Claim \ref{c1} it follows that $E_2(\e)=o(\e)$. Going back to \eqref{ine1} we have \begin{equation}\label{ine4} \begin{array}{RL} \int_0^t F_0(s,x(s,z,\e))ds=&\int_{0}^{t}F_0(s,x(s,z,0))ds\vspace{0.2cm}\\ &+\e \int_{0}^{t} D_x F_0(s,x(s,z,0))y_1(s,z)ds+o(\e). \end{array} \end{equation} So from \eqref{x}, \eqref{FF1}, and \eqref{ine4} we conclude that \begin{equation}\label{sb1} \begin{array}{RL} x(t,z,\e)=&z+\int_0^t F_0(s,x(s,z,0))ds\vspace{0.2cm}\\ &+\e\int_0^t\big[ D_x F_0(s,x(s,z,0))y_1(s,z)+F_1(s,x(s,z,0))\big]ds+o(\e)\vspace{0.3cm}\\ =&x(t,z,0)+\e y_1(t,z)+o(\e). \end{array} \end{equation} The last equality is a simple consequence of the computations made in Claim \ref{c1}. Indeed from \eqref{ind5} and Claim \ref{c1} if $t^{\ell-1}(z,0)\leq t\leq t^{\ell}(z,0)$, then \[ y_1(t,z)=y_1(t^{\ell-1}(z,0),z)+\int_{t^{\ell-1}(z,0)}^t\big[D_xF_0(s,x(s,z,0))y_1(s,z)+F_1(s,x(s,z,0))\big]ds. \] From here, proceeding by induction on $\ell$, we obtain that \[ y_1(t,z)=\int_0^t\big[ D_xF_0(s,x(s,z,0))y_1(s,z)+F_1(s,x(s,z,0))\big]ds. \] This completes the proof of Claim \ref{c3} and, consequently, the proof of the lemma.
\end{proof} \begin{lemma}\label{l2} Under the hypotheses of Theorem \ref{MRt1} there exists a compact subset $Z$ of $C$ with $\CZ\subset Z^{\circ}$ such that the solution $x(t,z,0)$ of the unperturbed differential system \eqref{ups} is $\C^1$ in the variable $z$ for every $z\in Z$. Moreover $(\p x/\p z)(t,z,0)=Y(t,z)Y(0,z)^{-1}$. The set $Z$ is defined in the statement of Lemma \ref{l1} and $Y$ is the fundamental matrix solution of \eqref{lin}. \end{lemma} \begin{proof} It is easy to see that there exists a compact subset $Z$ of $C$ such that $\CZ\subset Z^{\circ}$. Given $z\in Z$ the solution of the unperturbed system \eqref{ups} starting at $z$ is given by \eqref{xx} by taking $\e=0$. Since $C\cap\p\Sigma_0=\emptyset$ and $Z\subset C$, there exists a neighborhood $U^0\subset C$ of $z$ such that for every $\zeta\in U^0$ the local flow of the unperturbed system \eqref{ups} starting at the point $\zeta$ is given by $x^1(t,\zeta,0)$. We know that $(t^i(z,0),x(t^i(z,0),z,0))\in\Sigma^c$ for $i=1,2,\ldots,\kappa_z-1$. Since $\Sigma^c$ is an open subset of $\Sigma$ we conclude that there exist neighborhoods $U^i\subset \Sigma^c$ of $x(t^i(z,0),z,0)$ in $\Sigma$ for $i=1,2,\ldots,\kappa_z-1$. For $i=\kappa_z$ we have that $x(t^{\kappa_z}(z,0),z,0)=z$, so we take $U^{\kappa_z}=U^0$. Moreover, for each $\zeta\in U^i$ the local flow of the unperturbed system \eqref{ups} starting at $\zeta$ is given by $x^{i+1}(t,\zeta,0)$ for $i=1,2,\ldots,\kappa_z$. Therefore we can choose a small neighborhood $U_z\subset U^0$ such that for every $\zeta\in U_z$, $x(t^{i}(\zeta,0),\zeta,0)\in U^i$ for $i=1,2,\ldots,\kappa_z$. Hence we conclude that for each $z\in Z$ there exists a small neighborhood $U_z\subset C$ of $z$ such that the solution $t\mapsto x(t,\zeta,0)$ can be written as \eqref{xx} for every $\zeta\in U_z$ having the same number $\kappa_z$ of $\C^1$ pieces. \smallskip Let $\f_n(t,t_0,x_0)$ be the solution of the differential equation $x'=F_0^n(t,x)$ such that $\f_n(t_0,t_0,x_0)=x_0$.
From the results on the differentiable dependence of solutions on initial conditions we conclude that each of these functions is of class $\C^1$ in the variables $(t,t_0,x_0)$. Indeed the function $F_0^n$ is $\C^1$ for every $n=1,2,\ldots,N$. From Claim \ref{c1} of the proof of Lemma \ref{l1} for $i=1,2,\ldots,\kappa_z$ the function $t^i(\zeta,\e)$ is of class $\C^1$, for every $\zeta\in U_z$ and $\e\in[0,\e_0]$. \smallskip From \eqref{rec} we have that \begin{equation}\label{in1} \begin{array}{L} x^1(t,\zeta,0)=\f_{n_1}(t,0,\zeta) \quad \textrm{and}\vspace{0.2cm}\\ x^i(t,\zeta,0)=\f_{n_i}(t,t^{i-1}(\zeta,0),x^{i-1}(t^{i-1}(\zeta,0),\zeta,0)), \end{array} \end{equation} for $\zeta\in U_z$ and for $i=2,3,\ldots,\kappa_z$. So for $i=1$ the function $(t,\zeta)\mapsto x^1(t,\zeta,0)=\f_{n_1}(t,0,\zeta)$ is $\C^1$. Moreover for $0\leq t\leq t^1(\zeta,0)$ we have that $\dfrac{\p x^1}{\p z}(t,\zeta,0)=Y(t,\zeta)Y(0,\zeta)^{-1}$. Indeed from \eqref{xi} we have that \begin{equation}\label{lab4} \begin{array}{RL} \dfrac{\p}{\p t}\left(\dfrac{\p x^1}{\p z}(t,\zeta,0)\right)&=D_x F_0^{n_1}(t,x^1(t,\zeta,0))\dfrac{\p x^1}{\p z}(t,\zeta,0)\vspace{0.2cm}\\ &=D_x F_0(t,x(t,\zeta,0))\dfrac{\p x^1}{\p z}(t,\zeta,0), \end{array} \end{equation} for $0\leq t\leq t^1(\zeta,0)$. So solving the linear differential equation \eqref{lab4} we have that $\dfrac{\p x^1}{\p z}(t,\zeta,0)$ is a fundamental matrix solution of system \eqref{lin} for $0\leq t\leq t^1(\zeta,0)$ and $\zeta\in U_z$. Since $\dfrac{\p x^1}{\p z}(0,\zeta,0)$ is the identity matrix, we conclude that $\dfrac{\p x^1}{\p z}(t,\zeta,0)=Y(t,\zeta) Y(0,\zeta)^{-1}$ for $0\leq t\leq t^1(\zeta,0)$ and $\zeta\in U_z$. \smallskip We assume by induction hypothesis that the function $\zeta\mapsto x^{\ell-1}(t,\zeta,0)$ is $\C^1$ for each $t\in\s^1$, and that for $t^{\ell-2}(\zeta,0)\leq t\leq t^{\ell-1}(\zeta,0)$ the equality $\dfrac{\p x^{\ell-1}}{\p z}(t,\zeta,0)=Y(t,\zeta) Y(0,\zeta)^{-1}$ holds.
\smallskip From \eqref{in1} we have that, for $i=\ell$, $x^{\ell}(t,\zeta,0)=\f_{n_{\ell}}(t,t^{\ell-1}(\zeta,0),x^{\ell-1}(t^{\ell-1}(\zeta,0),\zeta,0))$. So the function $\zeta\mapsto x^{\ell}(t,\zeta,0)$ is $\C^1$ because from the induction hypothesis it is a composition of $\C^1$ functions. Now, we have \[ \begin{array}{RL} \dfrac{\p}{\p t}\left(\dfrac{\p x^{\ell}}{\p z}(t,\zeta,0)\right)&=D_x F_0^{n_{\ell}}(t,x^{\ell}(t,\zeta,0))\dfrac{\p x^{\ell}}{\p z}(t,\zeta,0)\vspace{0.2cm}\\ &=D_x F_0(t,x(t,\zeta,0))\dfrac{\p x^{\ell}}{\p z}(t,\zeta,0), \end{array} \] for $t^{\ell-1}(\zeta,0)\leq t\leq t^{\ell}(\zeta,0)$. Solving this linear differential equation we get that \[ \begin{array}{RL} \dfrac{\p x^{\ell}}{\p z}(t,\zeta,0)=&Y(t,\zeta)Y(t^{\ell-1}(\zeta,0),\zeta)^{-1}\dfrac{\p x^{\ell}}{\p z}(t^{\ell-1}(\zeta,0),\zeta,0)\vspace{0.2cm}\\ =&Y(t,\zeta)Y(0,\zeta)^{-1}, \end{array} \] for $t^{\ell-1}(\zeta,0)\leq t\leq t^{\ell}(\zeta,0)$ and $\zeta\in U_z$. The last equality comes from the induction hypothesis because \[ \dfrac{\p x^{\ell}}{\p z}(t^{\ell-1}(\zeta,0),\zeta,0)=\dfrac{\p x^{\ell-1}}{\p z}(t^{\ell-1}(\zeta,0),\zeta,0)=Y(t^{\ell-1}(\zeta,0),\zeta)Y(0,\zeta)^{-1}. \] \smallskip The induction above proves that for every $z\in Z$, $x^i(t,z,0)$ is a $\C^1$ function in the second variable and $\dfrac{\p x^i}{\p z}(t,z,0)=Y(t,z)Y(0,z)^{-1}$, provided that $t^{i-1}(z,0)\leq t\leq t^{i}(z,0)$. We conclude the proof of the lemma by observing that for $z\in Z$ and $t\in\s^1$ there exists $\ell\in\{1,2,\ldots,\kappa_z\}$ such that $t^{\ell-1}(z,0)\leq t\leq t^{\ell}(z,0)$, hence $x(t,z,0)=x^{\ell}(t,z,0)$. \end{proof} \begin{lemma}\label{l3} Under the hypotheses of Theorem \ref{MRt1} there exists a small parameter $\ov{\e}\in[0,\e_0]$ such that for every $\e\in[0,\ov{\e}]$ the function $z\mapsto x(T,z,\e)$ is locally Lipschitz for $z\in Z$. The parameter $\e_0$ is defined in the statement of Lemma \ref{l1} and the set $Z$ is defined in the statement of Lemma \ref{l2}.
\end{lemma} \begin{proof} From Lemma \ref{l2} we have that for each $z\in Z$ there exists a small neighborhood $U_z\subset C$ of $z$ such that the solution $t\mapsto x(t,\zeta,0)$ can be written as \eqref{xx} for every $\zeta\in U_z$ having the same number $\kappa_z$ of $\C^1$ pieces. Therefore, applying the continuous dependence of solutions on parameters in each differentiable piece, we conclude that for each $z\in Z$ there exist a small neighborhood $\U_z\subset U_z$ and a small parameter $\ov{\e}_z\in(0,\e_0]$ such that the solution $t\mapsto x(t,\zeta,\e)$ can be written as \eqref{xx} for every $\zeta\in \U_z$ and for each $\e\in[0,\ov{\e}_z]$ having the same number $\kappa_z$ of $\C^1$ pieces. Since $Z$ is compact, we can cover it by finitely many such neighborhoods $\U_{z_1},\ldots,\U_{z_m}$ and choose $\ov{\e}=\min\{\ov{\e}_{z_1},\ldots,\ov{\e}_{z_m}\}$, so that the above result holds for every $\e\in[0,\ov{\e}]$. \smallskip Let $\psi_n(t,t_0,x_0,\e)$ be the solution of the differential equation \begin{equation}\label{part} x'=F^n(t,x)=F_0^n(t,x)+\e F_1^n(t,x)+\e^2 R^n(t,x,\e), \end{equation} such that $\psi_n(t_0,t_0,x_0,\e)=x_0$. Clearly $\psi_n(t,t_0,x_0,0)=\f_n(t,t_0,x_0)$, which has been defined in Lemma \ref{l2}. From the result of the continuous dependence of the solutions on the initial conditions we conclude that each of these functions is continuous in the variables $(t,t_0,x_0)$. Indeed $F^n$ is a continuous function which is Lipschitz in the second variable for every $n=1,2,\ldots,N$. Moreover using the Gronwall Lemma (see, for instance, \cite{SVM}) we conclude that \begin{equation}\label{lips} ||\psi_n(t,t_1,z_1,\e)-\psi_n(t,t_2,z_2,\e)||\leq Me^{LT}|t_1-t_2|+e^{LT}||z_1-z_2||, \end{equation} for each $t,t_1,t_2\in\s^1$, $z_1,z_2\in \U_z$, and $\e\in[0,\ov{\e}]$, where the constants $L$ and $M$ are defined in the proof of Lemma \ref{l1}.
From the flow properties of the solutions of system \eqref{part} we have that the equality \begin{equation}\label{flow} \psi_n(t+s,t_0,x,\e)=\psi_n(t,t_0,\psi_n(s+t_0,t_0,x,\e),\e) \end{equation} holds for every $n=1,2,\ldots,N$. \smallskip Given $t_1,t_2,s_1,s_2\in\s^1$, $z_1,z_2\in \U_z$ and $\e\in[0,\ov{\e}]$ we can prove that the inequality \begin{equation}\label{mine} \begin{array}{RL} ||\psi_n(t_1,s_1,z_1,\e)-\psi_n(t_2,s_2,z_2,\e)||\leq&Me^{LT}|t_1-t_2|+Me^{LT}|s_1-s_2|\vspace{0.2cm}\\ &+e^{LT}||z_1-z_2|| \end{array} \end{equation} holds for $n=1,2,\ldots,N$. Indeed, from \eqref{lips} and \eqref{flow} \[ \begin{array}{L} ||\psi_n(t_1,s_1,z_1,\e)-\psi_n(t_2,s_2,z_2,\e)||=\vspace{0.3cm}\\ ||\psi_n(t_1,s_1,z_1,\e)-\psi_n(t_1,s_2,\psi_n(t_2-t_1+s_2,s_2,z_2,\e),\e)||\leq\vspace{0.3cm}\\ e^{LT}\Big(||z_1-\psi_n(t_2-t_1+s_2,s_2,z_2,\e)||+M|s_1-s_2|\Big)=\vspace{0.3cm}\\ e^{LT}\Big(||\psi_n(t_2-t_1+s_2,t_2-t_1+s_2,z_1,\e)-\psi_n(t_2-t_1+s_2,s_2,z_2,\e)||\vspace{0.2cm}\\ +M|s_1-s_2|\Big)\leq\vspace{0.3cm}\\ e^{LT}\left(||z_1-z_2||+M|t_1-t_2|+M|s_1-s_2|\right). \end{array} \] \smallskip Again from \eqref{rec} we obtain \begin{equation}\label{in2} \begin{array}{L} x^1(t,\zeta,\e)=\psi_{n_1}(t,0,\zeta,\e) \quad \textrm{and}\vspace{0.2cm}\\ x^i(t,\zeta,\e)=\psi_{n_i}(t,t^{i-1}(\zeta,\e),x^{i-1}(t^{i-1}(\zeta,\e),\zeta,\e),\e), \end{array} \end{equation} for $\zeta\in \U_z$ and for $i=2,3,\ldots,\kappa_z$. Thus from \eqref{in2} for $i=1$ the function $x^1(t,\zeta,\e)=\psi_{n_1}(t,0,\zeta,\e)$. So from \eqref{mine} we have that \[ \begin{array}{RL} ||x^1(t_1,z_1,\e)-x^1(t_2,z_2,\e)||=&||\psi_{n_1}(t_1,0,z_1,\e)-\psi_{n_1}(t_2,0,z_2,\e)||\vspace{0.2cm}\\ \leq&e^{LT}\left(||z_1-z_2||+M|t_1-t_2|\right), \end{array} \] for every $z_1,z_2\in \U_z$, $0\leq t_1\leq t^1(z_1,\e)$, $0\leq t_2\leq t^1(z_2,\e)$, and $\e\in[0,\ov{\e}]$.
\smallskip We assume by induction hypothesis that there exist constants $A_{\ell-1}$ and $B_{\ell-1}$ such that \[ ||x^{\ell-1}(t_1,z_1,\e)-x^{\ell-1}(t_2,z_2,\e)||\leq A_{\ell-1}|t_1-t_2|+B_{\ell-1}||z_1-z_2||, \] for every $z_1,z_2\in \U_z$, $t^{\ell-2}(z_1,\e)\leq t_1\leq t^{\ell-1}(z_1,\e)$, $t^{\ell-2}(z_2,\e)\leq t_2\leq t^{\ell-1}(z_2,\e)$, and $\e\in[0,\ov{\e}]$. \smallskip From \eqref{rec} we have, for $i=\ell$, that $x^{\ell}(t,\zeta,\e)=\psi_{n_{\ell}}(t,t^{\ell-1}(\zeta,\e),x^{\ell-1}(t^{\ell-1}(\zeta,\e),\zeta,\e),\e)$ for $\zeta\in\U_z$, $t^{\ell-1}(\zeta,\e)\leq t\leq t^{\ell}(\zeta,\e)$ and $\e\in[0,\ov{\e}]$. So from the induction hypothesis we obtain that \begin{equation}\label{inequal1} \begin{array}{L} ||x^{\ell}(t_1,z_1,\e)-x^{\ell}(t_2,z_2,\e)||=||\psi_{n_{\ell}}(t_1,t^{\ell-1}(z_1,\e),x^{\ell-1}(t^{\ell-1}(z_1,\e),z_1,\e),\e)\vspace{0.2cm}\\ -\psi_{n_{\ell}}(t_2,t^{\ell-1}(z_2,\e),x^{\ell-1}(t^{\ell-1}(z_2,\e),z_2,\e),\e)||\leq\vspace{0.3cm}\\ Me^{LT}|t_1-t_2|+Me^{LT}|t^{\ell-1}(z_1,\e)-t^{\ell-1}(z_2,\e)|\vspace{0.2cm}\\ +e^{LT}||x^{\ell-1}(t^{\ell-1}(z_1,\e),z_1,\e)-x^{\ell-1}(t^{\ell-1}(z_2,\e),z_2,\e)||\leq\vspace{0.3cm}\\ Me^{LT}|t_1-t_2|+e^{LT}(M+A_{\ell-1})|t^{\ell-1}(z_1,\e)-t^{\ell-1}(z_2,\e)|+e^{LT}B_{\ell-1}||z_1-z_2|| \end{array} \end{equation} for every $z_1,z_2\in \U_z$, $t^{\ell-1}(z_1,\e)\leq t_1\leq t^{\ell}(z_1,\e)$, $t^{\ell-1}(z_2,\e)\leq t_2\leq t^{\ell}(z_2,\e)$, and $\e\in[0,\ov{\e}]$. \smallskip From Claim \ref{c1} of the proof of Lemma \ref{l1} we have that $t^{\ell-1}(z,\e)$ is a $\C^1$ function, so there exists a constant $\de>0$ such that $|t^{\ell-1}(z_1,\e)-t^{\ell-1}(z_2,\e)|\leq \de ||z_1-z_2||$ for every $\e\in[0,\ov{\e}]$.
Going back to the inequality \eqref{inequal1} we get \[ ||x^{\ell}(t_1,z_1,\e)-x^{\ell}(t_2,z_2,\e)||\leq A_{\ell}|t_1-t_2|+B_{\ell}||z_1-z_2||, \] for every $z_1,z_2\in \U_z$, $t^{\ell-1}(z_1,\e)\leq t_1\leq t^{\ell}(z_1,\e)$, $t^{\ell-1}(z_2,\e)\leq t_2\leq t^{\ell}(z_2,\e)$, and $\e\in[0,\ov{\e}]$, where $A_{\ell}=Me^{LT}$ and $B_{\ell}=e^{LT}\left(\de (M+A_{\ell-1})+B_{\ell-1}\right)$. \smallskip We conclude the proof of the lemma by observing that $x(T,z,\e)=x^{\kappa_z}(T,z,\e)$ which, from the above induction, is locally Lipschitz in the variable $z$. \end{proof} \begin{lemma}\label{l4} Under the hypotheses of Theorem \ref{MRt2} the solution $x(t,z,0)$ of the unperturbed differential system \eqref{ups} is $\C^2$ in the variable $z$ for every $z\in Z$. Moreover $(\p x/\p z)(t,z,0)=Y(t,z)Y(0,z)^{-1}$. The set $Z$ is defined in the statement of Lemma \ref{l2} and $Y$ is the fundamental matrix solution of \eqref{lin}. \end{lemma} \begin{proof} Assuming the hypothesis $(h1)$ instead of $(H1)$ we can prove, analogously to Claim \ref{c1} in the proof of Lemma \ref{l1}, that given $z\in Z$ the function $t^i(\zeta,\e)$ for $i=0,1,2,\ldots,\kappa_z$ is of class $\C^2$ for every $\zeta$ in a neighborhood $U_z\subset C$ of $z$ and $\e\in[0,\e_0]$. Then the proof of the lemma follows analogously to the proof of Lemma \ref{l2} but considering the functions $\psi_n(t,t_0,x_0,\e)$ defined in Lemma \ref{l3}. \end{proof} The next two lemmas are versions of the so-called {\it Lyapunov--Schmidt reduction} for finite-dimensional functions (see for instance \cite{C}), and their proofs can be found in \cite{BLM} and \cite{BGL}, respectively. The first lemma will be used for proving Theorem \ref{MRt1}, and the second one will be used for proving Theorem \ref{MRt2}.
\begin{lemma}\label{LS} Let $P:\R^d\rightarrow\R^d$ be a $\C^1$ function, and let $Q:\R^d\times[0,\e_0]\rightarrow\R^d$ be a continuous function which is locally Lipschitz in the first variable, and define $f:\R^d\times[0,\e_0]\rightarrow\R^d$ as $f(z,\e)=P(z)+\e Q(z,\e)$. We assume that there exists an open and bounded subset $V\subset \R^k$ with $k\leq d$ and a $\C^1$ function $\be_0:\ov V\rightarrow\R^{d-k}$ such that $P$ vanishes on the set $\CZ=\{z_{\al}=(\al,\be_0(\al)):\,\al\in\ov V\}$ and that for any $\al\in \ov V$ the matrix $D P(z_{\al})$ has in its upper right corner the null $k\times(d-k)$ matrix and in the lower right corner the $(d-k)\times(d-k)$ matrix $\Delta_{\al}$ with $\det(\Delta_{\al})\neq0$. For any $\al\in\ov V$ we define $f_1(\al)=\pi Q(z_{\al},0)$. Thus if $f_1(\al)\neq0$ for all $\al\in\p V$ and $d(f_1,V,0)\neq 0$, then there exists $\e_1>0$ sufficiently small such that for each $\e\in(0,\e_1]$ there exists at least one $z_{\e}\in\R^d$ with $f(z_{\e},\e)=0$ and $\dis(z_{\e},\CZ)\to 0$ as $\e\to 0$. \end{lemma} \begin{lemma}\label{LS1} Let $P:\R^d\rightarrow\R^d$ and $Q:\R^d\times[0,\e_0]\rightarrow\R^d$ be $\C^2$ functions, and define $f:\R^d\times[0,\e_0]\rightarrow\R^d$ as $f(z,\e)=P(z)+\e Q(z,\e)$. We assume that there exists an open and bounded subset $V\subset \R^k$ with $k\leq d$ and a $\C^2$ function $\be_0:\ov V\rightarrow\R^{d-k}$ such that $P$ vanishes on the set $\CZ=\{z_{\al}=(\al,\be_0(\al)):\,\al\in\ov V\}$ and that for any $\al\in \ov V$ the matrix $D P(z_{\al})$ has in its upper right corner the null $k\times(d-k)$ matrix and in the lower right corner the $(d-k)\times(d-k)$ matrix $\Delta_{\al}$ with $\det(\Delta_{\al})\neq0$. For any $\al\in\ov V$ we define $f_1(\al)=\pi Q(z_{\al},0)$. Thus if there exists $a\in V$ with $f_1(a)=0$ and $\det(f_1'(a))\neq0$, then there exists $\al_{\e}$ such that $f(z_{\al_{\e}},\e)=0$ and $z_{\al_{\e}}\to z_a$ as $\e\to 0$. \end{lemma} Now we are ready to prove our main results.
\begin{proof}[Proof of Theorem \ref{MRt1}] We consider the function $f:Z\times [0,\e_0]\rightarrow \R^d$, given by \begin{equation}\label{f} f(z,\e)=x(T,z,\e)-z. \end{equation} Its regularity follows from Lemmas \ref{l2} and \ref{l3}. Clearly system \eqref{MRs1} for $\e=\ov{\e}\in[0,\e_0]$ has a periodic solution passing through $\ov z\in C$ if and only if $f(\ov z,\ov{\e})=0$. \smallskip From Lemma \ref{l1} we have that \begin{equation}\label{xtay} x(t,z,\e)=x(t,z,0)+\e y_1(t,z)+o(\e). \end{equation} Taking $P(z)=x(T,z,0)-z$ and $Q(z,\e)=y_1(T,z)+\tilde{o}(\e)$, where $\e\,\tilde{o}(\e)=o(\e)$, we get $f(z,\e)=P(z)+\e Q(z,\e)$. Moreover from Lemma \ref{l2} $P(z)$ is a $\C^1$ function, and from Lemma \ref{l3} $Q(z,\e)$ is a continuous function which is locally Lipschitz in the first variable because $Q(z,\e)=(x(T,z,\e)-x(T,z,0))/\e$. In order to apply Lemma \ref{LS} to the function \eqref{f} we compute \[ P(z_{\al})=x(T,z_{\al},0)-z_{\al}=0, \] and \[ \begin{array}{RL} \dfrac{\p P}{\p z}(z_{\al})=&\dfrac{\p x} {\p z}(T,z_{\al},0)-Id\vspace{0.2cm}\\ =&Y_{\al}(T)Y_{\al}(0)^{-1}-Id. \end{array} \] So from hypothesis $(H)$ the function $P$ vanishes on the set $\CZ$, and from hypothesis $(H2)$ for any $\al\in\ov V$ the matrix $DP(z_{\al})$ has in its upper right corner the null $k\times(d-k)$ matrix and in the lower right corner the $(d-k)\times(d-k)$ matrix $\Delta_{\al}$ with $\det(\Delta_{\al})\neq 0$. Since $\pi Q(z_{\al},0)=\pi y_1(T,z_{\al})=f_1(\al)$, the proof follows by applying Lemma \ref{LS}. \end{proof} \begin{proof}[Proof of Theorem \ref{MRt2}] The proof is analogous to the proof of Theorem \ref{MRt1} applying Lemma \ref{l4} instead of Lemmas \ref{l2} and \ref{l3}, and applying Lemma \ref{LS1} instead of Lemma \ref{LS}.
\end{proof} \section{Proof of Proposition \ref{p1}}\label{PP} \begin{proof}[Proof of Proposition \ref{p1}] Proceeding with the change of variables $(u,v,w)=(r\cos\T,r\sin\T,z)$ and taking $\T$ as the new time by doing $r'=\dot{r}/\dot{\T}$ and $z'=\dot{z}/\dot{\T}$ we obtain \begin{equation}\label{qs1} (r',z')=\left\{ \begin{array}{L} (0,z)+\e\,G^+(\T,r,z)+\CO(\e^2) \quad \textrm{if}\quad 0\leq \T\leq \pi, \\ (0,z)+\e\,G^-(\T,r,z)+\CO(\e^2) \quad \textrm{if}\quad \pi\leq\T\leq 2\pi, \\ \end{array}\right. \end{equation} where $G^{\pm}=\left(G^{\pm}_1,G^{\pm}_2\right)$, and \[ \begin{array}{RL} G_1^{\pm}=&b_1^{\pm} r\cos^2\T+\left(a_1^{\pm}+d_1^{\pm}z+(b_2^{\pm}+c_1^{\pm})r\sin\T\right)\cos\T\vspace{0.2cm}\\ &+\left(a_2^{\pm}+d_2^{\pm}z+c_2^{\pm} r\sin\T\right)\sin\T,\vspace{0.3cm}\\ G_2^{\pm}=&\dfrac{1}{r}\left(r(a_3^{\pm}+d_3^{\pm}z)-b_2^{\pm}rz\cos^2\T+(c_3^{\pm}r^2+(a_1^{\pm}+d_1^{\pm}z)z+c_1^{\pm}r\sin\T)\sin\T\vspace{0.2cm}\right.\\ &\left.+(b_3^{\pm}r^2-(a_2^{\pm}+d_2^{\pm}z)z+(b_1^{\pm}-c_2^{\pm})rz\sin\T)\cos\T\right). \end{array} \] Here, the prime denotes the derivative with respect to $\T$. \smallskip For system \eqref{qs1} we have that $D=\{(r,z):\,r>0,\,z\in\R\}$ and $T=2\pi$. We note that $\Sigma=\{(\T,r,z):\,\T\in\{0,\pi,2\pi\},\,(r,z)\in D\}$, thus taking $h(\T,r,z)=\T(\T-\pi)(\T-2\pi)$ it follows that $\Sigma=h^{-1}(0)$. \smallskip In what follows we shall study the elements of hypothesis $(H)$ of Theorem \ref{MRt1}. For $\e=0$ the solution $x(\T,r,z,0)$ of system \eqref{qs1} such that $x(0,r,z,0)=(r,z)$ is given by $x(\T,r,z,0)=(r,e^{\T}z)$. Taking $V=\{\al\in\R:\,r_1<\al<r_2\}$ with $r_1>0$ arbitrarily small and $r_2>r_1$ arbitrarily large, and $\be_0=0$ we have that the solution $x_{\al}(\T)=(\al,0)$ is constant for every $\al\in\ov V$, in particular $2\pi$--periodic. In this case the manifold $\CZ$ of periodic solutions of system \eqref{qs1} when $\e=0$ is given by $\CZ=\{(\al,0):\,r_1\leq\al\leq r_2\}$, and $\Sigma_0=D$.
Since $\CZ\subset \Sigma_0$ it follows that $\CZ\cap\p\Sigma_0=\emptyset$. Moreover computing the crossing region of system \eqref{qs1} for $\e>0$ sufficiently small we conclude that $\Sigma^c=\Sigma$, so we obtain that $\widetilde{\CZ}_0\cap\Sigma\subset\Sigma^c$. Therefore hypothesis $(H)$ holds for system \eqref{qs1}. \smallskip Hypothesis $(H1)$ of Theorem \ref{MRt1} clearly holds for system \eqref{qs1}. To verify hypothesis $(H2)$ we take \[ Y(\T,r,z)=\dfrac{\p x}{\p z}(\T,r,z,0)=\left(\begin{array}{CC}1&0\\0&e^{\T}\end{array}\right) \] as the fundamental matrix solution of system \eqref{lin} in the case of system \eqref{qs1}. So \[ Y_{\al}(2\pi)Y_{\al}(0)^{-1}-Id=Y(2\pi,\al,0)Y(0,\al,0)^{-1}-Id=\left(\begin{array}{CC}0&0\\0&e^{2\pi}-1\end{array}\right). \] Since $\Delta_{\al}=e^{2\pi}-1\neq 0$ for every $\al\in\ov V$ it follows that hypothesis $(H2)$ holds for system \eqref{qs1}. \smallskip Now if $(\T,r,z)\in\Sigma$, then $\T\in\{0,\pi\}$ (identifying $2\pi$ with $0$ on $\s^1$). On the other hand $\nabla h(0,r,z)=(2\pi^2,0,0)$ and $\nabla h(\pi,r,z)=(-\pi^2,0,0)$ for every $(r,z)\in D$. So $\langle\nabla h(\T,r,z),(0,v)\rangle=0$ for every $\T\in\{0,\pi\}$, $(r,z)\in D$, and $v\in\R^2$, which means that for any $v\in\R^2$ we have that $(0,v)\in T_{(\T,r,z)}\Sigma$ for every $\T\in\{0,\pi\}$ and $(r,z)\in D$. In short, hypothesis $(H3)$ holds for system \eqref{qs1}. \smallskip Using an algebraic manipulator such as Mathematica or Maple we compute \[ f_1(\al)=\dfrac{\pi}{2}\left(b_1^++b_1^-+c_2^++c_2^-\right)\al+2\left(a_2^+-a_2^-\right). \] From the hypothesis $\left(b_1^++b_1^-+c_2^++c_2^-\right)\left(a_2^--a_2^+\right)>0$ it follows that \[ a=\dfrac{4\left(a_2^--a_2^+\right)}{\pi\left(b_1^++b_1^-+c_2^++c_2^-\right)} \] is a solution of the equation $f_1(\al)=0$ such that $f_1'(a)\neq0$. From Remark \ref{bc} this is a sufficient condition to guarantee the existence of a small neighborhood $W\subset V$ of $a$ such that $d(f_1,W,0)\neq 0$. Since $f_1$ is linear, it is clear that $f_1(\al)\neq 0$ for every $\al\in\p W$.
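The expression for $f_1$ above can be cross-checked symbolically: on the manifold $z=0$, $r=\al$ the averaged function reduces to the integral of $G_1^{+}$ over $[0,\pi]$ plus that of $G_1^{-}$ over $[\pi,2\pi]$. The following sketch reproduces the computation with the open-source sympy library (the symbol names are ours, mirroring $a_i^{\pm}$, $b_i^{\pm}$, $c_i^{\pm}$; the $d_i^{\pm}z$ terms drop out at $z=0$):

```python
import sympy as sp

theta, alpha = sp.symbols('theta alpha', positive=True)
# coefficients of G_1^{+} and G_1^{-} (suffix p/m stands for +/-)
a1p, a2p, b1p, b2p, c1p, c2p = sp.symbols('a1p a2p b1p b2p c1p c2p')
a1m, a2m, b1m, b2m, c1m, c2m = sp.symbols('a1m a2m b1m b2m c1m c2m')

def G1(a1, a2, b1, b2, c1, c2):
    # first component of G^{pm} evaluated at z = 0, r = alpha
    return (b1*alpha*sp.cos(theta)**2
            + (a1 + (b2 + c1)*alpha*sp.sin(theta))*sp.cos(theta)
            + (a2 + c2*alpha*sp.sin(theta))*sp.sin(theta))

# average over one period: G_1^+ on [0, pi], G_1^- on [pi, 2*pi]
f1 = (sp.integrate(G1(a1p, a2p, b1p, b2p, c1p, c2p), (theta, 0, sp.pi))
      + sp.integrate(G1(a1m, a2m, b1m, b2m, c1m, c2m), (theta, sp.pi, 2*sp.pi)))
f1 = sp.expand(f1)
print(sp.simplify(f1 - (sp.pi/2*(b1p + b1m + c2p + c2m)*alpha
                        + 2*(a2p - a2m))))  # 0
```

The difference printed at the end vanishes, confirming the displayed formula for $f_1(\al)$.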
Therefore hypothesis $(H4)$ of Theorem \ref{MRt1} holds for system \eqref{qs1}. \smallskip Now the proof of the proposition follows directly by applying Theorems \ref{MRt1} and \ref{MRt2}. \end{proof} \section*{Acknowledgements} The first author is partially supported by a MINECO/FEDER grant MTM2008--03437, an AGAUR grant number 2014SGR 568, an ICREA Academia, FP7--PEOPLE--2012--IRSES--316338 and 318999, and FEDER/UNAB10--4E--378. The second author is partially supported by a FAPESP grant 2013/16492--0 and by a CAPES CSF-PVE grant 88881.030454/2013-01.
{ "timestamp": "2015-04-14T02:09:33", "yymm": "1504", "arxiv_id": "1504.03008", "language": "en", "url": "https://arxiv.org/abs/1504.03008" }
\section{Introduction} A quantum walk (QW) is defined as the quantum analogue of a classical random walk, where the ``quantum walker'' is in a superposition of states instead of being described by a probability distribution. One of the earliest realizations of this concept was proposed by Feynman as a discrete version of the massive Dirac equation \cite{FEY}. In recent times, there has been a surge of interest in this topic, due to the conceptual and possible practical import of QW's as discrete realizations of stochastic quantum processes and because they can solve certain problems with an exponential speedup, i.e. using exponentially fewer operations than classical computations \cite{childs2003exponential}. Moreover, QW's are amenable to a number of experimental realizations, such as ion traps \cite{PhysRevLett.103.090504,PhysRevLett.104.100503,gerritsma2010quantum}, liquid-state nuclear-magnetic-resonance quantum-information processors \cite{PhysRevA.72.062317}, photonic devices \cite{aspuru2012photonic} and other types of optical devices \cite{QWE}. As a result, they hold promise of playing an important role in many areas of modern physics and quantum technology, such as quantum computing, foundational quantum mechanics and biophysics \cite{QW4}. One of the most interesting features of discrete QW's is their continuum limit, which recovers a broad variety of relativistic quantum wave equations \cite{QW1,QW2,QW3}. As stated earlier, this was first discussed by Feynman and is now known as Feynman's checkerboard \cite{feyn_chess}. This was originally formulated for the free Dirac equation but extensions of these ideas, which include the coupling to external fields, have been investigated \cite{QW1,QW2,QW3}. Pioneering work has been performed in which the Dirac equation is related to cellular automata \cite{PhysRevD.49.6920,meyer1996}. Lately, the link between QW's and the Dirac equation has been discussed extensively \cite{QW1,QW2,QW3,1751-8121-47-46-465302}.
In these studies, the starting point is a general QW formulation from which the continuum limit is evaluated and then related to the Dirac equation. In this article, QW's and their relations to some known numerical schemes for the Dirac equation are reviewed from a slightly different perspective: it will be demonstrated that the most general QW's are obtained from lattice discretizations of the relativistic quantum wave equation for spin-1/2 particles. More precisely, starting from the continuum Dirac equation, it is shown that QW's can be placed in one-to-one correspondence with numerical schemes based on operator splitting and the QLB scheme. These numerical methods have been developed and employed as efficient numerical tools to solve relativistic quantum mechanics problems on classical computers \cite{FillionGourdeau20121403,QLB,FIL}. They have a number of interesting properties: they are easy to code, they can be easily parallelized and they are very versatile. Moreover, their mathematical structure and the fact that the time discretization is realized by a set of unitary transformations make the link to QW's possible. This connection is explored below and represents one of the main purposes of this article. Much of the material discussed here, in particular the numerical simulations, is not completely new, which is in line with the main purpose of this paper, namely an attempt to bridge the techniques utilized in numerical analysis and (quantum) Lattice Boltzmann theory to the language of quantum computing. This article is organized as follows. In Section \ref{sec:QW}, a general formulation of QW's is presented, where the transfer matrix is time and space dependent. In Section \ref{sec:split}, the split-operator method for the Dirac equation is presented, along with its exact correspondence with QW's. Section \ref{sec:QLB} is devoted to the QLB method and connections with QW's.
Section \ref{sec:quantum_sim} is devoted to a qualitative discussion of the link between these numerical schemes and quantum computation. In Section \ref{sec:qe}, the schemes are cast in the form of a propagation-relaxation process and the notion of quantum equilibrium is introduced. Based on the analogy between QW and QLB, a new QLB scheme for the (1+1) Dirac equation in curved space is proposed in Section \ref{sec:QLBc}. Finally, the generalization of these methods to many dimensions is briefly discussed and numerical results are presented in Section \ref{sec:multiD}. \section{Quantum walks} \label{sec:QW} Let us consider a $(1+1)$ quantum walk on the line for a pair of complex wave functions (bi-spinor $\psi$), obeying a discrete space-time evolution equation described by the following discrete map \cite{QW1,QW2,QW3,PhysRevA.77.032326}: \begin{eqnarray} \label{QW} \begin{bmatrix} \psi^{n+1}_{1,j} \\ \psi^{n+1}_{2,j} \end{bmatrix} = B_{j,n} \begin{bmatrix} \psi^{n}_{1,j-1} \\ \psi^{n}_{2,j+1} \end{bmatrix}. \end{eqnarray} Here, the indices $j \in \mathbb{Z}$ and $n \in \mathbb{N}$ label points on a discretization of space and time, respectively. The object $B_{j,n}$ is a two-by-two matrix with components \cite{QW3} \begin{eqnarray} B_{j,n}:= e^{-i \xi_{j,n}} \begin{bmatrix} e^{i\alpha_{j,n}} \cos\theta_{j,n} & e^{i \beta_{j,n}} \sin\theta_{j,n}\\ -e^{-i \beta_{j,n}} \sin\theta_{j,n} & e^{-i\alpha_{j,n}} \cos\theta_{j,n} \end{bmatrix}. \end{eqnarray} This matrix is a $U(2)$ operator ($B \in U(2)$) parametrized by the three space-time dependent Euler angles $\theta_{j,n}, \alpha_{j,n}, \beta_{j,n}$ and a space-time dependent phase $\xi_{j,n}$. The latter is relevant when it depends on time and space, i.e. when it is local. If it is global ($\xi_{j,n} = \xi$), it disappears from any observables and becomes unimportant. This occurs because when the phase is space-time dependent, it does not commute with the time and space translation operators.
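The discrete map \eqref{QW} is straightforward to simulate on a classical computer. The following sketch (our own illustrative code, with hypothetical function names and periodic boundary conditions assumed for simplicity) applies one step of the walk with a site-dependent $U(2)$ coin:

```python
import numpy as np

def coin(theta, alpha, beta, xi):
    """The U(2) coin matrix B parametrized by Euler angles and a phase xi."""
    return np.exp(-1j*xi) * np.array(
        [[np.exp(1j*alpha)*np.cos(theta),   np.exp(1j*beta)*np.sin(theta)],
         [-np.exp(-1j*beta)*np.sin(theta),  np.exp(-1j*alpha)*np.cos(theta)]])

def qw_step(psi, B):
    """One step of the (1+1)D quantum walk: psi has shape (2, N),
    B has shape (N, 2, 2), one coin per lattice site.
    Component 1 at site j takes its value from j-1, component 2 from j+1
    (periodic boundaries)."""
    shifted = np.empty_like(psi)
    shifted[0] = np.roll(psi[0], 1)    # psi_1 moves up the lattice
    shifted[1] = np.roll(psi[1], -1)   # psi_2 moves down the lattice
    # apply the site-local coin: out[:, j] = B[j] @ shifted[:, j]
    return np.einsum('jab,bj->aj', B, shifted)
```

Since each $B_{j,n}$ is unitary and the shift is a permutation of lattice sites, one step conserves the $\ell^2$ norm of $\psi$, consistent with the unitarity of the walk.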
The matrix obeys $B \in SU(2)$ only when $\xi_{j,n} = k\pi$ for all $j,n$, with $k \in \mathbb{Z}$. Thus, this formulation is slightly more general than the QW's considered in \cite{QW1,QW2,PhysRevA.77.032326}, where $B \in SU(2)$ is studied. As seen in the next section, the choice $B \in U(2)$ will be important for a general connection between mass terms of the Dirac equation and QW's. Finally, the $U(2)$ QW can also be implemented on quantum computers because the matrix $B$ is a unitary transformation: it represents the most general QW consistent with quantum computations. In the above, the amplitudes $\psi_{1,2}$ encode the probability for the quantum walker to move up (down) along the lattice, at site $j\in \mathbb{Z}$ and time step $n \in \mathbb{N}$. This is a very rich structure, which has been shown to recover a variety of important quantum wave equations, as soon as the Euler angles are allowed to acquire a space-time dependence \cite{QW3}. In addition, it provides a wealth of potential algorithms for quantum computing. This was studied extensively in \cite{QW1,QW2,QW3} by analysing the continuum limit of these QW's, yielding different versions of the Dirac equation. In this work, the opposite path is taken: it is shown that specific discretizations of the Dirac equation, using either a split-operator approach or the lattice Boltzmann technique, naturally lead to a QW formulation. \section{Split-operator and quantum walks} \label{sec:split} The starting point of this discussion is the 1-D Dirac equation in the Majorana representation, written as (in units where $c=\hbar =1$): \begin{eqnarray} \label{eq:dirac} i\partial_{t} \psi(z,t) = \left[ -i\sigma_{z} \partial_{z} + M(z,t) \right] \psi(z,t), \end{eqnarray} with the bi-spinor $\psi \in L_{2}(\mathbb{R},\mathbb{C}^{2})$. The generalized ``mass'' matrix $M$ is space and time dependent and may include contributions from the physical mass, the coupling to an electromagnetic potential or any other type of coupling.
One requirement, however, is that $M$ is a Hermitian local operator without any derivatives. Generally, it can be written as \begin{eqnarray} M(z,t) &=&\mathbb{I}_{2}M_{0}(z,t) + \boldsymbol{\sigma} \cdot \mathbf{M}(z,t) \\ &=& \mathbb{I}_{2}M_{0}(z,t) + \sigma_{x} M_{x}(z,t) + \sigma_{y} M_{y}(z,t) + \sigma_{z} M_{z}(z,t), \end{eqnarray} where $\mathbb{I}_{2}$ is the two-by-two identity matrix, $(\sigma_{i})_{i=x,y,z}$ are the Pauli matrices and the coefficients $M_{0,x,y,z}$ represent the time- and space-dependent external fields which couple to the spinor. The formal solution of Eq. \eqref{eq:dirac} is given by \begin{eqnarray} \label{eq:sol} \psi(z,t) = \hat{T} \exp\left[ - \Delta t \sigma_{z} \partial_{z} - i \int_{t_{0}}^{t} M(z,t') dt' \right] \psi(z,t_{0}), \end{eqnarray} where $\hat{T}$ is the time-ordering operator, $t_{0}$ is the initial time and $\Delta t = t-t_{0}$. Using an operator splitting technique, the solution in Eq. \eqref{eq:sol} can be approximated by \cite{FIL} \begin{eqnarray} \label{eq:sol_approx} \psi(z,t) = \exp\left[ - i \Delta t M(z,t_{0}) \right] \exp\left[ - \Delta t \sigma_{z} \partial_{z} \right] \psi(z,t_{0}) + O(\Delta t^{2}). \end{eqnarray} The second exponential, which acts first, is a translation (streaming) operator which shifts the spinor components according to: \begin{eqnarray} \exp\left[ - \Delta t \sigma_{z} \partial_{z} \right] \psi(z,t_{0}) = \begin{bmatrix} \psi_{1}(z-\Delta t, t_{0}) \\ \psi_{2}(z+\Delta t, t_{0}) \end{bmatrix}. \end{eqnarray} This suggests using a spatial discretization where $\Delta z = \Delta t$ (this corresponds to a Courant-Friedrichs-Lewy (CFL) condition $C = c \Delta t/\Delta z = 1$, $c$ being the speed of light) such that the translation is exact on the lattice. Eq.
\eqref{eq:sol_approx} can then be written as: \begin{eqnarray} \label{eq:dirac_dis} \begin{bmatrix} \psi^{n+1}_{1,j} \\ \psi^{n+1}_{2,j} \end{bmatrix} = \exp\left[ - i \Delta t M_{j,n} \right] \begin{bmatrix} \psi^{n}_{1,j-1} \\ \psi^{n}_{2,j+1} \end{bmatrix}, \end{eqnarray} where we defined the following quantities on the lattice: $M_{j,n}:=M(j\Delta z, n\Delta t)$ and $\psi_{j}^{n}:=\psi(j\Delta z, n\Delta t)$. This last equation yields a numerical scheme to solve the Dirac equation. This numerical scheme has interesting properties, as discussed extensively in \cite{FIL,FillionGourdeau20121403,Lorin2011190}: it can be extended to higher order accuracy, it is easily parallelized and it is easily coded on a computer. In the following, it is also demonstrated that it is completely equivalent to the $U(2)$ QW described in the last section. Eq. \eqref{eq:dirac_dis} is in the form of Eq. \eqref{QW}. Moreover, the exponential is also a unitary matrix: \begin{eqnarray} B' := \exp\left[ - i \Delta t M_{j,n} \right] \in U(2), \end{eqnarray} and thus, there clearly exists a connection between $B$ and $B'$. They are expressed in different representations: $B$ uses the Euler angle parametrization while $B'$ is expressed in the canonical representation obtained by the exponential mapping of the Lie algebra. The latter is given explicitly by \begin{eqnarray} B' &=& \exp\left[ -i\Delta t M_{0,j,n} \right]\exp\left[ - i \Delta t \boldsymbol{\sigma} \cdot \mathbf{M}_{j,n} \right] \\ &=&\exp\left[ -i\Delta t M_{0,j,n} \right] \left[ \mathbb{I}_{2} \cos(|\mathbf{M}_{j,n}|\Delta t) - i \cfrac{\boldsymbol{\sigma} \cdot \mathbf{M}_{j,n}}{|\mathbf{M}_{j,n}|}\sin(|\mathbf{M}_{j,n}|\Delta t) \right], \end{eqnarray} where \begin{eqnarray} |\mathbf{M}_{j,n}| = \sqrt{M_{x,j,n}^{2} + M_{y,j,n}^{2} + M_{z,j,n}^{2}}.
\end{eqnarray} It is well-known that parametrizations of $U(2)$ matrices are related to each other \cite{gilmore2012lie}, and the mapping between both representations is given by \begin{eqnarray} \label{eq:rel_split1} \xi_{j,n} &=& M_{0,j,n} \Delta t ,\\ \label{eq:rel_split2} \tan (\alpha_{j,n}) &=& -\frac{M_{z,j,n}}{|\mathbf{M}_{j,n}|} \tan(|\mathbf{M}_{j,n}| \Delta t) ,\\ \label{eq:rel_split3} \tan (\beta_{j,n}) &=& \frac{M_{x,j,n}}{M_{y,j,n}}, \\ \label{eq:rel_split4} \tan (\theta_{j,n}) &=& \tan(|\mathbf{M}_{j,n}| \Delta t) \sqrt{\frac{M_{x,j,n}^{2} + M_{y,j,n}^{2}}{|\mathbf{M}_{j,n}|^{2} + M_{z,j,n}^{2} \tan^{2}(|\mathbf{M}_{j,n}|\Delta t)}}. \end{eqnarray} When $M_{0} = 0$, one recovers the $SU(2)$ QW because then $\tilde{B} := B'|_{M_{0} = 0} \in SU(2)$. These last equations give a one-to-one correspondence between the QW formulation, characterized by the parameters $\alpha_{j,n},\beta_{j,n},\theta_{j,n},\xi_{j,n}$, and the discretization of the Dirac equation with a generalized mass term $M_{j,n}$. Therefore, the last results show that in the continuum limit, every space-time dependent QW on the line becomes a time-dependent 1-D Dirac equation with a specific mass matrix. A given QW can thus be fully characterized by the relativistic dynamics of an electron coupled to a space-time dependent external field. This occurs because there is an equivalence between the discretization of the Dirac equation, based on operator splitting, and the QW formulation. Moreover, this connection may form the basis for the implementation of a quantum algorithm that solves the Dirac equation on quantum computers. The operator splitting technique presented here also bears a close relationship with the QLB technique, to which we now turn.
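The correspondence (\ref{eq:rel_split1})--(\ref{eq:rel_split4}) can be checked numerically. The sketch below (Python/NumPy, with arbitrarily chosen field values) builds $B'$ from its closed form and reconstructs it from the Euler angles; note that the $\tan$ relations fix the angles only modulo $\pi$, and the branch of $\theta$ is taken negative here so that the two forms coincide (cf. the sign in Eq. (\ref{eq:map_QW2}) for the free case):

```python
import numpy as np

dt = 0.1
M0, Mx, My, Mz = 0.3, 0.4, 0.7, 0.5       # arbitrary field values
Mv = np.sqrt(Mx**2 + My**2 + Mz**2)       # |M|
phi = Mv*dt
sx = np.array([[0, 1], [1, 0]])           # Pauli matrices
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# closed form of B' = exp(-i dt M0) exp(-i dt sigma.M)
sdotM = Mx*sx + My*sy + Mz*sz
Bp = np.exp(-1j*dt*M0)*(np.eye(2)*np.cos(phi) - 1j*(sdotM/Mv)*np.sin(phi))

# Euler angles from Eqs. (rel_split1)-(rel_split4), with the branch choice
xi    = M0*dt
alpha = np.arctan(-(Mz/Mv)*np.tan(phi))
beta  = np.arctan(Mx/My)
theta = -np.arctan(np.tan(phi)*np.sqrt((Mx**2 + My**2) /
                   (Mv**2 + Mz**2*np.tan(phi)**2)))

# rebuild B in the Euler-angle parametrization and compare with B'
B = np.exp(-1j*xi)*np.array(
    [[np.exp(1j*alpha)*np.cos(theta),  np.exp(1j*beta)*np.sin(theta)],
     [-np.exp(-1j*beta)*np.sin(theta), np.exp(-1j*alpha)*np.cos(theta)]])
print(np.allclose(B, Bp))
```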
\section{Quantum lattice Boltzmann, operator splitting and quantum walks} \label{sec:QLB} The QLB was inspired by a direct analogy between the way the Dirac equation goes to the Schr\"odinger equation in the limit $v/c \to 0$, and the way that the Navier-Stokes equations of classical fluid dynamics emerge from the Boltzmann equation in the limit of small Knudsen number, $Kn \to 0$, where $Kn=l/L$ is the ratio of the molecular mean free path to the typical macroscopic scale. In both cases, the smallness parameter controls the enslaving of the fast modes to the slow ones: {\it non-equilibrium} to {\it equilibrium} in the classical case, versus {\it excited} states to the {\it ground} state in the quantum one. Of course, the quantum case shows no genuine relaxation since its dynamics is reversible. Yet, enslaving can be interpreted in the sense of fast oscillations around a local quantum equilibrium (Zitterbewegung), which average out once time is coarse-grained on a scale larger than the period of the fast oscillations. So, Zitterbewegung may be regarded as the quantum relativistic analogue of classical non-equilibrium fluctuations. Based on this analogy, QLB was formulated as a lattice Boltzmann analogue of the Dirac equation in the Majorana representation, where the streaming matrix is {\it real}. To obtain the QLB scheme, it is convenient to write the Dirac equation as \begin{eqnarray} \left[ \partial_{t} + v_{a}\partial_{z} \right]\psi_{a}(z,t) = -i\sum_{b}M_{ab}(z,t) \psi_{b}(z,t), \end{eqnarray} where $a,b$ are spinor indices and the ``microscopic velocities'' are given by $v_{1,2} = \pm 1$. This is clearly in a ``Boltzmann-like'' form with two discrete velocities and a collision term $M$.
When LB is employed in fluid mechanics to solve the continuum equations of motion, there is a family of possible numbers of discrete velocities for a given lattice \cite{0295-5075-17-6-001} and each choice yields a different numerical scheme (for instance, the 9-velocity scheme in 2-D and the 27-velocity scheme in 3-D are popular choices on square lattices \cite{PhysRevE.56.6811}). For the Dirac equation, this choice is dictated by the mathematical structure of the equation. The lattice Boltzmann equation is then written as \cite{succi2001lattice,chen1998lattice} \begin{eqnarray} \label{eq:LB_scheme_no} \psi_{a}(z + v_{a} \Delta t,t + \Delta t) = \psi_{a}(z,t) - i\Delta t\sum_{b}M_{ab}(z,t) \psi_{b}(z,t) + O(\Delta t^{2}). \end{eqnarray} As for the splitting method described in the last section, this suggests using a space discretization where $\Delta z = \Delta t$. Using a ``naive'' approach in line with LB methods in fluid mechanics, the last equation would become \begin{eqnarray} \label{eq:LB_scheme_dis} \begin{bmatrix} \psi_{1,j+1}^{n+1} \\ \psi_{2,j-1}^{n+1} \end{bmatrix} = \begin{bmatrix} \psi_{1,j}^{n} \\ \psi_{2,j}^{n} \end{bmatrix} - i\Delta tM_{j,n} \begin{bmatrix} \psi_{1,j}^{n} \\ \psi_{2,j}^{n} \end{bmatrix}. \end{eqnarray} Then, the matrix $-iM$ on the right-hand side acts as a collision operator, while the streaming is executed by the left-hand side of the equation. This scheme is derived by using the formal analogy between the Dirac equation, the Boltzmann equation and the LB technique. However, the resulting numerical method is unstable and the $L^{2}$ norm is not preserved \cite{succi2002kinetic}. It is possible, however, to recover a stable and norm-preserving method. Instead of using the ``naive'' LB discretization given in Eq. \eqref{eq:LB_scheme_dis}, the mass term on the right-hand side of Eq.
\eqref{eq:LB_scheme_no} is discretized by using an implicit Crank-Nicolson average: \begin{eqnarray} iM(z,t) \begin{bmatrix} \psi_{1}(z,t) \\ \psi_{2}(z,t) \end{bmatrix} = \cfrac{i}{2} M_{j,n} \left\{ \begin{bmatrix} \psi_{1,j+1}^{n+1} \\ \psi_{2,j-1}^{n+1} \end{bmatrix} + \begin{bmatrix} \psi_{1,j}^{n} \\ \psi_{2,j}^{n} \end{bmatrix} \right\} . \end{eqnarray} Inserting this into Eq. \eqref{eq:LB_scheme_no}, one obtains the second-order accurate QLB scheme: \begin{eqnarray} \label{eq:qlb} \begin{bmatrix} \psi^{n+1}_{1,j+1} \\ \psi^{n+1}_{2,j-1} \end{bmatrix} = T_{j,n} \begin{bmatrix} \psi^{n}_{1,j} \\ \psi^{n}_{2,j} \end{bmatrix} + O(\Delta t^{2}), \end{eqnarray} where the transfer matrix is given by \begin{eqnarray} T_{j,n} = \left[\mathbb{I}_{2} + i \cfrac{\Delta t}{2} M_{j,n} \right]^{-1} \left[\mathbb{I}_{2} - i \cfrac{\Delta t}{2}M_{j,n}\right]. \end{eqnarray} To link these results with those of the previous sections, it is convenient to shift the spinor components by one lattice point (we let $\psi_{1,j+1} \rightarrow \psi_{1,j}$ and $\psi_{2,j-1} \rightarrow \psi_{2,j}$). Then, the QLB scheme becomes \begin{eqnarray} \begin{bmatrix} \psi^{n+1}_{1,j} \\ \psi^{n+1}_{2,j} \end{bmatrix} = T_{j,n} \begin{bmatrix} \psi^{n}_{1,j-1} \\ \psi^{n}_{2,j+1} \end{bmatrix} + O(\Delta t^{2}), \end{eqnarray} which is in the same form as the QW and the splitting method presented previously. The transfer matrix can be evaluated explicitly. It is given by \begin{eqnarray} \label{eq:coll_mat_qlb} T_{j,n} = \frac{1}{C_{j,n}} \begin{bmatrix} 1 - i\Delta t M_{z,j,n} + \frac{\Delta t^{2}}{4} M_{j,n}^{2} & -\Delta t \left(iM_{x,j,n} + M_{y,j,n} \right)\\ -\Delta t \left(iM_{x,j,n} - M_{y,j,n} \right) & 1 + i\Delta t M_{z,j,n} + \frac{\Delta t^{2}}{4} M_{j,n}^{2} \end{bmatrix}, \end{eqnarray} where $C_{j,n} = 1 + i\Delta t M_{0,j,n} - \frac{\Delta t^{2}}{4} M_{j,n}^{2}$ and $M_{j,n}^{2} = M_{0,j,n}^{2} - \mathbf{M}_{j,n}^{2}$.
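The contrast between the ``naive'' collision step of Eq. (\ref{eq:LB_scheme_dis}) and the Crank-Nicolson transfer matrix can be made concrete numerically. A minimal sketch (Python/NumPy, with arbitrarily chosen field values) compares the one-step matrices of both discretizations:

```python
import numpy as np

dt = 0.1
# arbitrary Hermitian mass matrix M = M0*I + sigma.M (illustrative values)
M0, Mx, My, Mz = 0.3, 0.4, 0.7, 0.5
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2)
M = M0*I2 + Mx*sx + My*sy + Mz*sz

# "naive" LB collision step: first-order truncation, not unitary
T_naive = I2 - 1j*dt*M
# Crank-Nicolson (Cayley) transfer matrix: unitary for any time step
T = np.linalg.inv(I2 + 0.5j*dt*M) @ (I2 - 0.5j*dt*M)

print(np.allclose(T @ T.conj().T, I2))              # True: norm preserved
print(np.allclose(T_naive @ T_naive.conj().T, I2))  # False: norm grows
```

The naive matrix satisfies $T_{\rm naive}T_{\rm naive}^{\dagger} = \mathbb{I}_{2} + \Delta t^{2}M^{2}$, which is the source of the instability, while the Cayley transform of a Hermitian $M$ is exactly unitary.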
It can be readily verified that $T_{j,n}T_{j,n}^{\dagger} = \mathbb{I}_{2}$. As a consequence, the transfer matrix is unitary ($T_{j,n} \in U(2)$) for any size of the time step and thus conserves the $L^{2}$ norm. Moreover, just like the splitting method, there exists a correspondence with QW's because both have $U(2)$ collision matrices. The identification with the general QW in Eq. (\ref{QW}) yields \begin{eqnarray} \tan(\xi_{j,n}) &=& \cfrac{\mathrm{Im} (C_{j,n})}{\mathrm{Re} (C_{j,n})} = \cfrac{ \Delta t M_{0,j,n} }{1 - \frac{\Delta t^{2}}{4} M_{j,n}^{2}} \\ \tan(\alpha_{j,n}) &=&- \cfrac{\Delta t M_{z,j,n}}{1 + \frac{\Delta t^{2}}{4} M_{j,n}^{2}}, \\ \tan(\beta_{j,n}) &=& \cfrac{M_{x,j,n}}{M_{y,j,n}} ,\\ \tan(\theta_{j,n}) &=& - \Delta t \sqrt{\cfrac{M_{x,j,n}^{2}+M_{y,j,n}^{2}}{\left(1+ \frac{\Delta t^{2}}{4} M_{j,n}^{2} \right)^{2} + \Delta t^{2} M_{z,j,n}^{2}}}. \end{eqnarray} These relations give a correspondence between the QLB technique and the QW. They are similar to those for the splitting method displayed in Eqs. \eqref{eq:rel_split1} to \eqref{eq:rel_split4} and actually serve the same purpose: they allow one to map a numerical scheme to the QW. The differences are obviously due to the fact that QLB is based on an LB formulation combined with a Crank-Nicolson average to ensure stability, while the splitting scheme separates the Dirac equation into different operators which can be integrated exactly. However, both methods share the same general structure, where a streaming step is followed by a collision step. \subsection{An explicit example: the free case} The Dirac equation for the massive and free case (no external field coupling) is \begin{eqnarray} \label{eq:dirac_maj} i\partial_{t} \psi(z,t) = \left[ -i\sigma_{z} \partial_{z} + \sigma_{y} m \right] \psi(z,t), \end{eqnarray} where $M_{y} = m$ is the physical fermion mass.
This representation of the Dirac equation yields real spinor components, as readily seen by writing the equation componentwise: \begin{eqnarray} \label{MAIO} (\partial_{t} + \partial_z) \psi_{1}(z,t) = -m \psi_{2}(z,t),\\ (\partial_t - \partial_z )\psi_{2}(z,t) = +m \psi_{1}(z,t). \end{eqnarray} The QLB scheme, for the massive and free case, reads as follows \cite{QLB}: \begin{eqnarray} \begin{bmatrix} \psi^{n+1}_{1,j} \\ \psi^{n+1}_{2,j} \end{bmatrix} = T_{\rm free} \begin{bmatrix} \psi^{n}_{1,j-1} \\ \psi^{n}_{2,j+1} \end{bmatrix}, \end{eqnarray} where the transfer matrix can be obtained from Eq. \eqref{eq:coll_mat_qlb} by setting $M_{0} = M_{x} = M_{z} = 0$ and $M_{y} = m$: \begin{eqnarray} T_{\rm free}:= \begin{bmatrix} a(m) & b(m)\\ -b(m) & a(m) \end{bmatrix} = \cfrac{1}{1+\frac{\Delta t^{2}m^{2}}{4}} \begin{bmatrix} 1-\cfrac{\Delta t^{2}m^{2}}{4} &- m\Delta t \\ m \Delta t & 1-\cfrac{\Delta t^{2}m^{2}}{4} \end{bmatrix}. \end{eqnarray} It is readily seen that $a(m)$ and $b(m)$ are second-order Pad\'e-like approximants of $\cos(m\Delta t)$ and $-\sin(m\Delta t)$, respectively. This is the natural consequence of the implicit time-marching (Crank-Nicolson) scheme as applied to the (1+1) Dirac equation in Majorana form. For a free particle, the correspondence to the QW is particularly simple and is given by \begin{eqnarray} \label{eq:map_QW1} \xi = \alpha=\beta=0, \\ \label{eq:map_QW2} \tan(\theta) =- \frac{m\Delta t}{1-\frac{\Delta t^{2}m^{2}}{4}}. \end{eqnarray} This mapping of the QLB to the QW is exact for any value of $m$. \section{Prospects for quantum simulation} \label{sec:quantum_sim} The numerical methods described in Sections \ref{sec:split}-\ref{sec:QLB} can be straightforwardly implemented on classical computers. However, the stream-collide structure of these numerical schemes makes them suitable for an efficient implementation on quantum computers as well.
In particular, they can be written as: \begin{eqnarray} \label{QW_gen} \psi^{n+1}_{j} = B_{j,n} S_{j} \psi^{n}_{j}, \end{eqnarray} where $S_{j}$ is the shift operator, defined as: \begin{eqnarray} \label{shift_op} S_{j} \psi^{n}_{j} = \begin{bmatrix} \psi^{n}_{1,j-1} \\ \psi^{n}_{2,j+1} \end{bmatrix}. \end{eqnarray} This shift operator is a unitary operation and it can be realized experimentally by using fundamental quantum gates \cite{PhysRevA.72.062317,PhysRevA.77.032326}. The rotation operator $B_{j,n}$ belongs to $U(2)$ and therefore can also be realized by these quantum gates, as can any other unitary transformation \cite{kaye2006introduction}. Therefore, it is possible to map the numerical method to a quantum walk, which can be implemented efficiently on quantum computers. This mapping is possible because each step of the scheme is a unitary transformation: this makes these schemes norm-preserving and establishes the link with quantum computation. Such an implementation would be particularly useful for the study of relativistic quantum systems where a time-dependent solution of the Dirac equation is required, such as in very high intensity laser physics \cite{RevModPhys.84.1177} or graphene physics \cite{geim2007rise}. Another subject of major interest for future research is the extension of the QLB methodology to quantum many-body systems and quantum field theory, two paramount sectors of modern physics which are particularly exposed to the limitations of classical (non-quantum) electronic computing. Progress in this direction depends on the ability to replace the quantum wavefunction by the corresponding second-quantized quantum operators, and to show that the dynamics of the second-quantized QLB scheme still preserves the appropriate equal-time commutation relations. Preliminary efforts along this line have been developed in \cite{1751-8121-40-26-F07} in $1+1$ dimensions. Extensions to strongly non-linear field theories in $d>1$ remain to be explored.
As to quantum many-body problems, LB-like methods have recently been adapted to electronic structure simulations \cite{PhysRevLett.113.096402}. In this work, a classical LB scheme is employed to solve the Kohn-Sham equations of density functional theory in the form of diffusion-reaction equations in imaginary time. Related QLB schemes could prove very useful to solve the corresponding real-time quantum many-body transport problems within the framework of time-dependent density functional theory. Finally, we wish to point out the intriguing possibility of realizing both quantum and classical LB schemes on quantum analogue simulators, as recently explored in \cite{mezzacapo2015quantum}. \section{Quantum equilibria} \label{sec:qe} In view of quantum computing implementations, it is of interest to cast the Dirac equation in the form of a propagation-relaxation process, where the collision matrix is now interpreted as a scattering process, relaxing the spinor components around a local quantum equilibrium. For this purpose, it is useful to reconsider the 1-D Dirac equation in the form: \begin{eqnarray} \left[ \partial_{t} + v_{a}\partial_{z} \right]\psi_{a}(z,t) = -i\sum_{b}M_{ab}(z,t) \psi_{b}(z,t). \end{eqnarray} Then, it is formally possible to define a local equilibrium as: \begin{eqnarray} \label{LEQ} \psi_{\rm eq}(z,t) :=U\psi(z,t), \end{eqnarray} where $U$ is a unitary matrix that depends on $M$. This transformation is chosen to recast the Dirac equation in relaxation form: \begin{eqnarray} \left[ \partial_{t} + \sigma_{z}\partial_{z} \right]\psi(z,t) = - \Omega \left[ \psi(z,t) - \psi_{\rm eq}(z,t) \right], \end{eqnarray} where $\Omega = iM [\mathbb{I} - U]^{-1}$, so that the right-hand side reduces identically to the collision term $-iM\psi$. The explicit form of $U$ and $\Omega$ is not unique, but a convenient choice is $U = e^{iM \tau}$. In this guise, the Dirac equation looks formally like a linear Boltzmann equation for two-component models in the single relaxation time approximation \cite{succi2001lattice}.
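The algebra behind this relaxation form can be checked directly. A minimal sketch (Python/NumPy; the Hermitian $M$, $\tau$ and the test spinor are arbitrary illustrative values; note that $\Omega = iM[\mathbb{I}-U]^{-1}$ is the choice for which $-\Omega[\psi-\psi_{\rm eq}]$ reduces identically to the collision term $-iM\psi$):

```python
import numpy as np

tau = 0.1
M0, Mx, My, Mz = 0.3, 0.4, 0.7, 0.5          # arbitrary field values
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]]); I2 = np.eye(2)
M = M0*I2 + Mx*sx + My*sy + Mz*sz            # Hermitian scattering matrix

# local-equilibrium transformation U = exp(i M tau), built by
# diagonalizing the Hermitian M, and the scattering operator Omega
w, V = np.linalg.eigh(M)
U = V @ np.diag(np.exp(1j*w*tau)) @ V.conj().T
Om = 1j*M @ np.linalg.inv(I2 - U)            # Omega = iM [I - U]^(-1)

psi = np.array([0.6, 0.8j])                  # arbitrary test spinor
psi_eq = U @ psi                             # local equilibrium, Eq. (LEQ)
# relaxation form reproduces the Dirac collision term -iM psi
print(np.allclose(-Om @ (psi - psi_eq), -1j*M @ psi))
```

Since $\psi_{\rm eq} = U\psi$, the identity $-\Omega(\mathbb{I}-U)\psi = -iM\psi$ holds exactly whenever $\mathbb{I}-U$ is invertible, i.e. whenever no eigenvalue of $M\tau$ is a multiple of $2\pi$.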
Therefore, it can be interpreted as a propagation-relaxation process in imaginary time, whereby collisions, implemented by the scattering operator $\Omega$, drive oscillations around the equilibrium distribution $\psi_{\rm eq}$. By defining a post-collision wavefunction as: \begin{equation} \label{POST} \psi^{'}(z,t) := \left( 1 - \Delta t \Omega \right) \psi(z,t) + \Delta t \Omega \psi_{\rm eq}(z,t), \end{equation} the Dirac equation takes the most compact form \begin{equation} \label{QCOMP} \psi(z+v_a \Delta t, t+\Delta t) = \psi^{'}(z,t). \end{equation} This is particularly suitable for quantum computing implementations in the form of a classical stream-collide dynamics. The collision (relaxation) gate transforms the pre-collisional spinor $\psi$ into the post-collisional state $\psi'$, and the streaming gate moves the post-collisional spinor to its destination location $z \pm \Delta z$. Both operations are unitary and can be encoded in logical gates for quantum computing purposes \cite{MEY,BOG}. The expression (\ref{LEQ}) shows that the local equilibrium is a linear function of the actual wavefunction $\psi$, hence itself a function of space and time. The question is: will the actual wavefunction ever reach this moving target, i.e., $\psi = \psi_{\rm eq}$? Based on its definition, this can only occur if $\psi_{\rm eq}$ lies in the null-space of the scattering matrix $M$, namely: \begin{equation} \label{EQUIL} M \psi_{\rm eq} = 0. \end{equation} For the case of a free massive particle, it can be checked explicitly that the only solution is the trivial vacuum $\psi_{\rm eq} \equiv 0$. This means that the spinorial wavefunction is a superposition of (both slow and fast) zero-average oscillations around a local equilibrium which, consistently with the reversible nature of quantum mechanics, is actually never attained.
Although this remains to be checked in detail, we conjecture that the same holds true for the case of a massive particle in an external potential, because in this case the Dirac equation is still linear. Based on (\ref{EQUIL}), the condition for the local equilibrium to depart from the trivial vacuum is that the matrix $M$ be singular, i.e. the local equilibrium is a zero-mode of the scattering matrix. Non-trivial quantum zero-modes, $\psi_{\rm eq} \neq 0$, may indeed arise for {\it non-linear} quantum wave equations, such as the Gross-Pitaevskii equation or the mean-field version of the Nambu-Jona-Lasinio model, to be discussed shortly. A non-trivial local equilibrium would then signal a spontaneously broken symmetry, which is indeed the distinctive trait of the aforementioned non-linear quantum wave equations. Even though the notion of quantum equilibrium remains purely formal in nature, it is argued that it might nonetheless facilitate quantum computing implementations based on the compact expressions (\ref{POST}) and (\ref{QCOMP}). This stands out as a very interesting topic for future research. \section{QLB in curved space-time} \label{sec:QLBc} Quantum walks have been shown to map into Dirac-like equations in curved space as well, by evaluating the continuum limit of certain QW's \cite{PhysRevA.88.042301,QW3}. Here, in the same spirit as in the other sections, a QW structure is obtained by discretizing the Dirac equation in curved space-time using a QLB-like approach. However, because the wave function propagates on a curved manifold, the structure of the resulting QW is different from Eq. \eqref{QW} and should include a residency matrix that corrects the streaming step, which is strictly valid only in flat space. This is different from the result obtained in \cite{PhysRevA.88.042301,QW3}.
The Dirac equation in a static (1+1) curved space reads: \begin{equation} \label{DIRACG} \gamma^a \left[ e^{\mu}_a (\partial_{\mu} -i A_{\mu}) + \frac{1}{2}g^{-1/2} \partial_{\mu} (g^{1/2} e^{\mu}_a) \right]\psi =-i m \psi, \end{equation} where $a=0,1$ and $\mu=t,z$. In the above, $e^{\mu}_a$ is the two-dimensional vierbein (zweibein) relating the components of the locally tangent Minkowski space basis $(e_0,e_1)$, described by the metric $\eta_{ab}$, to the global space basis $(e_t,e_z)$, described by the general metric $g_{\mu \nu}$. Also, $g:= \mathrm{det} (g_{\mu \nu})$ is the determinant of the metric tensor. The general form of the corresponding partial differential equation is \begin{equation} \partial_t \psi + \sigma_{z}A(z)\partial_z \psi = Q(z,t) \psi, \end{equation} with $A(z) \in \mathbb{R}$ a function of space and $Q(z,t)$ the two-by-two gravitational collision matrix associated with Eq. (\ref{DIRACG}). This is a hyperbolic system of equations, where the advection speed $A(z)$ should not be confused with the vector electrodynamic potential. Since the advection term is heterogeneous, a strict QLB structure, i.e. streaming along constant lightcones $\Delta z= \mp c \Delta t$, is no longer viable. Here, the situation is very similar to classical LB schemes on non-uniform grids (unsurprisingly, since in dimension $D=1$, gravity is basically a stretching of the metric). For this purpose, several finite-volume LB (FVLB) schemes have been formulated, whose main outcome is that the streaming operator is no longer a diagonal matrix with eigenvalues $\pm 1$, as required by the formal QLB structure \cite{FVLB,nannelli1992lattice}. In this respect, a finite-volume QLB scheme for the Dirac equation in curved space can be obtained by writing the Dirac equation as \begin{equation} \partial_t \psi +\partial_z \left[ \sigma_{z}A(z) \psi \right] = Q'(z,t) \psi, \end{equation} with $Q'(z,t) := Q(z,t)+\partial_{z} A(z)\sigma_{z}$.
Then, this equation is integrated over a control volume $\mathcal{V}$ (enclosed by a surface $\mathcal{S}$), extending from $(j-\frac{1}{2})\Delta z$ to $(j+\frac{1}{2})\Delta z$. Finally, applying a Crank-Nicolson time-marching and combining with upwind finite differences, the discretized equations are given by \begin{eqnarray} \psi_{1,j}^{n+1}-\psi_{1,j}^{n} + \frac{1}{c}\oint A\psi_{1} d\mathcal{S} &=& \frac{1}{2c} (Q'_{11,j-1}\psi_{1,j-1}^{n}+Q'_{11,j}\psi_{1,j}^{n+1}) \nonumber \\ && + \frac{1}{2c} (Q'_{12,j+1}\psi_{2,j+1}^{n}+Q'_{12,j}\psi_{2,j}^{n+1}),\\ \psi_{2,j}^{n+1}-\psi_{2,j}^{n} - \frac{1}{c}\oint A\psi_{2} d\mathcal{S} &=& \frac{1}{2c} (Q'_{22,j+1}\psi_{2,j+1}^{n}+Q'_{22,j}\psi_{2,j}^{n+1}) \nonumber \\ && + \frac{1}{2c} (Q'_{21,j-1}\psi_{1,j-1}^{n}+Q'_{21,j}\psi_{1,j}^{n+1}), \end{eqnarray} where $c := \Delta z/\Delta t$ is the uniform lattice light speed (or CFL condition). The boundary integrals are given by \begin{eqnarray} \oint A\psi_{1,2} d\mathcal{S} = [A \psi_{1,2}]_{j+\frac{1}{2}} - [A \psi_{1,2}]_{j-\frac{1}{2}}, \end{eqnarray} and thus require an interpolation from the cell centers $j$ to the cell boundaries at $j \pm \frac{1}{2}$. Following a common practice in finite-volume formulations of hyperbolic problems, the flux terms are approximated by \cite{leveque2002finite}: \begin{eqnarray} \left[A \psi_{1}\right]_{j+\frac{1}{2}} &=& A_{j} \psi_{1,j}^{n}, \\ \left[A \psi_{1}\right]_{j-\frac{1}{2}} &=& A_{j-1} \psi_{1,j-1}^{n} ,\\ \left[A \psi_{2}\right]_{j+\frac{1}{2}} &=& A_{j+1} \psi_{2,j+1}^{n}, \\ \left[A \psi_{2}\right]_{j-\frac{1}{2}} &=& A_{j} \psi_{2,j}^{n} .
\end{eqnarray} As a result: \begin{eqnarray} \left(1- \frac{Q'_{11,j}}{2c} \right)\psi_{1,j}^{n+1} - \frac{Q'_{12,j}}{2c}\psi_{2,j}^{n+1} &=& \left(1 - \frac{A_{j}}{c} \right) \psi_{1,j}^{n} +\left(\frac{Q'_{11,j-1}}{2c} + \frac{A_{j-1}}{c} \right) \psi_{1,j-1}^{n} \nonumber \\ && + \frac{Q'_{12,j+1}}{2c} \psi_{2,j+1}^{n},\\ \left(1- \frac{Q'_{22,j}}{2c} \right)\psi_{2,j}^{n+1} - \frac{Q'_{21,j}}{2c}\psi_{1,j}^{n+1} &=& \left(1 - \frac{A_{j}}{c} \right) \psi_{2,j}^{n} +\left(\frac{Q'_{22,j+1}}{2c} + \frac{A_{j+1}}{c} \right) \psi_{2,j+1}^{n} \nonumber \\ && + \frac{Q'_{21,j-1}}{2c} \psi_{1,j-1}^{n}. \end{eqnarray} It is readily seen that this reduces to a standard QLB in the limit of a uniform grid, when $A_{j}/c =1$. In this case the streaming is diagonal with speed $\pm c$ and the spinors at $(j,n+1)$ are connected to the corresponding spinors at $(j \pm 1,n)$ by a local $2 \times 2$ matrix, which can be readily inverted to deliver a fully explicit map. However, when $A_{j}/c \ne 1$, the spinors at $(j,n)$ also enter the map, so that local inversion delivers a slightly more elaborate structure, namely: \begin{eqnarray} \psi^{n+1}_{j} = (R + TS)\psi^{n}_{j} , \end{eqnarray} which can be written more explicitly as \begin{eqnarray} \label{GQLB} \begin{bmatrix} \psi^{n+1}_{1,j} \\ \psi^{n+1}_{2,j} \end{bmatrix} = R \begin{bmatrix} \psi^{n}_{1,j} \\ \psi^{n}_{2,j} \end{bmatrix} + T \begin{bmatrix} \psi^{n}_{1,j-1} \\ \psi^{n}_{2,j+1} \end{bmatrix}. \end{eqnarray} In the above, $T$ is the local $2 \times 2$ transfer matrix including collisions, $S$ is the streaming operator and $R$ the local residency matrix, expressing the fraction of spinors which are left in the cell centered about $z$ as the quantum system advances from $t$ to $t+\Delta t$. This is depicted in Fig. \ref{fig:trans_res}. \begin{figure} \includegraphics[scale=0.5]{gqlb.jpg} \caption{Sketch of the transfer and residency matrix elements.
$T_{11}$ is the fraction of the up-moving spinor jumping from $j-1$ at time $n-1$ to $j$ at time $n$, while $R_{11}$ is the fraction left in $j-1$. $T_{22}$ and $R_{22}$ bear the same meaning for the down-moving spinor. } \label{fig:trans_res} \end{figure} Clearly, the residency matrix vanishes in the case of a uniform mesh, i.e. no gravity. The mapping (\ref{GQLB}) represents the ``gravitational'' QLB. The detailed expressions of the streaming and residency matrices depend on the specific form of the metric tensor and associated vierbeins. Moreover, this analysis concentrates on the mathematical structure (streaming and collision steps) of the resulting scheme rather than on its numerical properties (convergence, stability, etc.). These topics will be the object of a future publication. \section{Multi-dimensions} \label{sec:multiD} The discretization presented in this work extends to the $D+1$ dimensional case by applying the notion of operator splitting. This implies the inclusion of a new dynamic step which is entirely quantum: namely a ``rotation'', designed so as to keep the spin aligned with the momentum along each of the three spatial directions. Schemes using this strategy can be found in \cite{QLB,FIL} and in \cite{FillionGourdeau20121403} for higher-order splittings. It might be that such a rotation is not needed if the Dirac equation is formulated as a random walk on lattices with more natural topologies, such as the diamond lattice. The QLB-QW equivalence in multi-dimensions will be discussed in a future publication. However, to demonstrate the strength of the numerical schemes presented here and to show some possible applications for quantum computing, numerical results in 2-D are presented in the following.
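Before turning to two dimensions, the stream-collide loop can be illustrated in its simplest setting, the free 1-D QLB of Section \ref{sec:QLB}. A minimal sketch (Python/NumPy; the periodic boundaries, lattice size, mass and Gaussian initial packet are illustrative assumptions):

```python
import numpy as np

m, dt = 1.0, 0.1                       # mass and time step (dz = dt, CFL = 1)
N = 512
z = (np.arange(N) - N//2)*dt
# entries of the free transfer matrix T_free = [[a, b], [-b, a]]
a = (1 - (dt*m)**2/4)/(1 + (dt*m)**2/4)
b = -dt*m/(1 + (dt*m)**2/4)
# normalized Gaussian packet in the first spinor component
psi1 = np.exp(-z**2/(4*4.0**2)).astype(complex)
psi2 = np.zeros(N, dtype=complex)
psi1 /= np.sqrt(np.sum(abs(psi1)**2))

for _ in range(1000):
    s1 = np.roll(psi1, +1)             # streaming: psi1 from site j-1
    s2 = np.roll(psi2, -1)             # streaming: psi2 from site j+1
    psi1, psi2 = a*s1 + b*s2, -b*s1 + a*s2   # collision with T_free

norm = np.sum(abs(psi1)**2 + abs(psi2)**2)
print(abs(norm - 1.0) < 1e-10)         # unitary scheme: norm conserved
```

Since $T_{\rm free}$ is orthogonal ($a^2+b^2=1$), the $L^{2}$ norm is conserved to machine precision for any value of $m\Delta t$, in line with the unconditional stability of the Crank-Nicolson construction.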
\subsection{Numerical results} As an example of possible applications of the QLB scheme, we present two representative simulations: Klein tunnelling in the presence of random impurities and the Dirac equation with Nambu-Jona-Lasinio (NJL) interactions in $2+1$ space-time dimensions. Details on the numerical methods used to obtain these results are given in \cite{QLB,FIL,nonlinear_dirac}. Also, these results are not completely new, as similar systems have been studied in \cite{FIL,RANDOM}. \subsubsection{Graphene with random impurities} In the first numerical test, the propagation of a Gaussian wave packet through a graphene sample with randomly distributed impurities is simulated \cite{RANDOM}. In Ref. \cite{RANDOM}, simulations are performed for different values of the impurity concentration and the potential barrier, in order to provide an estimate of the effect of impurity concentration on the conductivity of the graphene sample. In \figref{imp05_v50}, we report some representative snapshots of the first $1800$ time steps of the simulation, at an impurity percentage of $0.5\%$ and potential $V=50$ MeV. A lattice of size $2048\times 512$ cells is used and the cell size is chosen to be $\Delta z = 0.96$ nm, while the spreading of the initial Gaussian wave packet is $\sigma = 48$ (in numerical units), leading to a Fermi frequency $k_F = 0.117$ ($80$ MeV in physical units). In this simulation, a fully relativistic particle ($m=0$) is considered. From \figref{imp05_v50}, we can see that the wave packet is scattered by the impurities, giving rise to a plane front out of the initial Gaussian configuration. As a consequence of the randomness induced in the wave function by the disordered medium, there is a momentum loss and therefore the motion of the wave packet is found to experience a corresponding slow-down. It is also found that the wave packet takes more time to regroup as the impurity concentration and impurity potential are increased.
\begin{figure} \centering \includegraphics[scale=0.20]{v50_imp05_00.png} \\ \vspace{0.2cm} \includegraphics[scale=0.20]{v50_imp05_03.png} \\ \vspace{0.2cm} \includegraphics[scale=0.20]{v50_imp05_05.png} \\ \vspace{0.2cm} \includegraphics[scale=0.20]{v50_imp05_06.png} \caption{Wave packet density $\rho$ at times $t = 0$, $900$, $1500$, and $1800$ (lattice units) for a Gaussian wave packet propagating through a graphene sample with randomly distributed impurities. The simulation is performed with impurity percentage $C=0.5\%$ and impurity potential $V = 50$ MeV.} \label{fig:imp05_v50} \end{figure} \subsubsection{Nambu-Jona-Lasinio interaction} As a second example, we present a $2+1$ space-time simulation of the Dirac equation with a Nambu-Jona-Lasinio (NJL) interaction \cite{NJL1}. The Dirac equation with an NJL interaction term driven by the coupling parameter $g$ reads \begin{equation} \label{eq:DiracNJL} (i \gamma^{\mu} \partial_{\mu} -m ) \psi + g ( (\bar \psi \psi) \psi + (\bar \psi \gamma^5 \psi) \gamma^5 \psi) = 0, \end{equation} where $\psi = (\psi_1, \psi_2, \psi_3, \psi_4)^T$, $\gamma^5 \equiv i \gamma^0 \gamma^1 \gamma^2 \gamma^3$ and $\bar \psi = \psi^{\dag} \gamma^0$. This model represents a paradigm for dynamic mass acquisition via spontaneous symmetry breaking due to the non-linear interactions. Let us consider an initial condition given by the following Gaussian minimum-uncertainty wave packet: \begin{equation} G_0(z,y) = (2 \pi \sigma^2)^{-1/2} \exp\left(-\frac{z^2+y^2}{4 \sigma^2}\right), \end{equation} centered about $(z,y)=(0,0)$, with initial width $\sigma$.
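As a quick numerical sanity check (illustrative, not part of the original simulations), the normalization of $G_0$ can be verified directly: with the prefactor $(2\pi\sigma^2)^{-1/2}$, the density $|G_0|^2$ integrates to unity over the plane.

```python
import numpy as np

# Numerical check that the minimum-uncertainty packet G_0 above is
# density-normalized: |G_0|^2 integrates to 1 over the (z, y) plane.
sigma = 48.0
L = 12 * sigma                      # integration box, wide enough for the tails
z = np.linspace(-L, L, 2001)
dz = z[1] - z[0]
zz, yy = np.meshgrid(z, z)
G0 = (2 * np.pi * sigma**2) ** -0.5 * np.exp(-(zz**2 + yy**2) / (4 * sigma**2))
total = np.sum(np.abs(G0) ** 2) * dz * dz   # Riemann-sum approximation
print(total)  # ≈ 1.0
```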
Let $k_z$ and $k_y$ determine the initial energy of the wave packet and impose the following initial condition: \begin{equation} \label{eq:IC_2d} \begin{split} u_1(z,y) = u_2(z,y) &= C_u G_0(z,y) \exp(i(k_z z + k_y y)), \\ d_1(z,y) = d_2(z,y) &= C_d G_0(z,y) \exp(-i(k_z z + k_y y)), \end{split} \end{equation} where the coefficients $C_u$ and $C_d$ obey the condition $2 C_u^2 + 2 C_d^2 = 1$, so that $\rho = |\psi_1|^2 + |\psi_2|^2 + |\psi_3|^2 + |\psi_4|^2 = |G_0|^2$.\\ A grid of $N_z \times N_y = 1024^2$ elements is used, the initial wave packet spread is set at $\sigma=48$, and a fully relativistic particle ($m=0$) is considered.\\ In these simulations, we impose $g=0$ and $g=1000$ and vary the initial energy of the wave packet $k\equiv k_z = k_y$ in order to inspect the effect of this parameter on the wave packet separation, which, in turn, informs on the effective mass acquired by the up- and down-propagating modes. In \figref{varying_k}, the wave function density at time $t=200$ for $k=0.004$, $0.04$ and $0.4$ is shown for $g=0$ and $g=1000$, respectively. The figure shows that sufficient energy, $k>0.004$, is needed to observe the splitting of the wave packet. The effects of the non-linear interactions, fringes and distortions, are also clearly visible in the right column of \figref{varying_k}. A quantitative analysis in the one-dimensional case led to satisfactory agreement with asymptotic solutions for the dynamic mass as a function of the interaction strength $g$. A similar analysis in two spatial dimensions remains to be developed. \begin{figure}[htbp] \centering $g=0$ \hspace{2.5cm} $g=1000$\\ \vspace{0.2cm} \includegraphics[scale=0.25]{g0_k004.png} \includegraphics[scale=0.25]{g500_k004.png}\\ \vspace{0.2cm} \includegraphics[scale=0.25]{g0_k04.png} \includegraphics[scale=0.25]{g500_k04.png}\\ \vspace{0.2cm} \includegraphics[scale=0.25]{g0_k4.png} \includegraphics[scale=0.25]{g500_k4.png}\\ \caption{Wave packet density $\rho = |\psi_1|^2 + |\psi_2|^2 + |\psi_3|^2 + |\psi_4|^2$ for the scheme solving the Dirac equation with NJL interaction.
These snapshots are taken at time $t=200$ for $k=0.004$, $0.04$ and $0.4$ (from top to bottom). The figure shows how the initial energy affects the separation phenomenon. In particular, at low energy, $k=0.004$, no splitting of the wave packet is observed. } \label{fig:varying_k} \end{figure} \section{Summary and outlook} \label{sec:summary} Summarizing, we have reviewed discretizations of the Dirac equation and described their mapping onto QWs. These relations may allow the solution of the Dirac equation on quantum computers. In the first part, a general argument is given, using the operator-splitting method. Then, the QLB scheme is studied within the same perspective and a similar relation is found. We have also shown that a similar structure remains in curved space, using a scheme based on a finite-volume formulation, with the important caveat that the exact nature of the streaming operator, typical of QLB, is no longer preserved. Rather, one sees the appearance of the residency matrix, which characterizes the fraction of the spinor which is left in the cell after one step of the time evolution. This scheme, along with its generalization to many dimensions, will be studied in future work. \begin{backmatter} \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} SS had the idea to link QLB to QWs, and to develop a QLB-like method for the curved-space Dirac equation. He also participated in the calculation and development of the numerical methods. FFG introduced the operator-splitting method and participated in the calculations and development of the numerical methods. SP developed the multi-dimensional extensions and performed the numerical calculations. All authors read and approved the final manuscript. \section*{Acknowledgements} One of the authors (SS) is very grateful to F. Debbasch for introducing him to the notion of quantum walks.
He also wishes to express his gratitude to Marcelo Alejandro Forets for organizing a very informative workshop on Relativistic Quantum Walks, where the ideas behind this paper were first drafted. \bibliographystyle{bmc-mathphys}
\section{Introduction} In the past few years, Twitter has been used as a conference backchannel platform in academic events, aiming to expand community communication and participation \cite{ebner2009introducing,ross2011enabled}. Attendees using Twitter are generally involved in note taking, sharing resources and reporting individual real-time reactions to events, covering both conference presentations and conference social activities. This supports scholars' activities such as disseminating their work and engaging the general public and newcomer scientists in the research communities \cite{mitchell2007places}. It is a common practice in research conferences to use hashtags in the tweets to identify that particular event (e.g. \#hypertext2015). International academic conferences have a diverse community, with different cultural backgrounds and languages. Thus, it is interesting to analyze how language affects the generation of content and interaction among attendees. Such a study would make it possible to observe how integrated a research community is, as well as to identify its blind spots in communication. This can be of special interest to conference organizers, not only to evaluate communication but also to obtain an overview of their audiences. Despite the research done in the past \cite{dyl43, Letierce2010,WenCSCW14,WenHT14} on academic conferences, little has been done on language communities and the communication established among them. To bridge this gap, we explore the language of 7M tweets posted by 18K users during 26 Computer Science conferences over five years (one week before and after each conference). We group users by the language(s) they use to tweet in order to explore how different language communities interact. Although English is expected to be the lingua franca of many international events, we wonder to what extent people use other languages on Twitter during academic conferences.
\textbf{Research Questions.} Overall, our study was driven by the following research questions: \begin{itemize}[itemsep=0pt,parsep=0pt,topsep=0pt, partopsep=0pt] \item \textbf{RQ1. Conference attendees' languages}: To what extent do people tweet in languages other than English at conferences? \item \textbf{RQ2. Interactions between lingua groups}: How do lingua groups interact with each other? \item \textbf{RQ3. Effect of language}: Is there an effect of language or lingua group on online user interaction? \end{itemize} \textbf{Main results.} We find that most people tweet only in English (61\%) at conferences, but most of the tweets are posted by multilingual users, whose participation varies significantly across conferences. Additionally, we observe that \emph{English monolinguals} receive most of the attention and interact more within their group, while the opposite is observed for most of the members of other language communities. Finally, we show that people who do not interact with other attendees are mostly monolinguals, while people who interact with others present more language diversity, with a balanced distribution of monolinguals and multilinguals. \vspace{-3mm} \section{Dataset} \label{dataCollections} We selected a representative set of conferences in Computer and Information Science from the CORE Computer Science Conference Ranking list\footnote{http://www.core.edu.au/index.php/conference-rankings}: 26 conferences active on Twitter every year between 2009 and 2013. Furthermore, we manually checked that the selected conferences did not overlap with other events. To retrieve the tweets from these events in previous years, we used the Topsy API and crawled tweets containing the corresponding official hashtag (e.g., \#chi12, \#www2009) within a two-week time window around the dates each conference took place (from seven days before until seven days after the conference ended). We found that these tweets were posted by 22,021 participants in total.
We acknowledge that these participants also interact with others without using the conference hashtag; for this reason, we also crawled their timeline tweets during the same period. In total, we obtained \emph{6,993,693} tweets. \textbf{Language Identification.} To identify the language of the tweets, we removed all URLs, mentions and hashtags. Then we set a minimum threshold of \emph{4} remaining words in a tweet to identify its language. The language detection task was performed with a professional language tool provided by Yahoo Labs Barcelona that is able to identify over 40 languages, as in \cite{Poblete2011}. Following this process we were left with 6,184,775 tweets (88\% of the initial sample) with an identified language. Finally, we proceeded to model each user by the three most frequent languages they used to tweet (setting a minimum threshold of \emph{5} tweets per language). Consequently, we found \emph{266} lingua groups covering \emph{18,347} users, each using up to three different languages in their tweets. \section{Results}\label{RQs} \begin{table}[t!] 
\centering \scalebox{0.8}{ \begin{tabular}{|l|r|r|r||r|r|r|r|} \cline{5-8} \multicolumn{4}{c||}{} &\multicolumn{4}{c|}{\textbf{Diversity percentage}} \\ \hline &\multicolumn{3}{c||}{Lingua groups} &\multicolumn{2}{c|}{General} &\multicolumn{2}{c|}{Reciprocated} \\ \hline Conference & \multicolumn{1}{l|}{1-ling} & \multicolumn{1}{l|}{2-ling} & \multicolumn{1}{l||}{$\geq$ 3-ling} & \multicolumn{1}{l|}{MT} & \multicolumn{1}{l|}{RT} & \multicolumn{1}{l|}{MT} & \multicolumn{1}{l|}{RT} \\ \hline AAAI & 81\% & 8\% & 11\% & 34\% & 29\% & 16\% & 20\%\\ \hline ACMMM & 52\% & 38\% & 11\% & 53\% & 53\% & 48\% & 41\%\\ \hline CHI & 76\% & 17\% & 7\% & 49\% & 48\% & 40\% & 30\%\\ \hline CIKM & 66\% & 24\% & 10\% & 54\% & 54\% & 44\% & 40\%\\ \hline ECIR & 58\% & 27\% & 15\% & 55\% & 57\% & 43\% & 31\%\\ \hline ECIS & 57\% & 31\% & 12\% & 46\% & 44\% & 24\% & 0\%\\ \hline HT & 64\% & 26\% & 10\% & 52\% & 53\% & 37\% & 29\%\\ \hline ICIS & 67\% & 26\% & 8\% & 44\% & 41\% & 19\% & 16\%\\ \hline ICML & 75\% & 17\% & 8\% & 52\% & 55\% & 20\% & 21\%\\ \hline ICMT & 51\% & 30\% & 20\% & 70\% & 62\% & 31\% & 20\%\\ \hline ICSE & 58\% & 32\% & 10\% & 47\% & 46\% & 40\% & 47\%\\ \hline ISMAR & 64\% & 28\% & 8\% & 39\% & 37\% & 19\% & 21\%\\ \hline IUI & 62\% & 21\% & 17\% & 59\% & 58\% & 45\% & 44\%\\ \hline KDD & 73\% & 18\% & 10\% & 53\% & 50\% & 38\% & 37\%\\ \hline MobileHCI & 66\% & 23\% & 11\% & 50\% & 47\% & 48\% & 39\%\\ \hline NIPS & 74\% & 19\% & 8\% & 46\% & 48\% & 25\% & 20\%\\ \hline SIGGRAPH & 77\% & 16\% & 7\% & 38\% & 32\% & 24\% & 19\%\\ \hline SIGIR & 68\% & 21\% & 12\% & 56\% & 58\% & 36\% & 39\%\\ \hline SIGMOD & 72\% & 23\% & 6\% & 58\% & 53\% & 19\% & 12\%\\ \hline SLE & 59\% & 32\% & 9\% & 58\% & 58\% & 40\% & 40\%\\ \hline UBICOMP & 71\% & 21\% & 9\% & 59\% & 57\% & 55\% & 44\%\\ \hline UIST & 71\% & 24\% & 5\% & 60\% & 58\% & 35\% & 32\%\\ \hline VLDB & 67\% & 26\% & 7\% & 56\% & 53\% & 29\% & 21\%\\ \hline WSDM & 65\% & 22\% & 13\% & 61\% & 60\% & 48\% & 39\%\\ 
\hline WWW & 52\% & 32\% & 15\% & 52\% & 51\% & 43\% & 40\%\\ \hline XP & 58\% & 35\% & 7\% & 53\% & 52\% & 51\% & 54\%\\ \hline \end{tabular} } \vspace{-1mm} \caption{Percentage of monolinguals, bilinguals and multilinguals tweeting in each conference between 2009-2013 (cols 2-4). Diversity percentage for different types of interactions (cols 5-8).} \label{tab:tbl_languages_number_total} \vspace{-3mm} \end{table} \setlength\tabcolsep{1mm} \begin{table}[t!] \centering \scalebox{0.8}{ \begin{tabular}{|l|r|r|r|} \hline Lingua & Users & Tweets & (tweets/user) \\ \hline en & 61.31\% & 29.14\% & 179.50\\ \hline en-fr & 6.46\% & 3.57\% & 208.79\\ \hline en-es & 3.79\% & 2.39\% & 238.14\\ \hline de-en & 2.18\% & 1.63\% & 281.89\\ \hline en-nl & 2.15\% & 1.50\% & 263.54\\ \hline fr & 2.00\% & 0.26\% & 49.05\\ \hline en-ja & 1.92\% & 3.55\% & 696.92\\ \hline en-es-pt & 1.62\% & 4.06\% & 944.93\\ \hline en-pt & 1.44\% & 0.35\% & 92.65\\ \hline en-it & 1.36\% & 1.56\% & 434.83\\ \hline nl & 1.36\% & 0.16\% & 43.33\\ \hline ja & 1.09\% & 1.09\% & 377.89\\ \hline en-es-fr & 0.93\% & 8.89\% & 3609.91\\ \hline ca-en-es & 0.79\% & 2.14\% & 1016.69\\ \hline en-ko & 0.57\% & 0.51\% & 340.24\\ \hline es & 0.52\% & 0.06\% & 42.92\\ \hline Others& 10.52\% &39.14\% & - \\\hline \end{tabular} } \vspace{-1mm} \caption{Statistics of top lingua groups (more than 90 users). We show the percentage of users belonging to each \emph{lingua} (Users), the percentage of tweets (Tweets) and the engagement (tweets/user).}\label{tbl_stats_linguas} \vspace{-1mm} \end{table} \textbf{RQ1. To what extent do people tweet in languages other than English across conferences?} \label{RQ1} As expected, we found that the majority of tweets are written in English (76\%). Nevertheless, due to the multicultural nature of conferences, there is a non-negligible 24\% of tweets in languages other than English (en), such as French (fr), Spanish (es), German (de) and Japanese (ja). 
Furthermore, we found in our dataset that many people post tweets in more than a single language. We quantify this observation in Table \ref{tab:tbl_languages_number_total}, which shows the percentage of users who tweet in a single language (1-lingua), in two languages (2-lingua), or in three or more ($\geq$ 3-lingua) in each conference. We observe that the percentage of people who tweet in two or more languages ranges from close to 20\% (AAAI, SIGGRAPH) up to around 50\% (ACMMM, ICMT, WWW), showing important differences among conferences in the distribution of users who tweet in one or more languages. Based on these results, rather than analyzing languages as isolated groups, we studied the lingua groups as communities of people who speak one or more languages. Table \ref{tbl_stats_linguas} describes the top language communities by number of users. The table shows that the majority of users are classified as \emph{English monolinguals} (61\%) but, interestingly, they produce only 29\% of all tweets with a moderate engagement (only 179.5 tweets per user). In contrast, we see that users of multilingual groups are the most engaged (3609.9 tweets/user for en-es-fr, 1016.7 for ca-en-es, and 944.9 for en-es-pt). These results lead us to further analyze specific lingua groups to unveil the interaction between language communities and their online behavior. 
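The per-user lingua-group assignment described in the Dataset section (the three most frequent tweet languages, each with at least 5 tweets) can be sketched as follows; the sample tweet history and the sorted-label convention are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

# Sketch of the lingua-group assignment: keep a user's (at most) three most
# frequent tweet languages, each with at least 5 tweets, and label the group
# by the sorted language codes so that e.g. "en-es" and "es-en" coincide.
MIN_TWEETS_PER_LANG = 5
TOP_K = 3

def lingua_group(tweet_langs):
    counts = Counter(tweet_langs)
    kept = [lang for lang, n in counts.most_common(TOP_K)
            if n >= MIN_TWEETS_PER_LANG]
    return "-".join(sorted(kept)) if kept else None

# Hypothetical user history: 40 English, 12 Spanish, 3 French tweets.
tweets = ["en"] * 40 + ["es"] * 12 + ["fr"] * 3
print(lingua_group(tweets))  # "en-es" (fr dropped: below the 5-tweet threshold)
```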
\setlength\tabcolsep{1mm} \begin{table}[t] \centering \scalebox{0.8}{ \begin{tabular}{ |l|r|r|l|r|r|} \hline \multicolumn{6}{|c|}{\textbf{General}} \\ \hline \multicolumn{3}{|C{2cm}|}{\textbf{Mentions (148,184)}} & \multicolumn{3}{C{2cm}|}{\textbf{Retweets (91,523)}} \\ \hline Ling.&Att.& out-links& Ling.&Att.& out-links \\ \hline en & 67\% & 37\% & en & 66\% & 37\% \\ \hline en-fr & 7\% & 56\% & en-fr & 7\% & 54\%\\ \hline de-en & 3\% & 74\% & de-en & 3\% & 78\% \\ \hline en-es & 3\% & 79\% & en-es & 3\% & 80\% \\ \hline en-ja & 2\% & 35\% & en-ja & 2\% & 42\% \\ \hline \hline \multicolumn{6}{|c|}{\textbf{Reciprocated}} \\ \hline \multicolumn{3}{|C{2cm}|}{\textbf{Mentions (25,956)}} & \multicolumn{3}{C{2cm}|}{\textbf{Retweets (6,496)}} \\ \hline Ling.&Att.& out-links& Ling.&Att.& out-links \\ \hline en & 57\% & 48\% & en & 51\% & 52\%\\ \hline en-fr & 8\% & 52\% & en-fr & 8\% & 44\%\\ \hline de-en & 4\% & 72\% & en-es & 5\% & 61\%\\ \hline en-es & 4\% & 71\% & de-en & 4\% & 74\%\\ \hline en-nl & 3\% & 71\% & en-nl & 3\% & 70\%\\ \hline \end{tabular} } \vspace{-3mm} \caption{\emph{Most popular linguas}: lingua groups ordered by the attention they receive across all conferences. The \emph{out-link} column represents the percentage of interactions going to other lingua groups.}\label{tbl_mostpop_lang} \vspace{-5mm} \end{table} \textbf{RQ2. How do lingua groups interact with each other?} \label{RQ2} \begin{figure}[t!] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \scalebox{0.8}{ \includegraphics[width=\linewidth]{graph-mentions-lang-gephi.pdf} } \caption{Mentions between lingua groups. An edge from lingua group $x$ pointing to lingua group $y$ shows proportions of mentions that people in lingua group $x$ directed to people in lingua group $y$. 
For readability, we only show probabilities $\geq 0.05$.} \label{fig-top10} \end{subfigure}\quad \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\linewidth]{denis-sankey2.png} \caption{Retweet interactions between the top 50 most active lingua groups.} \vspace{-3mm} \label{fig-alllanguages} \end{subfigure} \quad \caption{\emph{(a)} Nodes representing the top 10 lingua groups based on mentions. \emph{(b)} Interactions between lingua groups based on a source language (src) retweeting posts in a target language (dst).} \label{figlinguas} \vspace{-5mm} \end{figure} To answer this question, we first define two types of interactions: (1) general interactions and (2) reciprocated interactions. We refer to all retweets and tweets containing mentions as \emph{general interactions}, while \emph{reciprocated interactions} correspond to reciprocated retweets and tweets with mentions. Secondly, we measure diversity using the Gini-Simpson index, as in \cite{Chimera04,KimHT14}, where it is called the \emph{diversity index}. This diversity index ranges from 0 to 1 and measures the probability that two interactions taken at random from a set of interactions involve different lingua groups. Participants of a conference with a diversity index close to 0 will tend to interact with people of the same lingua group. Conversely, conferences with values close to 1 show a uniform distribution of interactions with other lingua groups. We define the diversity $D$ of a lingua group as: \vspace{-2mm} \begin{equation} D(c,i)= 1-\sum_{j\in S}\left({\frac{I_{i,j}^{c}}{N_{i}}}\right)^{2} \vspace{-3mm} \end{equation}\label{eq:diversity} with $N_{i}=\sum_{k\in S} {I_{i,k}^c}$, where $I_{i,j}^{c}$ is the total number of interactions between people of linguas $i$ and $j$, and $N_{i}$ is the total number of interactions of people of lingua $i$ in conference $c$. To obtain the diversity of a conference, we average $D(c,i)$ over all the linguas in conference $c$. 
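A minimal sketch of the diversity index defined above, assuming the interactions of one lingua group in one conference are given as a dictionary of counts per target group (the counts shown are hypothetical):

```python
# Gini-Simpson diversity D(c,i) = 1 - sum_j (I_ij / N_i)^2, where the
# dictionary maps each target lingua group j to the interaction count I_ij
# and N_i is the total number of interactions of group i.
def diversity(interactions):
    n = sum(interactions.values())
    if n == 0:
        return 0.0
    return 1.0 - sum((k / n) ** 2 for k in interactions.values())

# Hypothetical counts: a group interacting only within itself has D = 0,
# while a group spreading its interactions uniformly approaches D = 1.
print(diversity({"en": 100}))                                        # 0.0
print(diversity({"en": 25, "en-es": 25, "en-fr": 25, "de-en": 25}))  # 0.75
```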
Table \ref{tab:tbl_languages_number_total} shows the diversity for each conference (represented as a percentage). We find some interesting patterns showing that a lower percentage of monolinguals is linked to higher diversity. For example, ICMT is the most diverse conference for the general type of interactions and its percentage of monolinguals is the lowest of all (51\%). Conversely, AAAI shows a high percentage of monolinguals (81\%) and the lowest diversity for the general interactions. On the other hand, reciprocated interactions do not appear to be related to the percentage of monolinguals. For example, UBICOMP presents a high percentage of monolinguals and the highest diversity for the reciprocated interactions. Furthermore, we look at the attention \emph{received} by members of each lingua by calculating the number of mentions and retweets received from different users. Table \ref{tbl_mostpop_lang} shows the top 5 most popular lingua groups. Without doubt, English monolinguals are the most mentioned and retweeted in both the general and reciprocated interactions. Despite the fact that English monolinguals do not produce most of the tweets, they still receive most of the attention. This is mostly explained by the column \emph{out-links}, which shows the percentage of mentions and retweets directed to \emph{different} lingua groups. For example, we see that only 37\% of the mentions and retweets generated by English monolinguals refer to other groups. Interestingly, Japanese bilinguals also prefer to interact mostly within their group. Conversely, groups like \emph{en-fr, de-en, en-es} refer more to users of \emph{different} lingua groups in their interactions. More evidence of the unequal activity between lingua groups is seen in Figure \ref{figlinguas}, which considers only the top 10 lingua groups and shows (a) the mentions network (general type) and (b) the retweet network (general type) across lingua groups. 
Figure \ref{fig-top10} shows that 79\% of all mentions from the \emph{en} group also stay within the same group. Moreover, 35\% of mentions from the \emph{en-es} lingua group refer to users from the same group, and 48\% to the \emph{en} group. In Figure \ref{fig-alllanguages}, the Sankey plot represents the network of retweets. Again, here we see that in most cases the English group retweets members of the same group. At the same time, the English group receives most of the attention from other language communities. Interestingly, the lingua groups en-es-it, en-fr, en-es-pt and en-ja show a similar pattern in similar proportions, preferentially retweeting users of their own lingua groups. \textbf{RQ3. Is there any effect of language or lingua group on online user interaction?} \label{RQ3} We addressed this question by studying how the number of languages a Twitter user speaks affects her online behavior. As already explained, if a user has posted tweets in only one language we consider her in the 1-lingua group (monolingual), while another user tweeting in two languages will be in the 2-lingua group, and so on. We found two results that show, at the general and at the individual level, the effect of the number of languages on user interaction. At the general level, we found that among the users who posted tweets but had not interacted with other people (by mentioning them), the percentage of monolinguals is considerably larger (80.6\%) than that of multilinguals. A different picture is seen among users who interacted at least once during the conference (by mentioning someone in a tweet), since only 62.9\% of those users are monolinguals and the rest are multilinguals. We conducted a chi-square test of proportions comparing the distribution of monolinguals, bilinguals and trilinguals between people who interacted and people who did not. We found a statistically significant difference with $\chi^2 = 416.6$, $df = 2$, $p$-value $< .001$. 
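The chi-square test of proportions used above can be reproduced with a small hand-rolled Pearson test of independence on a $2\times 3$ contingency table (interacted vs. not $\times$ mono-/bi-/trilingual). The counts below are hypothetical, for illustration only; the reported statistic ($\chi^2 = 416.6$, $df = 2$) comes from the actual user table.

```python
# Hand-rolled Pearson chi-square test of independence on an r x c table:
# stat = sum over cells of (observed - expected)^2 / expected, with
# expected = row_total * col_total / grand_total and df = (r-1)(c-1).
def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = sum((obs - rows[i] * cols[j] / total) ** 2
               / (rows[i] * cols[j] / total)
               for i, r in enumerate(table) for j, obs in enumerate(r))
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

observed = [[806, 140, 54],    # hypothetical counts: did not interact
            [629, 250, 121]]   # hypothetical counts: interacted at least once
stat, df = chi_square(observed)
print(df)  # 2; compare stat against the chi-square critical value for df = 2
```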
This relation can be better observed in Figure \ref{fig-lingua-group}, where the group who interacted (right-side plot) has a more balanced distribution and hence a higher entropy (a measure of diversity \cite{shannon2001mathematical}) of $H(s) = 0.89$, compared to a smaller diversity of lingua groups among people who did not interact, with an entropy of $H(s) = 0.61$. Moreover, at the individual level, we found that the more languages a user speaks, the higher the likelihood of interacting with others. Table \ref{tab-logit-results} shows the results of a logistic regression where the dependent variable measures whether the user \textit{interacted} with other people or not. The factors in the regression are the \textit{year} of the conference and the number of languages the user has used to tweet (\textit{n\_languages}). We observe that the number of languages has a significant $\beta$ coefficient of 0.666 ($p<.001$), which can be interpreted as follows: keeping all the other factors fixed, for each additional language the user speaks, the odds of interacting in the network increase by $95\%$ (since $e^{0.666} = 1.95$). \begin{figure}[t!] \vspace{-4mm} \centering \scalebox{0.8}{ \includegraphics[width=\linewidth]{entropy_lingua_group.pdf} } \vspace{-4mm} \caption{Distribution of n-lingua groups considering users without (left graph) and with reply/mention interactions (right graph).} \vspace{-4mm} \label{fig-lingua-group} \end{figure} \begin{table}[!htbp] \small \vspace{-3mm} \centering \begin{tabular}{@{\extracolsep{5pt}}lcc} \\[-1.8ex]\hline \\[-1.8ex] Variable & $\beta$ coeff. & S.E. 
\\ \hline \\[-1.8ex] year(=2009) & $2.049^{***}$ & $(0.390)$ \\ year(=2010) & $2.458^{***}$ & $ (0.385)$\\ year(=2011) & $2.453^{***}$ & $ (0.385)$ \\ year(=2012) & $2.294^{***}$ & $ (0.383)$ \\ year(=2013) & $2.423^{***}$ & $ (0.383)$ \\ n\_languages & $0.666^{***}$ & $ (0.035)$ \\ Constant & $-1.371^{***}$ & $(0.385)$ \\ \hline \\[-1.8ex] Observations & \multicolumn{1}{c}{26,281}\\ \hline \\[-1.8ex] \textit{Note:} & \multicolumn{1}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular} \caption{Results of the logistic regression where the dependent variable is whether the user interacted on Twitter (mentions) and the independent variables are the conference $year$ and the number of languages spoken.} \vspace{-5mm} \label{tab-logit-results} \end{table} \vspace{-2mm} \section{Related Work} \label{section:related work} There are several studies on the role of Twitter in academic conferences. Letierce \emph{et al.}~\cite{dyl43,Letierce2010} showed that Twitter is frequently used to spread information across researchers using the official conference hashtags. Wen \emph{et al.}~\cite{WenCSCW14} studied conference participants and found that newcomer students receive little attention from senior members of the research community. In an extension of this work, Wen \emph{et al.}~\cite{WenHT14} expand their research by analyzing 16 conferences over five years, identifying factors that contribute to the continuing participation of users in the online Twitter conference activity. We have continued this line of research by exploring the influence of language during conferences. The role of language on Twitter has also been studied. Hong \emph{et al.}~\cite{Lichan201} studied differences in usage patterns between language communities on Twitter, while Kim \emph{et al.}~\cite{KimHT14} performed a sociolinguistic study on the role of mono- and bilinguals on Twitter across multilingual societies such as Qatar, Quebec and Switzerland. 
Inspired by them, we adopt similar methods to build language communities, but we target different lingua groups interacting at conferences. A broader but certainly related topic of study is the impact of \emph{culture} in online communication. Garcia \emph{et al.}~\cite{GarciaGavilanes2014} studied the most discriminative features influencing international conversation and attention on Twitter by mapping \emph{nationality} to IP addresses (e-mails) or geolocated tweets. Language and nationality are two important cultural dimensions of people's identities, but we find that by focusing on language(s) we capture the multicultural nature of most researchers who attend international conferences. \vspace{-2mm} \section{Conclusions \& Future Work} \label{conclusions} In this paper we show that the majority of users in Computer and Information Science conferences tweet only in English and most of the tweets are also posted in English. Nevertheless, our results indicate that members of other lingua communities produce most of the tweets and are more engaged than English monolinguals. A second observation is that although English is the lingua franca in academic conferences, English monolinguals apparently still prefer to interact among themselves. The same happens for other important communities such as English-Japanese bilinguals. This is not the case for most other important communities, who tend to interact more evenly with members of other linguas. Our final finding is that there is more language diversity among people who interact with others on Twitter during conferences, compared to people who do not. This result suggests an important implication: although English is the standard for scientific communication, diversity in language use is a catalyst for interactions in a community. These findings leave us with several questions and encourage us to extend our work in several directions. 
For example, which other aspects of people's culture can influence the communication gap across lingua groups? Can we identify that a research community requires more diversity by analyzing user interaction on Twitter? Can we identify user behavior related to specific lingua groups, such that we can differentiate English-Spanish bilinguals from English-German ones? \textbf{Acknowledgments:} This work was carried out during the tenure of an ERCIM ``Alain Bensoussan'' fellowship program by the 4th author. \small \balance \bibliographystyle{abbrv}
\section{Introduction} \label{sec:introduction} Many successful models of the broad-band spectra of accreting black holes contain a contribution from a moderately hot, optically thick layer on top of the relatively colder accretion disk. Such a layer is frequently postulated when the broad-band spectrum of an accreting system resembles a power law in the soft X-ray band, with the photon index above~2. The standard theory of the \citet[][hereafter SS disk]{SS73} accretion disk model predicts that the spectrum of an accretion disk is well modelled as a multi-color black body, showing an exponential cut-off at high frequencies connected with the maximum temperature in the disk atmosphere. Some exceptional quasars/active galaxies and specific spectral states of X-ray binaries selected for the determination of the black hole spin show such a thermal cut-off \citep{steiner2010}, but in many cases the spectrum continues as a relatively steep/soft power law. The hard (2-100 keV) X-ray spectra of radio-quiet AGN are generally characterised by a flat power-law shape, sometimes cut off around 100 keV \citep[][and references therein]{jourdain92,maisack93,perola02,ballantyne2014,malizia2014}, and the presence of reflection components (iron line, reflection hump) is commonly observed \citep{pounds90}. The soft (below 2 keV) band is generally characterised by an excess with respect to the extrapolation of this hard X-ray power law. This is the so-called soft X-ray excess. When fitted with a power law, it shows a steep ($\Gamma > 2$) spectral shape. The origin of this component is still unknown. It can be equally well fitted by blurred ionised reflection \citep{crummy2006}, blurred ionised absorption \citep{gierlinski04}, or by thermal Comptonized emission in a moderately hot, optically thick layer possibly located on top of the colder accretion disk \citep{walter93,magdziarz98,done12,petrucci13}. 
The quasar composite spectra are well explained by such a component, with photon index $\Gamma \sim 2.5$ \citep{laor1997,elvis12}. The whole class of Narrow Line Seyfert 1 galaxies also has similar soft X-ray slopes. Such a spectral element is also observed in galactic sources in the Very High State and Intermediate State \citep[e.g.][]{gierlinski03}. The roughly power-law shape of this component, as well as the correlation observed between the UV and soft X-ray bands, suggests Comptonization as the mechanism responsible for this emission. Furthermore, observations show that a significant fraction of the bolometric luminosity of the accreting system ($\sim 30 - 50$\%) is carried by this emitting layer \citep{vasudevan14}. Usually, it is postulated that the hot medium responsible for this radiation forms a skin or a corona above the inner parts of the accretion disk. This skin or corona is optically thick in many models of specific objects, with a Thomson optical depth of the order of 2 - 20. The observed slope of the soft X-ray spectrum does not determine the optical depth of the scattering medium, since it depends on the Compton $y$ parameter, i.e. a combination of the optical depth and the temperature. However, the fact that an unscattered disk component is not required in the fit, or the direct detection of the turn-off, implies rather low temperature and high optical depth. Such solutions are usually discussed in the context of specific observational data. \citet{white1982} required an optically thick corona to explain the temporal and spectral properties of the neutron star sources 4U 1822-37, 4U 2729+47 and Cyg X-3 \citep[see also][]{bayless10}, although recent papers \citep[e.g.][]{iaria13} argue that the corona is very optically thin and that the direct view of the neutron star in 4U 1822-37 is blocked by the outer rim of the accretion disk. 
\citet{magdziarz98} postulated the presence of a warm optically thick Comptonizing medium to fit the UV-soft X-ray spectrum of NGC 5548. However, they did not expect it to co-exist with the cold disk as a vertical layer; instead, they suggested this medium as a radial transition region between the cold outer disk and a hot inner flow. \citet{zhang2000} modelled the spectrum of GRO J1655-40 with a warm layer at T=1.0 keV with optical thickness $\tau=10$ located above a cold accretion disk. Indeed, they considered three vertical layers: a cold disk, a warm skin and a hot corona. \citet{janiuk01} required the presence of a warm corona of optical depth equal to 12 in order to fit the soft X-ray spectrum of the quasar/NLS1 object PG 1211+143. \citet{zycki01} found a coronal temperature $\sim 5$~keV and optical depth $\sim 3$ from their hybrid models of soft states of the X-ray binaries GS 1124-68 and GS 2000+25, and higher values of optical depth, $\sim 10$, were implied for some of the Very High State data sets. For the Seyfert 1 galaxy Mrk 509, \citet{petrucci13} found a soft corona with temperature $\sim0.5 $~keV and large optical depth $\sim 20$. Monte Carlo models of hot coronae with large optical depth were calculated by \citet{czerny03} in order to explain the quasar composite spectrum and the broad-band spectra of Ton S180, Mrk 359 and PG 1211+143, i.e. high accretion rate AGN. \citet{kubota04} obtained good spectral fits with a temperature of the order of 10 keV and optical depth $\sim 2$ in two data sets for the VHS of XTE J1550-564. \citet{jin12} successfully fitted the broad band spectra of 51 AGN with a model of accretion disk thermal emission, a low temperature optically thick Comptonization and a hot optically thin corona. In their fits the electron temperature in the thick corona was in the range of 0.1--2 keV, and the optical depth from 4 to 40 in different objects.
The model of the optically thick, low temperature corona surrounding the cold disk was also successfully applied to model the spectrum of a ULX source in IC 342 \citep{ebisawa03}, although the model was considered rather unphysical by the authors. Postulating an optically thick, hotter medium sandwiching the colder disk is in apparent conflict with the results of radiative transfer in the diffusion approximation. A temperature inversion is usually only obtained in the optically thin zone, whereas the temperature rises towards the interior of the celestial body in the optically thick zone. The solar corona is the best-known example. Thus the question arises whether the cold disk would heat up to reach the same temperature as the optically thick part of the corona. In this paper, we address this question using a very simple analytical model. We consider the vertical structure of the accretion disk/corona that is sketched in Fig.~\ref{fig:coronadisk}, and investigate the radiative and pressure equilibrium of the upper layers of the accretion flow as a function of the fraction of the accretion power that is dissipated in the corona. We show that the disk embedded in the hotter optically thick medium indeed does not heat up much more than in the case of an optically thin surrounding corona discussed in the seminal paper of \citet{haardt93}. We determine the conditions under which an optically thick, Compton cooled zone can exist in hydrostatic equilibrium with a specified underlying colder disk. The paper is organized as follows: in Sec.~\ref{sec:grey} we describe the solution of the radiative transfer equation for the grey atmosphere with additional heating, in Sec.~\ref{sec:temp} the temperature structure is presented, in Sec.~\ref{sec:hyd} the hydrostatic equilibrium is solved, and in Sec.~\ref{sec:comp} we show when the corona is dominated by Compton cooling. All results are discussed in Sec.~\ref{sec:dis}. \begin{figure}[t!]
\vspace{1.5mm} \begin{center} \includegraphics[width=\columnwidth]{coronadisk.pdf} \end{center} \caption{Sketch of the dissipative slab corona atop the non-dissipative atmosphere of the accretion disk.} \label{fig:coronadisk} \end{figure} \section{Grey optically thick scattering medium with dissipation} \label{sec:grey} We are interested in the case of a grey atmosphere with pure scattering (i.e. we neglect emission and absorption). We assume that the atmosphere is optically thick, and therefore adopt the Eddington approximation in the whole medium. We allow for an additional dissipation which heats the corona. For simplicity, we assume that the input energy rate per unit optical depth (and solid angle), $Q$, is uniform from the surface ($\tau=0$) to the base of the corona ($\tau=\tau_{\rm cor}$). This corona is located above a cold accretion disk which itself dissipates energy due to the accretion flow. The assumption that all dissipated energy is emitted in the form of radiation allows us to derive a simple, fully analytical solution for the local vertical temperature structure. At the surface of the corona the escaping radiation flux $F_{\rm acc}^{\rm tot}$ is the sum of the flux generated through internal dissipation inside the accretion disk, $F_{\rm disk}$, and that produced through dissipation in the corona, $F_{\rm cor}$: \begin{equation} F_{\rm acc}^{\rm tot}=F_{\rm disk}+F_{\rm cor} \, . \end{equation} The assumption that the heating of the corona is uniform gives: \begin{equation} F_{\rm cor}=4\pi Q\tau_{\rm cor} \, . \label{eq:fc} \end{equation} Additionally, we define a convenient parameter: \begin{equation} \chi = \frac{F_{\rm cor}}{F_{\rm acc}^{\rm tot}} \, , \label{eq:chi} \end{equation} which is the fraction of the total accretion power that is dissipated in the corona.
The frequency-integrated radiation transfer equation with an additional energy input $Q$ can be written as: \begin{equation} \mu {dI \over d\tau} = I - J - Q \, , \label{eq:rt} \end{equation} where $\mu$ is the cosine of the polar angle measured from the vertical. The optical depth, $\tau$, is measured downward, from the top of the corona toward the disk, $I(\mu,\tau)$ is the radiation intensity, and $J(\tau)$ is the mean intensity. The third term on the right hand side modifies the source function, $S$, and describes the increase in the photon energy due to the dissipation within the corona ($S = J + Q$). Following the standard Eddington approach, we can derive the solution of the radiative transfer equation by calculating its first two moments, i.e. integrating over solid angle. The zeroth moment gives: \begin{equation} H(\tau) = -\tau Q + C_1 \, . \end{equation} The integration constant $C_1$ represents the Eddington flux at the top of the corona ($\tau=0$), which is, by definition, $C_1 = F_{\rm acc}^{\rm tot}/4\pi$. Using Eqs.~\ref{eq:fc} and \ref{eq:chi}, the Eddington flux profile can then be written as: \begin{equation} H(\tau) = \frac{F_{\rm acc}^{\rm tot}}{4\pi}\left(1 - \frac{\chi \tau}{\tau_{\rm cor}}\right) \, . \label{eq:eddf} \end{equation} We note that at $\tau = \tau_{\rm cor}$ where the corona touches the cold disk, the downward flux corresponding to the illumination of the disk by the corona cancels out the upward flux of reprocessed/reflected radiation from the disk. Therefore the net radiation flux at $\tau_{\rm cor}$ is only that caused by internal dissipation in the cold disk $F_{\rm disk}$. The first moment of the radiation transfer is: \begin{equation} K(\tau) = \frac{F_{\rm acc}^{\rm tot}}{4\pi}\left(\tau - { \chi \tau^2 \over 2 \tau_{\rm cor}} \right) + C_2 \,. \label{eq:MSc} \end{equation} In the Eddington approximation, we adopt $K = J/3$ at every optical depth across the medium.
In addition, at the corona surface we have only outgoing radiation flux, as in a standard stellar atmosphere, so we have the condition $J(0) = 2 H(0)$, which allows us to determine the constant $C_2$. Setting $\tau = 0$ in Eqs.~\ref{eq:eddf} and~\ref{eq:MSc} we get $C_2= F_{\rm acc}^{\rm tot}/6\pi$, so that the mean intensity as a function of the optical depth in the optically thick corona is given by the expression: \begin{equation} J(\tau) = \frac{3 F_{\rm acc}^{\rm tot}}{4\pi} \left(\frac{2}{3} +\tau - { \chi \tau^2 \over {2 \tau_{\rm cor} }} \right) \, . \label{eq:jot} \end{equation} Equations~\ref{eq:eddf}, \ref{eq:MSc}, and \ref{eq:jot} are valid only in the warm corona (i.e. for $\tau<\tau_{\rm cor}$) and they are largely independent of the underlying accretion disk structure. However the same formalism can be used to extend these solutions deeper in the disk atmosphere in order to investigate the effects of the presence of the corona on the upper layers of the accretion disk. \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{rad_structure.pdf} \end{center} \caption{Typical radiation structure of the accretion disk corona and upper disk atmosphere. The upper panel shows the assumed dissipation profile, the middle and lower panels show the resulting radiation flux and pressure profiles, respectively.} \label{fig:profilrad} \end{figure} Since, as in the standard 1-D SS disk, all the flux is generated close to the equatorial plane \citep{rozanska1999}, we can neglect dissipation in the disk atmosphere and assume that all the disk flux is generated below this layer, at deeper optical depth ($\tau \gtrsim 10^5$), see Fig.~\ref{fig:coronadisk}. Here we consider only the properties of the upper non-dissipative atmosphere. For simplicity, we will not solve the flux dissipation equation down to the midplane as this would not change our final result for the temperature profile at the border between the warm corona and the disk.
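As a numerical cross-check that is not part of the formalism above, the closed-form coronal solutions are easy to evaluate directly. The short Python sketch below (an illustrative aid, with $F_{\rm acc}^{\rm tot}$ normalised to unity, using the $\chi$ and $\tau_{\rm cor}$ of model 1 of Table~\ref{tab:para} as an example) verifies the surface closure $J(0)=2H(0)$ and that the net flux at the coronal base reduces to the disk flux.

```python
import math

def eddington_flux(tau, chi, tau_cor, f_tot=1.0):
    """Eddington flux H(tau) inside the corona (eq:eddf)."""
    return f_tot / (4 * math.pi) * (1 - chi * tau / tau_cor)

def mean_intensity(tau, chi, tau_cor, f_tot=1.0):
    """Mean intensity J(tau) inside the corona (eq:jot)."""
    return 3 * f_tot / (4 * math.pi) * (2 / 3 + tau - chi * tau**2 / (2 * tau_cor))

chi, tau_cor = 0.98, 5.21  # parameters of model 1 in Table 1
# surface closure of the Eddington approximation: J(0) = 2 H(0)
assert abs(mean_intensity(0, chi, tau_cor) - 2 * eddington_flux(0, chi, tau_cor)) < 1e-12
# at the coronal base the net flux reduces to the disk flux: 4*pi*H = (1 - chi)*F_tot
assert abs(4 * math.pi * eddington_flux(tau_cor, chi, tau_cor) - (1 - chi)) < 1e-12
```

Both identities hold for any $(\chi,\tau_{\rm cor})$; the specific values only fix a concrete test case.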
The solution of Eq.~\ref{eq:rt} with $Q=0$ implies a constant Eddington flux below the corona. Then the value of the flux at $\tau_{\rm cor}$ sets the value of $H$ everywhere in the disk atmosphere: \begin{equation} H(\tau>\tau_{\rm cor})= \frac{F_{\rm disk}}{4\pi}= \frac{F_{\rm acc}^{\rm tot}}{4\pi}(1-\chi) \, . \label{eq:hatm} \end{equation} The first moment of the radiation transfer then gives: \begin{equation} K(\tau>\tau_{\rm cor})= \frac{F_{\rm acc}^{\rm tot}}{4\pi}(1-\chi)\tau+C_3 \, . \end{equation} The constant $C_3$ is determined from the condition of continuity of $K$ (and $J$) at $\tau_{\rm cor}$: \begin{equation} C_3= \frac{F_{\rm acc}^{\rm tot}}{4\pi}\left(\frac{2}{3}+\frac{\chi}{2} \tau_{\rm cor}\right) \, . \end{equation} Finally, the mean intensity in the disk atmosphere is: \begin{equation} J(\tau>\tau_{\rm cor})= \frac{3F_{\rm acc}^{\rm tot}}{4\pi}\left[(1-\chi)\tau +\frac{\chi}{2}\tau_{\rm cor}+\frac{2}{3}\right] \, . \label{eq:jda} \end{equation} We note that in the absence of a corona ($\chi=0$) this equation reduces to the standard mean intensity profile of grey atmospheres: \begin{equation} J_{\rm disk}= \frac{3F_{\rm disk}}{4\pi}\left(\tau+\frac{2}{3}\right) \, . \end{equation} In the presence of a warm corona, there is an additional component $J_{\rm cor}$ to the mean intensity of the disk that is due to the illumination of the atmosphere by the corona, $J=J_{\rm disk}+J_{\rm cor}$, which is: \begin{equation} J_{\rm cor}= \frac{3F_{\rm cor}}{4\pi}\left(\frac{\tau_{\rm cor}}{2}+\frac{2}{3}\right) \, . \end{equation} The typical vertical structure of the radiation properties of the corona/disk atmosphere system is sketched in Fig.~\ref{fig:profilrad}. \begin{table}[b!] \begin{center} \caption{Parameters of the fiducial solutions illustrated in Figs.~\ref{fig:profil} and \ref{fig:profilrho}. For all these models, the total accretion flux is set to $F_{\rm acc}^{\rm tot} = 3.3 \times 10^{14}$ erg s$^{-1}$ cm$^{-2}$.
The values of $\tau_{\rm cor}$ were chosen to be the maximum possible Thomson depth of a Compton cooled corona in hydrostatic equilibrium for the corresponding $\beta_{\rm m}$, $\mathcal{G}$ and $\chi$ (see Sect.~\ref{sec:comp}). $T_{\rm av}$ is the resulting average temperature of the corona estimated using Eq.~\ref{eq:ave}.} \begin{tabular}{cccccc} \hline Model number & $\beta_{\rm m}$ & $\mathcal{G}$ & $\chi$ & $\tau_{\rm cor}$ & $kT_{\rm av}$ (keV) \\ \hline 1 & 0 & 0 & 0.98 & 5.21 & 3.91\\ 2 & 50 & 0 & 0.98 & 19.9 & 0.42 \\ 3 & 50 & 5 & 0.98 & 8.76 & 1.68\\ 4 & 50 & 0 & 0.4 & 15.9 & 0.23\\ 5 & 50 & 2 & 0.4 & 7.08 & 0.88\\ 6 & 50 & 0 & 0.02 & 9.28 & 2.68 $\times 10^{-2}$\\ \hline \label{tab:para} \end{tabular} \end{center} \end{table} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{profilt.pdf} \end{center} \caption{Temperature profiles in the disk/corona system for the various values of the parameters $\chi$, $\tau_{\rm cor}$ of the six fiducial models reported in Table~\ref{tab:para}. Each curve is labelled with its model reference number, corresponding to that given in Table~\ref{tab:para}. } \label{fig:profil} \end{figure} \section{The temperature profile} \label{sec:temp} The next step is to determine the temperature profile across the disk and corona using the mean intensity field. The temperature profile is derived assuming that matter is in equilibrium with the radiation field. The resulting temperature depends on the radiative cooling mechanism operating in the gas. \subsection{Temperature profile in the warm corona} The radiative transfer solution for a grey atmosphere does not specify the temperature profile in a purely scattering medium, which is the case of the soft corona, i.e. for $\tau < \tau_{\rm cor}$. However, we can obtain the temperature profile by taking into account that the scattered photons must cool the corona through Comptonization in order to maintain the thermal heating/cooling balance.
When cooling is dominated by inverse Compton scattering in the Thomson regime by a thermal population at sub-relativistic temperature, the balance between cooling and heating reads: \begin{equation} J (\tau) {4 k T_{\rm cor}(\tau) \over m_{\rm e} c^2} = Q \, , \label{eq:bil} \end{equation} where $T_{\rm cor}$ is the corona temperature, $c$ the velocity of light, $k$ the Boltzmann constant, and $m_{\rm e}$ the electron rest mass. Since $J$ is an increasing function of the optical depth (see Eq.~\ref{eq:jot}) and $Q$ is assumed to be a constant, the electron temperature decreases with $\tau$, showing a temperature inversion: \begin{equation} kT_{\rm cor}(\tau)={ \chi m_{\rm e} c^2 \over 12 \tau_{\rm cor}} \left( {2 \over 3}+\tau - {\chi \tau^2 \over 2 \tau_{\rm cor}} \right)^{-1} \, . \label{eq:tem} \end{equation} The temperature stays close to its surface value down to $\tau \sim 2/3$, and then slowly decreases deeper in. Depending on the amount of dissipated energy, the soft corona can be quite hot down to moderate optical depth. The vertical temperature profile of the soft corona above the cold disk is determined by only two parameters: the total optical depth $\tau_{\rm cor}$ of the skin/corona, and the fraction $\chi$ of the flux dissipated inside the corona to the total flux dissipated in the disk and corona. In~Fig.~\ref{fig:profil} we show examples of full temperature profiles for different values of these parameters. Note that this profile follows strictly from the radiation properties and does not imply that hydrostatic equilibrium is satisfied in this multi-zone structure. We address this issue in the sections below. One-zone Comptonization models, such as those used to derive the observational properties of the warm corona, only constrain the average temperature of the skin, without any constraint on the temperature stratification.
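The temperature profile of Eq.~\ref{eq:tem} can be sketched numerically; the fragment below (illustrative only, again using the $\chi$ and $\tau_{\rm cor}$ of model 1 of Table~\ref{tab:para}) confirms the monotonic inward decrease, i.e. the temperature inversion with respect to the disk below.

```python
M_E_C2_KEV = 511.0  # electron rest energy in keV

def kT_cor(tau, chi, tau_cor):
    """Coronal temperature kT(tau) in keV (eq:tem)."""
    return (chi * M_E_C2_KEV / (12 * tau_cor)
            / (2 / 3 + tau - chi * tau**2 / (2 * tau_cor)))

chi, tau_cor = 0.98, 5.21  # model 1 of Table 1
profile = [kT_cor(tau_cor * i / 100, chi, tau_cor) for i in range(101)]
# the temperature decreases monotonically from the surface to the coronal base
assert all(a >= b for a, b in zip(profile, profile[1:]))
print(f"kT(0) = {profile[0]:.1f} keV, kT(tau_cor) = {profile[-1]:.2f} keV")
# prints: kT(0) = 12.0 keV, kT(tau_cor) = 2.41 keV
```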
Since in our models the temperature changes with optical depth, we should compare the average temperature of the warm skin, weighted by its optical depth, with that determined from observations: \begin{equation} T_{\rm av} = {1 \over \tau_{\rm cor}} \int_0^{\tau_{\rm cor}} T_{\rm cor}(\tau) d \tau= {\chi m_{\rm e} c^2 \over 12 k u \tau_{\rm cor}^2} \ln{\left(\frac{1+\frac{\chi}{u-1}}{1-\frac{\chi}{u+1}}\right)}\, , \label{eq:ave} \end{equation} where \begin{equation} u=\sqrt{1+\frac{4\chi}{3\tau_{\rm cor}}}. \end{equation} In the limit of large $\tau_{\rm cor}$, the average temperature given by Eq.~\ref{eq:ave} can be approximated within 10 percent (for $\tau_{\rm cor}>3$) as: \begin{equation} kT_{\rm av} \simeq {\chi m_{\rm e}c^2 \over 12 \tau_{\rm cor}^2} \ln{\left(\frac{3\tau_{\rm cor}}{2-\chi}\right)} \, . \label{eq:tav} \end{equation} The dependence of the average coronal temperature on $\tau_{\rm cor}$ is plotted in Fig.~\ref{fig:tvstau} for several values of the $\chi$ parameter. We can clearly see that a warm skin, cooled by Comptonization, can be produced even for large values of the coronal optical depth provided that most of the accretion power is dissipated in the warm corona. The coronal temperature however decreases with decreasing $\chi$ and increasing $\tau_\mathrm{cor}$. As a consequence, at small $\chi$, i.e. strong disk dissipation, a hot corona with pure scattering is only consistent with the most moderate observations (i.e. $\tau_{\rm cor}$ of the order of a few). From the energy equilibrium requirement we can produce a layer of $T_{\rm cor} \sim 0.5-1$ keV and large Thomson depth $\tau_{\rm cor } \leqslant 15 $ only if most of the accretion power is dissipated in the layer. This appears to be consistent with the observational results of \cite{petrucci13} in the case of Mrk~509.
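The averaging in Eq.~\ref{eq:ave} and its large-depth approximation Eq.~\ref{eq:tav} can be checked against the $kT_{\rm av}$ values quoted in Table~\ref{tab:para}; the sketch below (a numerical aid, not part of the derivation) reproduces models 1 and 2 and confirms the stated 10 percent accuracy of the approximation.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def kT_av(chi, tau_cor):
    """Optical-depth-weighted average coronal temperature in keV (eq:ave)."""
    u = math.sqrt(1 + 4 * chi / (3 * tau_cor))
    return (chi * M_E_C2_KEV / (12 * u * tau_cor**2)
            * math.log((1 + chi / (u - 1)) / (1 - chi / (u + 1))))

def kT_av_approx(chi, tau_cor):
    """Large tau_cor approximation (eq:tav)."""
    return chi * M_E_C2_KEV / (12 * tau_cor**2) * math.log(3 * tau_cor / (2 - chi))

# models 1 and 2 of Table 1: (chi, tau_cor) -> kT_av in keV
assert abs(kT_av(0.98, 5.21) - 3.91) < 0.02
assert abs(kT_av(0.98, 19.9) - 0.42) < 0.01
# the approximate formula agrees to better than 10% for tau_cor > 3
assert abs(kT_av_approx(0.98, 5.21) / kT_av(0.98, 5.21) - 1) < 0.10
```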
These authors infer such parameters for the soft corona and also argue that the observed relative luminosity of the disk and soft corona implies that the disk is passive (i.e. $\chi\simeq 1$). The case of a passive disk provides the maximum achievable temperature for a given coronal depth. We note that some numerical simulations of accretion disks also show a stronger dissipation in the outer layers of the disk \citep{ht2011}. We also note that the method presented here is reasonably accurate. Our results can be compared for example to the Monte-Carlo simulations presented in Malzac, Beloborodov \& Poutanen (2001), in the case of a slab corona with Thomson depth of $\tau_{\rm cor}=3$ above a passive disk. They obtained an average temperature $\simeq 9$ keV (see their figure~2) which is in excellent agreement with the present results. On the other hand, for optically thin coronae the temperature becomes mildly relativistic; our approximations break down and the simple analytical model underestimates the radiative cooling. For $\tau_{\rm cor}=0.5$ and $\chi=1$, Eq.~\ref{eq:ave} gives $kT_{\rm av}\simeq103$ keV while Malzac, Beloborodov \& Poutanen (2001) obtain $kT_{\rm av}\simeq70$ keV with their detailed calculation. \begin{figure}[h!] \begin{center} \hspace{-0.5cm} \includegraphics[width=9cm]{ktvstau.pdf} \end{center} \caption{Average coronal temperature vs. optical depth. The full curves show the dependence of the average coronal temperature on the total optical depth of the corona for various values of the fraction of total power dissipated in the corona, $\chi$, as labelled. The dashed curves represent the hydrostatic equilibrium solutions providing the largest possible Thomson depth of the Compton cooled corona for a given $\chi$ and magnetic to gas pressure ratio $\beta_{\rm m}$ (see Sect~\ref{sec:comp}).
Each of the dashed curves shows the track of the solutions in the $kT_{\rm av}$-$\tau_{\rm cor}$ plane for a fixed value of $\beta_{\rm m}$ (as labelled) when $\chi$ varies. } \label{fig:tvstau} \end{figure} \subsection{Temperature profile in the disk atmosphere}\label{sec:tempatm} Deep in the optically thick atmosphere of the disk ($\tau>\tau_{\rm cor}$) we can assume that radiation is fully thermalised ($J = \sigma T^4/\pi $, where $\sigma$ is the Stefan-Boltzmann constant). Using Eq.~\ref{eq:jda}, we obtain the following temperature structure: \begin{equation} T_{\rm atm}^4= \frac{3F_{\rm acc}^{\rm tot}}{4\sigma} \left[(1-\chi)\tau+ \frac{2}{3}+\frac{\chi\tau_{\rm cor}}{2}\right]\, . \label{eq:tther1} \end{equation} This expression can be rewritten as: \begin{equation} T_{\rm atm}^4=\frac{3}{4}T_{\rm disk}^4\left(\tau+\frac{2}{3}\right)+\frac{\pi J_{\rm cor}}{\sigma} \, . \label{eq:tther2} \end{equation} The first term on the right hand side of Eq.~\ref{eq:tther2} corresponds to the standard temperature structure for a fully thermalised grey atmosphere in the Eddington approximation. $T_{\rm disk}$ is the disk effective temperature in the absence of a warm corona, calculated from the intrinsic disk flux $F_{\rm disk}$. The constant second term represents the increase in disk temperature due to the coronal illumination. Unlike $T_{\rm cor}$, the temperature profile of the disk atmosphere, $T_{\rm atm}$, depends on the accretion flux $F_{\rm acc}^{\rm tot}$. Using standard accretion disk theory, $F_{\rm acc}^{\rm tot}$ can be estimated as a function of the mass of the black hole, $M_{\rm BH}$, and the Eddington luminosity fraction, $\dot{m}=L/L_{\rm Edd}$, at a given radius $R=r GM_{\rm BH}/c^2$: \begin{equation} F_{\rm acc}^{\rm tot}\simeq 8\times 10^{26} \, \frac{\dot m}{m} \frac{f}{r^{3}} \quad {\rm erg \, s}^{-1} \, {\rm cm}^{-2} \, , \end{equation} where $m=M_{\rm BH}/M_{\odot}$, $f=2r_i(1-\sqrt{r_i/r})$ and $r_i$ is the inner radius of the disk expressed in gravitational radii.
In Fig.~\ref{fig:profil}, the profile in the disk atmosphere was calculated for a black hole of mass $M_{\rm BH} = 1.4 \times 10^8 M_{\odot}$ \citep{liu1983}, at 10 gravitational radii from the black hole, and at an accretion rate equal to 2\% of the Eddington accretion rate, and we set $r_i=5$. For these parameters the accretion flux is $F_{\rm acc}^{\rm tot} = 3.3 \times 10^{14}$ erg s$^{-1}$ cm$^{-2}$. As can be seen in Fig.~\ref{fig:profil}, the temperature profile in the upper layers of the disk flattens and departs from the standard grey atmosphere temperature profile due to the strong coronal illumination only when most of the power is dissipated in the warm corona. The absorption of coronal photons by the cold disk only slightly increases the disk effective temperature, which remains significantly lower than that of the corona. Due to additional dissipation in the soft corona, all temperature profiles show a strong temperature inversion in the disk/corona system. At the transition between the disk and corona there is a discontinuity due to the change in cooling mechanism. The amplitude of the temperature jump can be estimated as: \begin{equation} \left. \frac{T_{\rm atm}}{T_{\rm cor}} \right|_{\tau=\tau_{\rm cor}} \simeq 0.12 \left( \frac{\dot{m}}{m} \frac{f}{r^3}\right)^{1/4} \frac{\tau_{\rm cor}}{\chi}\left[\frac{2}{3}+\tau_{\rm cor}\left(1-\frac{\chi}{2}\right)\right]^{5/4} \, . \end{equation} This ratio remains lower than unity over a very broad range of black hole masses, mass accretion rates and disk radii. In practice, only when $\chi$ vanishes can the temperature of the disk at $\tau_{\rm cor}$ become comparable to, or even exceed, that of the corona. We note that in reality the change in cooling mechanism might not be as abrupt as we have assumed here and the temperature discontinuity could be smoothed.
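The fiducial flux and the temperature-jump estimate above are simple to evaluate; the sketch below (an illustrative cross-check using the fiducial parameters quoted in the text) recovers $F_{\rm acc}^{\rm tot} \simeq 3.3 \times 10^{14}$ erg s$^{-1}$ cm$^{-2}$ and confirms that the disk stays colder than the corona for a representative model of Table~\ref{tab:para}.

```python
def f_acc_tot(m, mdot, r, r_i):
    """Total accretion flux in erg s^-1 cm^-2 at radius r (in gravitational radii)."""
    f = 2 * r_i * (1 - (r_i / r) ** 0.5)
    return 8e26 * (mdot / m) * f / r**3

def jump_ratio(m, mdot, r, r_i, chi, tau_cor):
    """T_atm / T_cor at tau = tau_cor, from the estimate in the text."""
    f = 2 * r_i * (1 - (r_i / r) ** 0.5)
    return (0.12 * (mdot / m * f / r**3) ** 0.25 * tau_cor / chi
            * (2 / 3 + tau_cor * (1 - chi / 2)) ** 1.25)

# fiducial parameters: M_BH = 1.4e8 Msun, mdot = 0.02, r = 10, r_i = 5
F = f_acc_tot(1.4e8, 0.02, 10, 5)
assert abs(F - 3.3e14) / 3.3e14 < 0.02
# the disk remains colder than the corona, e.g. for model 2 (chi=0.98, tau_cor=19.9)
assert jump_ratio(1.4e8, 0.02, 10, 5, 0.98, 19.9) < 1
```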
The computations of the disk/corona transition are difficult, but it appears that the temperature drop is always very sharp even if all the radiation processes are fully taken into account \citep{jurek2000,ballantyne2001,nayakshin2001,rozanska2002}. \section{Hydrostatic equilibrium for the corona/disk system} \label{sec:hyd} The question arises whether such a two-zone system can be in hydrostatic equilibrium. Here, we derive analytical formulae for the pressure profile in the disk-corona system. We use the standard equation of vertical hydrostatic equilibrium in a geometrically thin disk \citep{madej2000}. The total pressure, $P$, is the sum of the gas pressure $P_{\rm gas}$, the radiation pressure $P_{\rm rad}$, and the magnetic pressure $P_{\rm mag}$. Locally, the total pressure has to balance the gravitational force: \begin{equation} {dP_{\rm gas} \over d{\tau}} + {dP_{\rm mag} \over d{\tau}}= {1 \over \kappa_{\rm es}} {GM_{\rm BH} \over R^3} z - {dP_{\rm rad} \over d{\tau}}, \label{eq:presseq} \end{equation} where $G$ is the gravitational constant, and $\kappa_{\rm es}$ the Thomson scattering opacity. In general the total opacity should be taken into account but, for simplicity, we consider only Thomson scattering and set $\kappa_{\rm es}=0.34$ cm$^{2}$~g$^{-1}$. In order to obtain analytical solutions, we will assume that the warm corona is geometrically thin compared to the scale-height of the disk $Z_{\rm disk}$, so that the vertical distance to the equatorial plane, $z$, can be considered constant, $z=Z_{\rm disk}$. This implies that the gravitational force is constant along the vertical direction inside the corona and the upper layers of the disk. For the grey atmosphere, the radiation pressure gradient depends on the flux expressed in Eq.~\ref{eq:eddf} for the corona and Eq.~\ref{eq:hatm} for the disk atmosphere: \begin{equation} {dP_{\rm rad} \over d{\tau}} = { 4 \pi \over c } H \,\,.
\label{eq:prad1} \end{equation} \subsection{Pressure and density profile in the warm corona}\label{sec:rhocor} In the corona, the radiation pressure profile is obtained directly from Eq.~\ref{eq:jot}: \begin{equation} P_{\rm rad} =\frac{4\pi J}{3c}= { F_{\rm acc}^{\rm tot} \over c } \left(\tau + {2 \over 3} - {\chi \tau^2 \over 2 \tau_{\rm cor}} \right) \,\,. \label{eq:prad} \end{equation} We assume a uniform magnetic to gas pressure ratio $\beta_{\rm m}$: \begin{equation} P_{\rm mag}= {B_{\rm mag}^2 \over 8 \pi} = \beta_{\rm m} P_{\rm gas} \,\,. \label{eq:mag} \end{equation} Solving the equation of hydrostatic equilibrium~(\ref{eq:presseq}), we find an expression for the gas pressure structure assuming, as a boundary condition, that $P_{\rm gas}(\tau=0) =0$: \begin{equation} P_{\rm gas} =\frac{ F_{\rm acc}^{\rm tot}} {(1+\beta_{\rm m})c} \left( \mathcal{G} \tau + {\chi \tau^2 \over 2 \tau_{\rm cor}} \right) \, , \label{eq:pgas} \end{equation} where the constant $\mathcal{G}$ represents the ratio of the pressure forces of the gas and magnetic field to that of radiation at the surface of the corona: \begin{equation} \left . \mathcal{G} = \left(\frac{dP_{\rm gas}}{d\tau}+\frac{dP_{\rm mag}}{d\tau}\right)/{\frac{dP_{\rm rad}}{d\tau}}\right|_{\tau=0}. \label{eq:gg2} \end{equation} The radiation pressure force dominates the support of the corona at all depths for $\mathcal{G}<1-2\chi$. $\mathcal{G}$ can also be expressed as: \begin{equation} \mathcal{G} = { {G M_{\rm BH}} \over R^3} {{c Z_{\rm disk}} \over {\kappa_{\rm es} F_{\rm acc}^{\rm tot}}}-1\, . \label{eq:gg} \end{equation} The first term on the right-hand side of Eq.~\ref{eq:gg} is the ratio of the gravitational to radiation pressure force at the surface of the corona. For a corona in hydrostatic equilibrium this ratio is necessarily larger than unity and consequently $\mathcal{G} \ge 0$.
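The threshold $\mathcal{G}<1-2\chi$ for radiation-pressure support follows from comparing the gradients of Eqs.~\ref{eq:prad} and \ref{eq:pgas}; the sketch below (units normalised so that $F_{\rm acc}^{\rm tot}/c=1$, parameters chosen for illustration) checks it numerically across the corona.

```python
def dprad_dtau(tau, chi, tau_cor):
    """Radiation pressure gradient in the corona (eq:prad1 with eq:eddf), F/c = 1."""
    return 1 - chi * tau / tau_cor

def dpgasmag_dtau(tau, chi, tau_cor, g):
    """Gas plus magnetic pressure gradient, (1 + beta_m) dP_gas/dtau (eq:pgas)."""
    return g + chi * tau / tau_cor

# example satisfying the criterion: G = 0 < 1 - 2*chi = 0.2
chi, tau_cor, g = 0.4, 7.08, 0.0
taus = [tau_cor * i / 100 for i in range(101)]
# radiation pressure support dominates at every depth in the corona
assert all(dprad_dtau(t, chi, tau_cor) > dpgasmag_dtau(t, chi, tau_cor, g) for t in taus)
```

The worst case is the coronal base, where the inequality reduces to $1-\chi > \mathcal{G}+\chi$, i.e. exactly $\mathcal{G}<1-2\chi$.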
The disk half-thickness $Z_{\rm disk}$ is a crucial parameter that controls the hydrostatic equilibrium in the vertical direction \citep{roza1999}. The case $\mathcal{G}=0$ gives the minimum disk thickness, for which the total disk pressure can be balanced by the gravitational force: \begin{equation} Z^{\rm min}_{\rm disk}= {\kappa_{\rm es}\, F_{\rm acc}^{\rm tot} R^3 \over G M_{\rm BH} \, c} = \frac{3}{2} \frac{GM_{\rm BH}}{c^2}f \dot{m}\,. \end{equation} For a thinner disk, matter would flow out of the system \citep{witt97}. The density profile clearly depends on the assumed disk geometrical thickness $Z_{\rm disk}$, and we can derive this density from the equation of state as: \begin{equation} \rho = {{\mu m_{\rm H}} \over kT_{\rm cor}} {F_{\rm acc}^{\rm tot} \over (1+\beta_{\rm m}) c} \left( \mathcal{G} \tau + {{\chi \tau^2} \over 2 \tau_{\rm cor}} \right)\,\,, \label{eq:rho} \end{equation} where $\mu$ is the mean molecular weight, assumed to be 0.5, and $m_{\rm H}$ is the mass of the hydrogen atom. The density increases with $\tau$ and from Eq.~\ref{eq:rho} we see that setting the disk parameter $\mathcal{G}=0$ (or equivalently $Z_{\rm disk}=Z^{\rm min}_{\rm disk}$) minimises the density in the corona. This is illustrated in Fig.~\ref{fig:profilrho} which displays examples of density profiles obtained for the parameters listed in Table~\ref{tab:para}. \subsection{Pressure and density profiles in the disk atmosphere}\label{sec:diskatm} The pressure and density profiles in the disk atmosphere below the corona ($\tau>\tau_{\rm cor}$) can be estimated in a similar way. The radiation pressure profile is given by Eq.~\ref{eq:jda}: \begin{equation} P_{\rm rad}= \frac{F_{\rm acc}^{\rm tot}}{c}\left[(1-\chi)\tau +\frac{\chi}{2}\tau_{\rm cor}+\frac{2}{3}\right].
\label{eq:pda} \end{equation} The pressure equilibrium equation~(\ref{eq:presseq}) is solved using Eq.~\ref{eq:hatm} and assuming continuity of pressure at the disk/corona transition: \begin{equation} P_{\rm gas}= \frac{F_{\rm acc}^{\rm tot}}{(1+\beta_{\rm m})c}\left[(\mathcal{G}+\chi)\tau-\frac{\chi}{2}\tau_{\rm cor}\right] \, . \end{equation} The density profile follows: \begin{equation} \rho = {{\mu m_{\rm H}} \over kT_{\rm atm}} {F_{\rm acc}^{\rm tot} \over (1+\beta_{\rm m}) c} \left[(\mathcal{G}+\chi)\tau-\frac{\chi}{2}\tau_{\rm cor}\right] \, . \label{eq:rhoatm} \end{equation} Fig.~\ref{fig:profilrho} shows some examples of density profiles around the disk transitions for our fiducial value of the total accretion flux. In all cases, we observe a discontinuity in density at $\tau_{\rm cor}$ which has the same amplitude as the temperature jump discussed in Sect.~\ref{sec:tempatm}. Due to the temperature jump and the condition of pressure equilibrium at the disk/corona transition, the disk atmosphere tends to be much denser than the corona. \begin{figure}[t!] \begin{center} \hspace{-0.5cm} \vspace{-1.0cm} \includegraphics[width=9cm]{profilrho.pdf} \end{center} \caption{Density profile around the disk corona transition for the six fiducial models detailed in Table~\ref{tab:para}. The curves are labelled by their model number.} \label{fig:profilrho} \end{figure} \subsection{Limitations} In Sect.~\ref{sec:hyd} we have estimated the properties of a warm corona and disk atmosphere in pressure equilibrium. Our proposed treatment presents the advantage of being very simple. The drawbacks of this simplicity are some limitations that we now briefly discuss.
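That the density jump mirrors the temperature jump is a direct consequence of the continuity of the gas pressure at $\tau_{\rm cor}$ combined with the equation of state $\rho \propto P_{\rm gas}/T$. The fragment below (gas pressure in units of $F_{\rm acc}^{\rm tot}/[(1+\beta_{\rm m})c]$, parameters of model 2 of Table~\ref{tab:para} used as an example) verifies the continuity.

```python
def pgas_corona(tau, chi, tau_cor, g):
    """Gas pressure in the corona (eq:pgas), units F_acc / ((1 + beta_m) c)."""
    return g * tau + chi * tau**2 / (2 * tau_cor)

def pgas_atm(tau, chi, tau_cor, g):
    """Gas pressure below the corona, same units."""
    return (g + chi) * tau - chi * tau_cor / 2

chi, tau_cor, g = 0.98, 19.9, 0.0  # model 2 of Table 1
# gas pressure is continuous at the disk/corona transition, so the density
# ratio rho_atm / rho_cor there equals the inverse temperature ratio T_cor / T_atm
assert abs(pgas_corona(tau_cor, chi, tau_cor, g)
           - pgas_atm(tau_cor, chi, tau_cor, g)) < 1e-12
```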
First, although the effects of the disk on the corona and disk atmosphere are taken into account via the $\mathcal{G}$ parameter, the present approach does not allow us to guarantee that there is indeed a disk solution below the atmosphere that both has the required $\mathcal{G}$ parameter and connects smoothly to the atmosphere. A full calculation of the vertical stratification of the disk down to the mid-plane, including also dissipation, would be required in order to obtain such self-consistent solutions and calculate $Z_{\rm disk}$ from first principles. We also note that for the density profiles with the lowest densities in the corona, our assumption of constant gravity may be inaccurate. Indeed for these profiles the scale height of the corona: \begin{equation} H_{\rm cor}\sim\int_{\tau_{\rm cor}/2}^{\tau_{\rm cor}}(\kappa_{\rm es}\rho)^{-1}d\tau \, , \end{equation} can be comparable to, or even larger than, $Z_{\rm disk}$. In this case the gravity at the surface of the corona is significantly larger than at the bottom. Taking into account the height dependent gravity would require a numerical resolution of the equilibrium which is beyond the scope of this paper, but we can anticipate its effects. Indeed, if we assume that the disk/corona transition is located at a height $Z_{\rm disk}$, the increased gravity force in the upper corona will necessitate a larger pressure in order to sustain the equilibrium, and as a consequence the coronal density will also be increased compared to our current estimates. In particular this effect may affect our results for small $Z_{\rm disk}$ (or small $\mathcal{G}$), which may underestimate the pressure and density in the corona by a factor of up to a few. Finally our calculation of the pressure equilibrium assumes that the opacity is dominated by electron scattering. If absorption becomes important, both pressure and density will be reduced compared to our simple estimates.
We have checked a posteriori that for the fiducial models presented in Figs.~\ref{fig:profil} and~\ref{fig:profilrho} the Kramers free-free absorption opacity $\kappa_{\rm ff}\simeq 6 \times 10^{22} \rho T^{-7/2}$ cm$^{2}$ g$^{-1}$ is negligible compared to $\kappa_{\rm es}$ both in the corona and in the atmosphere of the disk. For these models our estimated pressure profiles are not affected by the approximation of a pure scattering medium. We stress, however, that since even in the disk atmosphere the medium is only weakly absorbing, the assumption of fully thermalised radiation used to infer the temperature profile of the atmosphere may break down close to the disk/corona transition, making the transition much more gradual than our simplified calculation suggests. A detailed investigation of all these issues is deferred to future work. \section{Constraints from the requirement of a Compton cooled corona} \label{sec:comp} In the previous sections we have determined the temperature, pressure and density profiles of the corona under the assumption that the dominant cooling mechanism is Compton scattering. This assumption was motivated by observational results based on modelling the coronal emission with Comptonization models. We now determine the parameter regimes for which this assumption remains valid. Besides Compton cooling, the most efficient cooling mechanism is expected to be bremsstrahlung, which must remain negligible compared to Compton cooling. Here we estimate the ratio of the Compton cooling rate $\Lambda_{\rm C} = 16 \pi kT / (m_{\rm e} c^2) \, \rho \, \kappa_{\rm es} J(\tau) $ (in erg s$^{-1}$ cm$^{-3}$) to the bremsstrahlung cooling rate $\Lambda_{\rm B} = B \, \rho^2 T^{1/2}$, where $B = 6.6 \times 10^{20}$ in CGS units. This ratio must remain larger than unity across the warm corona.
Using Eqs.~\ref{eq:jot} and \ref{eq:tem} we get the condition: \begin{equation} {\Lambda_{\rm C} \over \Lambda_{\rm B}} = A \frac{1+\beta_{\rm m}}{\left(\tau_{\rm cor}/\chi\right)^{3/2}} \left( {2 \over 3} + \tau - { \chi \tau^2 \over 2 \tau_{\rm cor} }\right)^{-\frac{1}{2}}\left(\mathcal{G}\tau+\frac{\chi\tau^2}{2\tau_{\rm cor}}\right)^{-1}\ge 1, \label{eq:cond} \end{equation} with the constant $A = \sqrt{k m_{\rm e}} \, c^2 \kappa_{\rm es} / (\sqrt{12} B \, \mu \, m_{\rm H})\simeq 57$. The above ratio is a decreasing function of optical depth; therefore condition~\ref{eq:cond} is verified in the whole corona if it is verified at $\tau=\tau_{\rm cor}$. The maximum possible Thomson depth for a Compton cooled corona is obtained by requiring that the depth $\tau_{\rm cor}$ corresponds to the transition from the Compton-dominated to the bremsstrahlung-dominated regime, i.e. by solving $\left. \Lambda_{\rm C} / \Lambda_{\rm B} \right|_{\tau=\tau_{\rm cor}}=1$. The coronal optical depths of the fiducial models of Table~\ref{tab:para}, leading to the profiles presented in Figs.~\ref{fig:profil} and \ref{fig:profilrho}, were determined in this way and correspond to the deepest possible Compton corona for the given set of $\chi$, $\beta_{\rm m}$ and $\mathcal{G}$. From Eq.~\ref{eq:cond}, we also see that the case $\mathcal{G} = 0$, or equivalently $Z_{\rm disk}=Z^{\rm min}_{\rm disk}$, gives the most favorable condition for a purely Compton cooled corona because it corresponds to the minimum of the gas pressure and density. Therefore, in order to find regimes where our assumptions fail, it is enough to calculate this ratio at the base of the corona for $\mathcal{G}=0$: \begin{equation} \left. {\Lambda_{\rm C} \over \Lambda_{\rm B}} \right|_{\tau=\tau_{\rm cor}} \cong \frac{\sqrt{8}A (1+\beta_{\rm m}) }{\tau_{\rm cor}^3 \sqrt{2/\chi-1}} \ge 1.
\label{eq:base} \end{equation} We stress that this equation represents a necessary condition for Compton cooling dominance and does not depend on any disk parameters. \begin{figure} \begin{flushright} \hspace{-5.7mm} \vspace{-3.0mm} \includegraphics[width=9.5cm]{compt_ratio.pdf} \end{flushright} \caption{The importance of Compton scattering over bremsstrahlung at the base of the corona as a function of $\tau_{\rm cor}$ (Eq.~\ref{eq:base}). Solid lines within each bundle are computed for $\chi=0.18, 0.26, 0.40, 0.57, 0.67$ and 0.86. The three bundles correspond to different values of the magnetic pressure ratio: $\beta_{\rm m}=0, 10$ and 100. The horizontal dashed line represents the case $ \Lambda_{\rm C} / \Lambda_{\rm B} =1 $, while vertical dotted lines mark the values of the coronal optical depth at which this occurs. } \label{fig:compt} \end{figure} Fig.~\ref{fig:compt} shows the Compton to bremsstrahlung cooling ratio as a function of coronal optical depth for three different values of the magnetic pressure ratio in the case $\mathcal{G}=0$. For each value of $\beta_{\rm m}$ we consider various values of $\chi$ to show the marginal influence of this parameter. The maximum possible Thomson depth of the corona in the case $\mathcal{G}=0$ thus provides an absolute upper limit on the depth of the corona. This upper limit can be estimated as: \begin{equation} \tau_{\rm cor}\simeq 5.4 \, (1+\beta_{\rm m})^{1/3}(2/\chi-1)^{-1/6} \, .
\label{eq:taumax} \end{equation} The corresponding average temperature of this `deepest' possible corona can then be estimated using Eq.~\ref{eq:tav}: \begin{equation} kT_{\rm av}\simeq 4 \, \frac{\chi^{\frac{2}{3}}(2-\chi)^{\frac{1}{3}}}{(1+\beta_{\rm m})^{\frac{2}{3}}} \left[1-1.2\times 10^{-4} \ln{\frac{\chi^{-1}(2-\chi)^7 }{(1+\beta_{\rm m})^{2}}}\right] \, {\rm keV.} \end{equation} Eq.~\ref{eq:taumax} shows that when the magnetic pressure is zero, bremsstrahlung becomes the dominant emission process as soon as $\tau_{\rm cor} \gtrsim 5$. Such solutions are therefore inconsistent with an unmagnetised, static, Compton dominated, hot, and optically thick corona. Relaxing one of these constraints might produce a consistent solution. For instance, the corona might not be in static equilibrium. The case of outflowing coronae \citep{witt97} in the framework of our model will be investigated in a future work. Alternatively, additional magnetic pressure helps the gas pressure to balance gravity, and hence produces solutions with lower density, i.e. with a lower bremsstrahlung cooling rate. Fig.~\ref{fig:tvstau} maps the iso-contours of $\chi$ and $\beta_{\rm m}$ for the deepest possible corona in the $\tau_{\rm cor}$ vs $kT_{\rm av}$ plane. This diagram allows one to estimate the minimum magnetic-to-gas pressure ratio required to produce a corona with a given optical depth and temperature. For instance, the warm corona recently observed in Mrk 509 by \citet{petrucci13}, with optical thickness $\tau_{\rm cor} \sim 15$ and temperature $T_{\rm av}\simeq0.5$ keV, implies a magnetic-to-gas pressure ratio $\beta_{\rm m}>30$. For $M_{\rm BH}=1.4 \times 10^8 M_\odot$, a distance of 10 gravitational radii, and an accretion rate equal to 2\% of the Eddington accretion rate, this implies that the average magnetic field in the corona must be larger than $10^3$ G.
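The numerical constants above can be checked directly. The following is a minimal sketch, assuming $\kappa_{\rm es}=0.34$ cm$^2$ g$^{-1}$ and $\mu=0.5$ (values not stated explicitly in this section):

```python
# Cross-check the constant A ~ 57 and the tau_cor upper limit that
# follows from Lambda_C/Lambda_B = 1 at tau = tau_cor with G = 0.
# All quantities in CGS; kappa_es and mu are assumed values.
import math

k   = 1.381e-16   # Boltzmann constant [erg/K]
m_e = 9.109e-28   # electron mass [g]
m_H = 1.673e-24   # hydrogen mass [g]
c   = 2.998e10    # speed of light [cm/s]
B   = 6.6e20      # bremsstrahlung constant [CGS]
kappa_es = 0.34   # electron-scattering opacity [cm^2/g] (assumed)
mu  = 0.5         # mean molecular weight (assumed)

A = math.sqrt(k * m_e) * c**2 * kappa_es / (math.sqrt(12) * B * mu * m_H)
print(f"A = {A:.0f}")                              # ~57

def tau_max(beta_m, chi):
    """Deepest Compton-cooled corona: sqrt(8) A (1+beta) / tau^3 ... = 1."""
    return (math.sqrt(8) * A * (1 + beta_m))**(1/3) * (2/chi - 1)**(-1/6)

print(f"prefactor = {(math.sqrt(8) * A)**(1/3):.1f}")   # ~5.4
# Mrk 509-like corona: beta_m ~ 30 indeed allows tau_cor ~ 15
print(f"tau_max(beta_m=30, chi=0.57) = {tau_max(30, 0.57):.1f}")
```

The same function reproduces the unmagnetised limit $\tau_{\rm cor}\lesssim 5$ for $\beta_{\rm m}=0$.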
Since the calculations are based on the assumption $\mathcal{G} = 0$, they provide necessary conditions for Compton cooling dominance. If Compton cooling dominance is not obtained, no choice of $Z_{\rm disk}$ can restore it; if it is obtained, there is always a range of $Z_{\rm disk} \geqslant Z ^{\rm min}_{\rm disk}$ for which the hydrostatic equilibrium is sustained and the density is low enough for Compton cooling to dominate over bremsstrahlung cooling. We note, however, that in the case of minimal density in the corona ($\mathcal{G}=0$) our approximation of a constant gravitational force in the vertical direction can break down (see Sect.~\ref{sec:rhocor}). As a consequence, our simple calculations may underestimate the density and the bremsstrahlung cooling rate. Relaxing the assumption of constant gravity may reduce the maximum possible Thomson depth of the corona and/or require even stronger magnetic fields to maintain the dominance of Compton cooling. \section{Discussion and Conclusion} \label{sec:dis} In this paper, we put constraints on the existence of a warm, dissipating, optically thick, Compton cooled corona in hydrostatic equilibrium with a cold accretion disk. We neglect synchrotron photons, thermal conduction and ionisation in the warm skin, but we have checked that those processes are not important compared to Comptonization. In our computation of the hydrostatic equilibrium in the vertical direction, the radiation pressure component is fully taken into account, contrary to the numerical simulations by \citet{schnit2013,uzdensky13}. Our simple analytical solution for the warm and dissipative corona above the cold disk shows that a stable temperature inversion is possible in the optically thick case, contrary to the intuitive expectations based on the diffusion approximation. This is due to the fact that the solution for a purely scattering atmosphere does not specify the temperature.
It only gives the radiation density, which rises with the optical depth, as expected. The temperature in the warm corona is determined a posteriori, from the Compton cooling balance, and it increases toward the surface of the warm skin. We have shown that such a corona can reach temperatures of 0.5--1 keV for the assumed values of constant dissipation in a skin of moderate optical depth ($\tau_{\rm cor}<10$). The most extreme parameters, e.g. a coronal temperature of 0.5 keV at optical depth 15--20, as observed in the case of Mrk~509 \citep{petrucci13}, can also be reproduced provided that the disk is passive (i.e. almost all of the accretion power is dissipated in the corona). Nevertheless, if this zone is in hydrostatic equilibrium with the cold accretion disk, the maximum optical depth of the corona cannot exceed $\sim 5$ without additional magnetic pressure. This upper limit is independent of the disk parameters. A higher optical depth of the warm skin is possible if the gas pressure is lowered, by magnetic pressure or possibly by mass outflow. In this paper, we illustrate only the first case, i.e. a non-zero magnetic field strength. When the ratio of magnetic pressure to gas pressure is 100, the maximal optical depth of the warm corona is around 20, which is consistent with some observations. We conclude that, in the absence of magnetic pressure, additional dissipation in the outer layer is able to heat up the corona, but the requirement of hydrostatic balance with the disk puts a strong limit on the coronal optical thickness. This limit is independent of the accretion disk parameters, i.e. its accretion rate and the mass of the central black hole. In this context, the X-ray observations of optically thick ($\tau_{\rm cor} > 5$), warm coronae have a strong implication for the disk/corona system: either strong magnetic fields or vertical outflows are required to stabilise the system.
Moreover, the simple conditions discussed in this paper are the minimum requirements for the existence of a thick corona, and further modelling of the disk/corona interaction may impose even more stringent constraints on the existence of such a medium. \begin{acknowledgements} This research was conducted within the scope of the HECOLS International Associated Laboratory, supported in part by the Polish NCN grant DEC-2013/08/M/ST9/00664. AR and BC were supported by NCN grants No. 2011/03/B/ST9/03281, 2013/10/M/ST9/00729, and by Ministry of Science and Higher Education grant W30/7.PR/2013. They have received funding from the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement No.312789. This research has also received funding from PNHE in France, and from the French National Research Agency: CHAOS project ANR-12-BS05-0009 (http://www.chaos-project.fr). JM and POP also acknowledge funding from CNRS/PICS. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} The nearby (\textasciitilde 9 Mpc) dwarf starburst galaxy Henize 2--10 exhibits intense star formation \cite[e.g.,][]{alle76}, while in the center of the galaxy an X-ray point source \citep{kobu10he2} and a relatively luminous radio point source \citep{kobu99, john03he2} were found to be co-spatial, suggesting the existence of an accreting low-mass active galactic nucleus (AGN) with a black hole (BH) of mass \textasciitilde $10^6 \ensuremath{M_{\sun}}$ \citep{rein11he2, rein12he2}. This represented the first possible detection of an AGN in a dwarf starburst galaxy. Even if a large fraction of dwarf galaxies host massive BHs, they are challenging to detect as AGNs because the AGN emission is faint and its signatures can be swamped by surrounding star formation \citep[e.g.,][]{rein13dwarf}; X-ray observations can be one of the most effective methods for identifying AGNs in dwarf galaxies \citep[e.g.,][]{rein14mrk709,lemo15xdwarf,secr15xdwarf}. If the existence of an AGN in He 2--10 is confirmed, it would serve as one of the best possible analogs for BH and galaxy growth in the early history of the Universe \cite[e.g.,][]{rein11he2}. Most bulge-dominated galaxies contain supermassive BHs; however, the process by which the original ``seed'' BHs formed remains poorly constrained \citep[e.g.,][]{john07firstbh,volo10bhform}.
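The $\sim$$10^6 \ensuremath{M_{\sun}}$ estimate quoted above comes from a fundamental-plane argument (discussed further below). As a rough sketch of how such an estimate works, the following uses the Merloni, Heinz \& Di Matteo (2003) relation with purely illustrative luminosities; these are assumed placeholders, not the measured values for He 2--10:

```python
# Sketch of a fundamental-plane BH mass estimate, in the spirit of
# the Merloni, Heinz & Di Matteo (2003) relation:
#   log L_R = 0.60 log L_X + 0.78 log M_BH + 7.33
# (L_R at 5 GHz, L_X in 2-10 keV, both in erg/s, M_BH in solar masses).
import math

def fp_mass(L_R, L_X):
    """Invert the fundamental plane for the BH mass (solar masses)."""
    log_M = (math.log10(L_R) - 0.60 * math.log10(L_X) - 7.33) / 0.78
    return 10 ** log_M

# Illustrative (assumed) nuclear luminosities, NOT the He 2-10 data:
M = fp_mass(L_R=1e36, L_X=1e39)
print(f"M_BH ~ 10^{math.log10(M):.1f} M_sun")
```

Because the mass exponent is shallow (0.78), order-of-magnitude luminosity inputs already pin the mass to within a decade or so.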
\begin{deluxetable*}{cccccc} \tabletypesize{\small} \tablecolumns{6} \tablecaption{Observation Details \label{table:obs}} \tablehead{& & & & & \colhead{Net counts}\\ \colhead{Instrument} & \colhead{Detector} & \colhead{Obs.\ ID} & \colhead{Date} & \colhead{Exposure in ks (clean)} & \colhead{0.5--2 keV (2--8 keV)}} \startdata \emph{Chandra} & ACIS-S & 2075 & 2001-03-23 & 20.0 (19.7) & \phn 983 (174) \\ \emph{XMM-Newton} & pn & 0202650101 & 2004-05-27 & 42.0 (29.3) & 3216 (234) \\ \emph{XMM-Newton} & pn & 0672800101 & 2011-05-11 & 26.9 (17.6) & 1863 \phn (92) \\ \emph{ASCA} & SIS & 65017000 & 1997-11-30 & 39.8 (22.4) & \phn 197 \phn (52) \enddata \tablecomments{The details of each of the observations used in this paper. Observed net counts after background subtraction are listed for the 0.5--2 and 2--8 keV bands (see \S~\ref{sec:reduction} for details of source extraction and background analysis). Exposure times listed are for the total exposure and the net exposure after cleaning for flares. We focus our analysis on the more sensitive {\em Chandra} and {\em XMM} observations, but include the earlier {\em ASCA} observation as a check on the baseline flux level for the source.} \end{deluxetable*} Currently, the available observational evidence for the central compact source in He 2--10 favors its interpretation as a supermassive BH. The majority of its radio emission originates from a region $< 3$ pc $ \times 1$ pc in size \citep{rein12he2} and is consistent with being spatially coincident with the {\it Chandra}\ hard X-ray point source at the dynamical center of the galaxy \citep{rein11he2}. Assuming that the radio and X-ray emission are produced by a BH, a comparison with the BH fundamental plane \citep{merl03bhplane} suggests that the mass is $\sim$10$^6$ \ensuremath{M_{\sun}}\ \citep{rein11he2}. Alternatively, the X-ray emission could in principle come from an ultraluminous X-ray source that is powered by a stellar-mass BH \citep{robe07ulx}.
However, this cannot account for the observed compact radio flux \citep[e.g.][]{midd13ulx, wolt14ulx}, although we note that the previous radio and X-ray observations are not strictly simultaneous. We can most likely rule out supernova (SN) remnants as the cause of the X-ray emission; there are no massive star-forming clusters coincident with the compact radio emission, rendering this scenario implausible, though not impossible. To more robustly constrain the nature of the compact central source in He 2--10 and to better constrain its mass, it is important to understand how its X-ray luminosity varies with time. This is the goal of the present paper. The original evidence for the AGN in He 2--10 came in part from analysis of the spectrally hard, resolved point source in the 2001 {\it Chandra}\ observations. Here, we analyze data taken with {\it Chandra}\ (2001), {\it XMM-Newton}\ (2004 and 2011), and {\it ASCA}\ (1997) to obtain spectra at each epoch and a resulting measure of the long-term variability of the hard nuclear source. The temporal baseline of the observations is sufficient to probe variability on timescales reasonable for an intermediate-mass BH or low-mass AGN, as indicated by the small known sample of these rare objects \citep{dewa08}. \section{Data Reduction and Spectral Analysis} \label{sec:reduction} In this section we describe the spectral extraction process for each observation, with a focus on the details of the sophisticated background modeling, which was required for the {\it XMM-Newton}\ and {\it ASCA}\ observations. All spectral analyses are performed using {XSPEC v12.8.0} \citep{arna96xsp}. See Table~\ref{table:obs} for the details of the observations. We will focus primarily on the more sensitive observations taken by {\em Chandra} in 2001 and {\em XMM} in 2004 and 2011, but will also include a discussion of the 1997 {\it ASCA}\ observation, which provides a (less sensitive) baseline measurement of the source flux.
Throughout the paper, uncertainties on X-ray measurements (fluxes, luminosities, and spectral parameters) represent 90\% confidence intervals. \subsection{{\it Chandra}} \label{sec:chandra} The nuclear X-ray point source in He 2--10 was discovered in the 20 ks ACIS-S {\it Chandra}\ observation of He 2--10 on 23 March 2001 \citep{rein11he2}. The pipeline-reduced data from this observation (see Figure \ref{fig:area}) were obtained from the HEASARC public archive and were reduced using the analysis tools of CIAO 4.5. Time filtering yielded 19.7 ks of cleaned exposure. For {\it Chandra}\ we defined two separate source extraction regions: a small region of radius 2.25\arcsec\ at the nuclear region of the galaxy was used to measure the flux of the hard point source, and a larger region of radius 16.8\arcsec\ was used to include the soft diffuse X-ray component. In both cases, the background region comprised an annulus of outer radius 58 arcsec, excluding the larger (diffuse) source region and another point source about 45 arcsec away from the source. We note that the background level above 2 keV in the {\em Chandra} observation was much less prominent than in the {\it XMM}\ and {\it ASCA}\ observations. The spectrum, as well as the response and ancillary files, were extracted using the {\tt specextract} command. Due to the high signal-to-noise ratio and low background, we used CIAO's built-in background subtraction rather than simultaneously fitting source and background spectra, as for the {\it XMM}\ and {\it ASCA}\ observations. \begin{figure*} \epsscale{0.7} \plotone{f1.pdf} \caption{{\it Chandra}\ images in the soft (0.5--2 keV) and hard (3--8 keV) X-rays; energy ranges are chosen to clearly separate the soft diffuse component from the hard compact nuclear emission. The {\it XMM}\ source extraction region, as well as the two {\it Chandra}\ regions, are superimposed.
The excellent angular resolution of {\it Chandra}\ allows for clear imaging of the X-ray morphology. Widespread diffuse emission from star formation is seen in soft X-rays, while the central nuclear source is clearly seen at hard X-rays. {\it XMM-Newton}\ has significantly poorer angular resolution, thus a larger extraction region was required in order to include sufficient source flux.} \label{fig:area} \end{figure*} For all the observations presented in this paper, the source spectrum is described with a model consisting of a power-law component to model the hard nuclear source and an optically thin thermal (VMEKAL; \citealt{mewe85mekal, lied95mekal}) component to model the diffuse emission, with abundance values fixed to those obtained in \citet{kobu10he2} (0.78 for light elements and 0.29 for heavy elements), and allowing the normalization and temperature to float. We use a VMEKAL model to match the spectral analysis of \citet{rein11he2}, but note that fitting with an APEC model \citep{smit01} has no significant effect on the results for the hard component. Following \citet{kobu10he2}, we include Galactic absorption $N_{\rm H,Gal} = 5\times10^{20}$ cm$^{-2}$ on all components, and local absorption for the VMEKAL component ($N_{\rm H,Diffuse} = 9.7\times10^{20}$ cm$^{-2}$). The absorption on the power-law component ($N_{\rm H,Nuclear}$) was allowed to float. Absorption is computed using the tbabs model \citep{wilm00tbabs}. In XSPEC notation, the source model is given by: \begin{equation} {\tt Source} = \textrm{tbabs$_{\rm Gal}$(tbabs$_{\rm Diffuse}$*VMEKAL + tbabs$_{\rm Nuclear}$*powerlaw)} \label{eqn:source} \end{equation} We first fit this model to the spectrum from the nuclear (2.25\arcsec\ radius) source region, to obtain the strongest possible constraint on the emission from the unresolved hard component.
With relatively few counts at high energies we obtain poor constraints on the hard X-ray photon index, so this is fixed to the canonical AGN value of $\Gamma=1.8$. (Repeating the fit for values of the intrinsic $\Gamma$ varying within a range typical for AGN, $1.4<\Gamma<2.2$ \citep{tozz06}, produces no significant change in the unabsorbed flux.) The fit yields $N_{\rm H} = (4.61^{+1.67}_{-1.26}) \times 10^{22}$ cm$^{-2}$, indicating substantial absorption. We next fit the same model to the spectrum from the extended source region (a circle centered on the nuclear region with 16.8\arcsec\ radius), but fix the $N_{\rm H}$ value on the power-law component to that obtained for the nuclear spectrum. We obtain a consistent and nearly identical flux for the hard power-law component between the extended and nuclear regions, but a substantially brighter diffuse (VMEKAL) component in the extended region, with best-fit $kT = 0.65\pm0.03$ keV. This confirms that even for a larger extraction region, such as those used for {\it XMM}\ and {\it ASCA}, the hard spectral component can be associated with the compact nuclear source. The best-fit fluxes and spectral parameters are given in Table \ref{table:params}. We quote intrinsic (unabsorbed) fluxes and luminosities for the hard nuclear component, and observed (absorbed) fluxes for the soft diffuse component. For direct comparison with the {\em XMM} analysis, we have also extracted a {\em Chandra} spectrum with a somewhat larger radius of 36\arcsec, corresponding to the {\em XMM} source region described in the next section (Figure~\ref{fig:area}). Using this larger source region has no significant effect on the spectral parameters. \subsection{XMM-Newton} \label{sec:xmm} We use two subsequent observations of He 2--10 by {\it XMM}\ to constrain the long-term variability of the source. The {\it XMM}\ observations on 27 May 2004 and 11 May 2011 have exposure times of 42 and 27 ks, respectively.
For both {\it XMM}\ observations, we reduced, cleaned, and extracted spectra from all three CCD cameras: pn, MOS1, and MOS2. After spectral extraction and analysis, the MOS1 and MOS2 data yielded significantly lower signal-to-noise ratios at energies $>$2 keV compared to the pn detector, so that no useful constraints were obtained on the hard emission from the nuclear point source. In what follows we therefore focus on results from the pn. The source extraction regions were 36\arcsec\ in radius, chosen to provide a balance between extracting as many counts as possible from the source and minimizing the background. (As a check, we have repeated the analysis using a smaller source region of 25\arcsec\ radius, and obtain essentially identical results with marginally larger uncertainties.) The {\it Chandra}\ images (Figure~\ref{fig:area}) show that the extent of both the nuclear and diffuse components are substantially smaller than the 36\arcsec\ extraction region, such that they can both be considered as approximate point sources for the {\it XMM}\ analysis. In both data sets the source region was on-axis and did not lie on any chip gaps. We extracted pn spectra for counts in the energy range 0.2--15 keV and with event patterns 0--4. Response files were produced from the {\it XMM-Newton}\ Current Calibration Files corresponding to the time of this observation. The source ARF is calculated including a correction for photons falling outside the extraction region. This energy-encircled fraction (EEF) varies with energy but is $\approx$85\% at 5 keV. The source spectrum is described with the same model as for {\em Chandra}, consisting of VMEKAL and power law components (Equation~\ref{eqn:source}). To maximize the number of counts in the background spectra and thus achieve the highest possible signal-to-noise ratio, we extracted the background spectrum from a large annulus of outer radius 3 arcmin around the source. 
Based on a number of trials, the 3 arcmin annulus was determined to provide the optimum number of background counts without needing to account for the variation in background flux at larger off-axis angles. Using the SAS command {\tt edetect\_chain}, 5 sources in the pn field of view were detected and subsequently excluded from the background region. With these sources excluded, the background region is $\approx$23 arcmin$^2$ in area, or 20 times larger than the source region. Because the background emission is extended in nature, the ARF for the background region did not include an EEF correction. The pn background spectrum was fitted with two components. The instrumental background is modeled by a power-law continuum, plus Gaussian emission lines caused by fluorescence (Al K-$\alpha$ at 1.5 keV and CuNi K-$\alpha$ at 8.5 keV; \citealt{cart07xmm}). Because this instrumental background is produced internally to the detector and is not affected by the mirror response, in modeling the observed counts it was convolved with an RMF but not multiplied by an ARF. The background line energies were determined from fitting each line individually, and then fixed for the full spectral analysis, while the intrinsic line widths are fixed to zero. The sky background component is dominated by the diffuse soft cosmic X-ray background (CXB), which can be modeled as thin-thermal emission \citep{hick06a, hick07b}. We used an APEC model for this component, to match the spectral shape obtained by \citet{hick06a} in fitting the unresolved CXB spectrum in the {\it Chandra}\ Deep Fields, fixing $kT=0.17$ keV and allowing the normalization to float. The hard ($>2$ keV) CXB can be described as a power law owing to the summed emission from a large number of AGN \citep[e.g.,][]{hick06a}.
We did not include this as a separate component here, as the average expected number of counts is $<$2\% of the instrumental background, so its small contribution to the total background flux can be effectively accounted for in our modeling of the instrumental background. The full model for the total observed emission in the {\it XMM}\ source region is: \begin{equation} \label{eqn:xmm} {\tt Data} = {\tt Source} + {\tt Instrumental\; BG} + {\tt Sky\; BG}. \end{equation} The source spectrum is modeled by an absorbed VMEKAL and power law (Equation~\ref{eqn:source}), while the background components are modeled, in XSPEC notation, as: \begin{equation} \label{eqn:instbg} {\tt Instrumental\;BG} = \textrm{powerlaw + gauss + gauss + gauss} \\ \end{equation} and \begin{equation} \label{eqn:skybg} {\tt Sky\; BG} = \textrm{APEC}, \end{equation} where ${\tt Instrumental\; BG}$ is convolved with the RMF only, while ${\tt Source}$ and ${\tt Sky\; BG}$ are convolved with the RMF and multiplied by the ARF. \begin{figure*} \epsscale{0.7} \subfigure{ \includegraphics[width=0.5\textwidth]{f2a.pdf}} \subfigure{ \includegraphics[width=0.5\textwidth]{f2b.pdf}} \subfigure{ \includegraphics[width=0.5\textwidth]{f2c.pdf}} \subfigure{ \includegraphics[width=0.5\textwidth]{f2d.pdf}} \caption{X-ray spectra, including model fits and residuals, for the four X-ray observations of He 2--10. Each component of the model fitted to the data is shown as a dotted line: the diffuse VMEKAL component (red) and the nuclear power law (blue). For the {\it Chandra}\ observation (a) the background was subtracted before spectral fitting, while for the {\it XMM}\ (b), (c) and {\it ASCA}\ (d) observations, the background spectra are fitted by a model (shown by the gray lines) simultaneously with the fitting of the observed source spectrum. There is a strong and significant detection of the hard nuclear power-law component in the 2001 {\it Chandra}\ (a) observation (clearly visible as a point source in Figure~\ref{fig:area}).
The hard component is significantly weaker in the {\em XMM} observations (b), (c), indicating variability by approximately an order of magnitude. The hard component is also detected, at lower significance, in the 1997 {\it ASCA}\ spectrum (d). \label{fig:spectra}} \end{figure*} As discussed below, the fluxes of the nuclear power-law component in the {\em XMM} observations are significantly smaller than that observed with {\em Chandra}. To most accurately extract the weak hard X-ray signal from the significant background, we modeled the data by simultaneously fitting the source and background spectra, using the model given in Equation~\ref{eqn:xmm}. We account for the differences in area between the source and background regions by setting the {\tt AREASCAL} parameter on the background spectrum. The spectral parameters of the instrumental and sky backgrounds were fixed to be equal for the source and background spectra. The source component is also included in the background spectrum, but multiplied by a factor of $5\times10^{-3}$ to approximately model the flux scattered outside the source region into the background region. (The scaling factor accounts for both the energy encircled fraction and the relative area of the source and background regions; the ultimate fit parameters are insensitive to the precise value of this factor.) Because the soft (VMEKAL) emission from the source is diffuse (with a diameter of $\approx$5\arcsec\ or 200 pc), and thus has a light-crossing time much longer than the separation in time between observations, it should not be observed to vary in our data. Therefore, to maximize the statistical power of our modeling, we fitted the 2004 and 2011 {\em XMM} spectra simultaneously, tying the temperatures and normalizations of the VMEKAL and APEC components between the two data sets.
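The light-crossing argument for the diffuse component can be verified with a quick back-of-the-envelope calculation, assuming the \textasciitilde 9 Mpc distance quoted in the Introduction:

```python
# Consistency check: a 5" diffuse source at ~9 Mpc is ~200 pc across,
# so its light-crossing time vastly exceeds the ~7 yr span between the
# Chandra (2001) and XMM (2004, 2011) observations.
D_MPC = 9.0                 # distance to He 2-10 (from the Introduction)
theta_arcsec = 5.0          # angular diameter of the diffuse emission
ARCSEC_PER_RAD = 206265.0
LY_PER_PC = 3.26            # light-years per parsec

size_pc = D_MPC * 1e6 * theta_arcsec / ARCSEC_PER_RAD
t_cross_yr = size_pc * LY_PER_PC   # light-crossing time in years

print(f"size ~ {size_pc:.0f} pc, crossing time ~ {t_cross_yr:.0f} yr")
# ~220 pc across, with a crossing time of several centuries
```

Any coherent variation of the diffuse emission on a decade timescale is therefore causally excluded, which justifies tying its parameters between epochs.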
We allowed the nuclear power-law normalization to float, along with the instrumental background parameters (the particle background in the detectors should not be perfectly constant over the duration of the mission). We thus performed a simultaneous fit to four spectra: the source and background spectra from each of the 2004 and 2011 observations. The results of the spectral fitting are shown in Figure~\ref{fig:spectra} and listed in Table~\ref{table:params}. We observe a clear decrease in the flux of the hard nuclear component between the 2001 {\em Chandra} and 2004 {\em XMM} observations. This is demonstrated in Figure~\ref{fig:compare}, in which we show the 2004 {\em XMM} pn spectrum fitted with the 2001 {\em Chandra} best-fit model, with no model for the {\em XMM} background included. This shows that the hard source flux has dropped dramatically from the {\em Chandra} level, even before accounting for the {\em XMM} background. We observe a further, less significant decrease between the 2004 and 2011 {\it XMM}\ observations, while the flux of the diffuse (VMEKAL) component is consistent with no variation from the {\em Chandra} observation. We also observe evidence for a decrease in absorption on the nuclear component, with $N_{\rm H,Nuclear}$ consistent with zero. The best-fit $kT$ of the VMEKAL component for {\em XMM} is close to, but not fully statistically consistent with, the {\em Chandra} value ($0.58^{+0.02}_{-0.03}$ keV compared to $0.65\pm0.03$ keV). Fixing this temperature to the {\em Chandra} value has a negligible effect on the flux of the diffuse component, but decreases the flux of the hard nuclear component by $\approx$15\%. This results in an even larger observed drop in flux compared to the {\em Chandra} data; in the following discussion we will conservatively consider the smaller change in flux obtained when the VMEKAL $kT$ is allowed to float for {\it XMM}.
The flux of the APEC component, representing the soft diffuse CXB, corresponds to a 0.5--2 keV surface brightness of $(2.9\pm0.5)\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, similar to the soft background intensity obtained in the {\em Chandra} Deep Fields \citep{hick06a}. The overall implications of the spectral fitting results are discussed in \S~\ref{sec:results}. \begin{figure} \epsscale{1.1} \plotone{f3.pdf} \caption{Spectrum of the 2004 {\em XMM} observation, fitted with the best-fit source model from the 2001 {\em Chandra} observation. This fit includes no subtraction or modeling of the {\em XMM} background. The large excess of the model over the data at high energies clearly illustrates the dramatic decrease in the flux of the hard nuclear component between 2001 and 2004, independent of the methods used to account for the {\em XMM} background. \label{fig:compare}} \end{figure} One potential uncertainty in our spectral analysis arises from the implicit assumption that the instrumental and sky background in the source region has the same surface brightness as the emission in the background region. It is possible that fluctuations in the background could cause this assumption to be invalid, leading to over- or under-subtraction of the background, particularly at energies $>$2 keV where the background dominates the signal. To directly check the level of possible fluctuations, we extracted the 2--10 keV counts in 80 circular regions of 36\arcsec\ radius (equal to the source region) surrounding the source region. We avoid chip gaps and obvious bright sources, noting that there is no bright point source detected within 36\arcsec\ in the 2--8 keV {\em Chandra} image (Figure~\ref{fig:area}). The 2--10 keV counts in these apertures are approximately normally distributed, with mean (dispersion) of 239 (19) and 138 (15) counts for 2004 and 2011, respectively.
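For reference, the observed aperture-to-aperture scatter can be compared with the pure counting-noise (Poisson) expectation $\sqrt{N}$; a minimal check using only the numbers quoted above:

```python
import math

# Mean and dispersion of 2-10 keV counts in the 80 background apertures.
apertures = {"2004": (239, 19), "2011": (138, 15)}

check = {}
for year, (mean, disp) in apertures.items():
    poisson = math.sqrt(mean)            # dispersion expected from counting noise
    check[year] = (round(poisson, 1), disp)
print(check)   # {'2004': (15.5, 19), '2011': (11.7, 15)}
```

The measured dispersions are only modestly above the Poisson expectation, consistent with the conclusion that statistical error dominates over systematic background fluctuations.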
We conclude that our modeling of the background in the source region is dominated by statistical error rather than any systematic uncertainty due to background fluctuations. \subsection{{\it ASCA}} As a measurement of the baseline level of flux prior to the {\em Chandra} observations, we also utilize observations of He 2--10 from {\it ASCA}\ in 1997. Due to {\it ASCA}'s poor angular resolution and sensitivity relative to {\em Chandra} and {\em XMM}, it provides relatively weak constraints on the source flux, particularly in the hard nuclear component. We will therefore focus our conclusions primarily on the {\em Chandra} and {\em XMM} data, but will utilize the {\em ASCA} data here as a useful check on our conclusions. He 2--10 was observed by {\it ASCA}\ for a total of 39.8 ks on 30 November 1997. We used Xselect v2.4b to remove time intervals of high background for a net exposure of 22.4 ks, and to extract source and background spectra from the cleaned event files. We extracted a spectrum in the energy range between 0.5--10.5 keV, using a source extraction region of 1.45 arcmin in radius. This source region was chosen to contain as much of the source flux as possible (the energy encircled fraction is 50\%) while minimizing the contribution from background. We extracted a background spectrum from a rectangular annulus of area $\approx$5 times that of the source region, located around the source position but excluding the source region. The RMF and ARF response files were generated for each chip using the commands {\tt sisrmg} and {\tt ascaarf}, respectively. Furthermore, the spectra from the two chips SIS0 and SIS1 were added using the HEASOFT command {\tt mathpha}, while the response files were added with area dependent weights using the commands {\tt addarf} and {\tt addrmf}. As discussed in \S~\ref{sec:xmm}, the source ARF includes a correction for the energy encircled fraction, while the background ARF does not. 
As with {\em XMM}, we modeled the {\it ASCA}\ data by fitting the source and background spectra simultaneously, modeling the scattered flux in the background region assuming an energy-encircled fraction of 0.5. The instrumental background was modeled with a power law plus five Gaussian emission lines; three were included in the model as narrow fluorescence lines stemming from the device itself at 6.5 keV, 7.5 keV, and 8.2 keV, which are the Fe and Ni K-$\alpha$ lines and the Ni K-$\beta$ line.\footnote{https://heasarc.gsfc.nasa.gov/docs/asca/newsletters/sis\_back2.html} Another Gaussian at 3.3 keV of unknown origin was introduced to fit a feature of the background spectrum, while the final Gaussian was a broad line at 11 keV that modeled the internal background above 7 keV to account for the steepness of the power law. We utilized the same source and sky background models as for {\it XMM}\ (Equations \ref{eqn:source} and \ref{eqn:skybg}). Because they cannot be well-constrained owing to the poor photon statistics, the surface brightness of the APEC component and the VMEKAL temperature are fixed to the values obtained with {\it XMM}, and the photon index of the power law is again fixed to $\Gamma=1.8$. (Allowing the VMEKAL temperature to float yields a significantly higher $kT\approx0.9$, inconsistent with the {\it XMM}\ and {\it Chandra}\ results, but has a negligible effect on the total fluxes of the diffuse and nuclear components.) The results of the {\it ASCA}\ spectral fitting are shown in Figure~\ref{fig:spectra} and listed in Table~\ref{table:params}. We obtain a significant detection of both the diffuse and nuclear spectral components, although with significantly larger uncertainties than in the {\it Chandra}\ and {\it XMM}\ data. In contrast to the {\it XMM}\ observations, the fit yields significant nuclear absorption consistent with the {\em Chandra} value, although the precise value of $N_{\rm H}$ is poorly constrained. 
\begin{deluxetable*}{cccccccc} \tabletypesize{\scriptsize} \tablecolumns{8} \tablecaption{Spectral fitting results} \tablehead{ & \multicolumn{2}{c}{VMEKAL (Diffuse)} & \multicolumn{4}{c}{Power Law (Nuclear)} & \\ \colhead{Observation} & \colhead{$kT$} & \colhead{$F_{\rm 0.5-3\; keV}$} & \colhead{$N_{\rm H,Nuclear}$} & \colhead{$\Gamma$} & \colhead{$F_{\rm 2-10\;keV}$} & \colhead{$L_{\rm 2-10\;keV}$\tablenotemark{a}} & \colhead{$\chi^2_\nu$ (d.o.f.)} \\ & (keV) & ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$) & ($10^{22}$ cm$^{-2}$) & & ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$) & ($10^{39}$ erg s$^{-1}$) & } \startdata \emph{Chandra}\tablenotemark{b} (2001) & $0.65\pm0.03$ & $2.06\pm0.11$ & $4.61^{+1.67}_{-1.26}$ & [1.8] & $3.28^{+0.73}_{-0.64}$ & $3.18^{+0.71}_{-0.62}$ & 1.61\phn (46)\\ [1ex] \emph{XMM}\tablenotemark{c} (2004) & $0.58^{+0.02}_{-0.03}$ & $1.90^{+0.08}_{-0.05}$ & $<0.05$ & [1.8] & $0.68^{+0.08}_{-0.07}$ & $0.66^{+0.08}_{-0.07}$ & 1.20 (661)\\ [1ex] \emph{XMM}\tablenotemark{c} (2011) & $0.58^{+0.02}_{-0.03}$ & $1.90^{+0.08}_{-0.05}$ & $<0.05$ & [1.8] & $0.44^{+0.09}_{-0.10}$ & $0.43\pm0.09$ & 1.20 (661)\\ \emph{ASCA} (1997) & [0.65] & $2.89^{+0.89}_{-1.02}$ & $0.30^{+3.83}_{-0.30}$ & [1.8] & $1.79^{+1.66}_{-0.87}$ & $1.73^{+1.61}_{-0.84}$ & 0.80\phn (70) \enddata \tablecomments{Best-fit spectral parameters obtained from modeling of the four X-ray spectra. The (unabsorbed) 2--10 keV fluxes were calculated based on the nuclear (power law) component, while the (observed) 0.5--3 keV fluxes correspond to the diffuse (VMEKAL) component. Both components are modified by Galactic absorption with column density fixed at $N_{\rm H,Gal} = 9\times10^{20}$ cm$^{-2}$, and the VMEKAL component is absorbed by an additional component with column density fixed at $N_{\rm H,Diffuse} = 9.7\times10^{20}$ cm$^{-2}$. Parameters in the table that are fixed in the fits are identified with brackets.
Uncertainties represent 90\% confidence intervals.} \tablenotetext{a}{~Luminosity values were calculated assuming a distance of 9 Mpc to He 2--10.} \tablenotetext{b}{Parameters in the {\em Chandra} fits for the nuclear and diffuse components are determined from the fit to the nuclear and extended source regions, respectively, as described in \S~\ref{sec:chandra}.} \tablenotetext{c}{The spectra for the two {\em XMM} observations are fitted simultaneously, with parameters for the diffuse (VMEKAL) component tied between the two observations, as described in \S~\ref{sec:xmm}. \label{table:params}} \end{deluxetable*} \section{Results and discussion} \label{sec:results} The long-term X-ray light curve of He 2--10, showing variations in the VMEKAL and power law components, is shown in Figure~\ref{fig:crv}. The diffuse component shows no significant variability over the four observations, as expected for emission from a large-scale diffuse plasma. In contrast, it is immediately clear that there is significant variability in the hard power law component. The luminosity of the nuclear source decreased significantly between the 2001 {\em Chandra} and 2004 {\em XMM} observations, and by approximately an order of magnitude between 2001 and the 2011 {\it XMM}\ observation. (We note that our overall conclusions are unchanged if we use observed hard fluxes, which differ by roughly 30\% from the values shown in Table 2 for the {\it Chandra}\ and {\it ASCA}\ observations and remain constant for {\it XMM}\ due to the decreased levels of obscuration.) \begin{figure}[b] \epsscale{1.1} \plotone{f4.pdf} \caption{The fourteen-year light curve of He 2--10 with 90\% confidence errors shown. The hard nuclear flux (blue) declines between the 2001 {\it Chandra}\ and 2011 {\it XMM}\ observations by nearly an order of magnitude. In contrast, the soft diffuse flux (red) remains approximately constant across the four observations.
} \label{fig:crv} \end{figure} The variation in the light curve of the hard spectral component of He 2--10, over approximately an order of magnitude in $L_X$, confirms that this emission is the result of a single object, rather than several separate sources. This allows us to perform comparisons to other individual astrophysical sources. One class of object that can have similar X-ray luminosities and amplitudes of variability is SNe (see \citealt{dwar12} for a compilation of published SN X-ray light curves). However, a SN interpretation for the nuclear source is inconsistent with the radio properties. The radio flux of He 2--10 has been measured at 5 GHz with the VLA in 1994 ($0.89\pm0.18$ mJy; \citealt{kobu99}) and 2004 ($0.86\pm0.02$ mJy; \citealt{rein11he2}), implying no significant change in the radio flux with time. Long Baseline Array measurements in 2011 at 1.4 GHz yield a flux of $0.98\pm0.21$ mJy in a compact source on $\approx$1 pc spatial scales; assuming a typical radio spectral index, this implies that roughly half of the observed 5 GHz flux comes from the central compact source \citep{rein12he2}. High Sensitivity Array observations in 2005 did not detect the compact source on extremely small ($\sim$0.1 pc) spatial scales \citep{ulv07wrradio}, placing a lower limit on its spatial extent. This rules out the presence of a single very young SN; however, the total radio luminosity of the source implies that any single SN must be at most decades old \citep{fene10snm82}. With these constraints in mind, we test whether the nuclear source in He 2--10 is consistent with a (relatively) evolved SN explosion, by comparing its (unabsorbed) soft X-ray and 5 GHz radio light curves to a sample of 7 SNe that have both radio and X-ray measurements in the compilations of \citet{dwar12} and \citet{weil02}.
Assuming that the nuclear source in He 2--10 has a relatively constant 5 GHz flux between 1994 and 2004, and conservatively assigning all the 5 GHz flux observed with the VLA ($\approx$0.9 mJy) to the central compact component, we find that the ratio of X-ray (0.5--2 keV) to radio ($\nu F_\nu$ at 5 GHz) flux for He 2--10 is $\sim$$5\times10^{3}$. This is more than an order of magnitude larger than the typical X-ray to radio flux ratio for X-ray-detected SNe at ages $>$1 year, and 2.5 times larger than the most extreme observed values, in SNe 1980K and 1970G. Further, the detection of the nuclear hard X-ray component in the {\it ASCA}\ observation indicates that the X-ray light curve rises or remains constant, and then declines sharply over a few years. This behavior is unusual for SNe; of the eight SN X-ray light curves in the compilation of \citet{dwar12} that extend beyond ten years, none shows a similar sudden, rapid decline on these time scales. We also note that the X-ray to radio flux ratio is consistently at least 2--3 orders of magnitude {\em smaller} than the typical ratio of X-ray to compact radio flux for ULXs, for which the compact sources tend to be weak or undetected in the radio \citep[e.g.,][]{midd13ulx, wolt14ulx}. Given the observed X-ray light curve and the ratio of X-ray and radio luminosities, we conclude that the observations do not favor a SN or ULX origin for the nuclear source. In contrast, the significant variability of the hard nuclear X-ray source in He 2--10 is consistent with its identification as an accreting massive BH, in comparison to the X-ray variability of known low-mass AGN. The Sdm spiral galaxy NGC 4395, at a distance of only 4 Mpc, contains a BH of mass $3.6 \times 10^5 \ensuremath{M_{\sun}}$, whose hard component, as shown in \citet{king13}, can vary by a factor of 5 on a timescale of just one day.
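The quoted X-ray-to-radio ratio follows from a simple unit conversion. A minimal sketch, assuming an unabsorbed 0.5--2 keV nuclear flux of $\sim2\times10^{-13}$ erg cm$^{-2}$ s$^{-1}$ (an illustrative value of the order of the hard-band fluxes in Table 2, not a number quoted in the text):

```python
# X-ray (0.5-2 keV) to radio (nu * F_nu at 5 GHz) flux ratio for the nuclear
# source. 1 mJy = 1e-26 erg s^-1 cm^-2 Hz^-1.
nu_hz = 5.0e9
f_radio = 0.9e-26                  # the ~0.9 mJy VLA flux quoted above
nu_f_nu = nu_hz * f_radio          # 4.5e-17 erg s^-1 cm^-2

f_xray = 2.3e-13                   # assumed soft X-ray flux (illustrative)
ratio = f_xray / nu_f_nu
print(f"{ratio:.1e}")              # 5.1e+03
```

Any soft X-ray flux of order $10^{-13}$ erg cm$^{-2}$ s$^{-1}$ gives a ratio of order $5\times10^{3}$, as quoted.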
The nearby edge-on Seyfert 2 galaxy NGC 4945, with a BH mass of $\approx 10^6 \ensuremath{M_{\sun}}$ obtained through observations of its H$_2$O megamaser, shows intrinsic variability (measured at $>$8 keV by {\em RXTE} and {\em Swift}/BAT) of at least an order of magnitude on timescales of days to weeks \citep[e.g.,][]{muel04ngc4945,mari12ngc4945}. The AGN in the nearby Seyfert 1 galaxy NGC 4051 ($M_{\rm BH} = 1.73 \times 10^6 \ensuremath{M_{\sun}}$; \citealt{denn09}) has been observed to vary in X-ray luminosity by more than an order of magnitude over $\sim$\,year timescales \citep{uttl99}. This limited survey confirms that X-ray variability over a large dynamic range on timescales of years is not uncommon among relatively low-mass AGNs. Therefore, the decreased observed luminosity after the {\it Chandra}\ observations could either be due to short timescale fluctuations occurring precisely at the time of the observation, as in NGC 4945, or part of a general trend of long timescale variability, similar to objects like NGC 4051. \acknowledgements{ Support for A.E.R.\ was provided by NASA through the Einstein Fellowship Program, grant PF1-120086, and the Hubble Fellowship Program. R.C.H.\ acknowledges support from the Dartmouth College Class of 1962 Faculty Fellowship. G.R.S.\ acknowledges support from an NSERC Discovery Grant. This research has made use of data, software and/or web tools obtained from NASA's High Energy Astrophysics Science Archive Research Center (HEASARC), a service of Goddard Space Flight Center and the Smithsonian Astrophysical Observatory.}
\section{Introduction} One of the most significant products of redshift surveys is a map of large scale structure. This in turn allows us to calculate the velocity field induced by density contrasts over cosmic time. For this we often use the linear approximation that the acceleration of a galaxy does not change much over time and that velocities are not just dimensionally equivalent to acceleration multiplied by the Hubble time, but also proportional to it. Regions of high overdensity are to be avoided when using the linear approximation, as turnaround and virialization follow the rise of galaxy density to high levels. In the era of precision cosmology, when measuring the Hubble Constant to 1\% is our aspiration (Bennett et al 2014, Suyu et al 2012) for a variety of compelling physical reasons, peculiar velocities need to be better measured and calculated by local redshift surveys. The state of the art is illustrated by Lavaux \& Tully (2010) and Magoulas et al (2012). The fact that approximately 70\% of the Universe is dark energy and that dark energy is not physically understood (Bin\'{e}truy 2013) suggests that we should not ignore alternatives to Newton's gravity and Einstein's gravity at scales larger than those of classical GR tests. Modified gravity laws cannot yet be ruled out. In this paper we explore one such gravity law applied to the 2MRS density distribution (Huchra et al 2012), namely Milgrom's Modification of Newtonian Gravity (MOND) (Milgrom 1983). We find that, while well motivated for kpc scales, it predicts a velocity field different from what we have observed, for example in the 6dF Galaxy Survey (Jones et al 2009). The unification of MOND with space expanding on Mpc scales with a scale factor $a$ is a work in progress. Close to 40 years' history of MOND has been reviewed by Sanders (2015) and Bothun (2015). 
There is a general problem with all attempts to address large scale structure problems within the MONDian framework: the framework does not exist! `There is no cosmological MOND theory' is the standard answer of MONDian aficionados. Be that as it may, the growth of density inhomogeneities ($\delta \rho/\rho$) in that theory has been studied by Nusser (2002) and Llinares (2008, 2014). We note that the peculiar acceleration due to an overdensity in Newtonian gravity is given by Peacock (1999) $$\dot{\bf u} + 2 \frac{\dot{a}}{a} {\bf u} = - \frac{{\bf g}}{a} \eqno(1n)$$ \noindent where the peculiar velocity is ${\bf v} = a\,{\bf u}$, and by Nusser (2002) as $$\dot{\bf u} + 2 \frac{\dot{a}}{a} {\bf u} = - \sqrt{\frac{3\Omega_m H^2 g_0}{2a}}\, \frac{{\bf g}_N}{\sqrt{g_N}}\eqno(1m)$$ \noindent in the curl-free MONDian case, with $\Omega_m$ the present matter density. To avoid ambiguity we write the MOND acceleration parameter $a_0$ as $g_0$. Our purpose in this paper is not to join this development of MOND or TeVeS (Bekenstein 2004) to deal with groups of galaxies or cosmological simulations (Angus et al 2013); rather we wish to motivate the extension of peculiar velocity surveys beyond 6dFGS by illustrating the power of peculiar velocities to investigate both structure and gravity on the largest scales. \section{Implementation} For calculating peculiar velocities from 2MRS we have followed Erdo{\u g}du et al (2006) and used the formulation by Peebles (1980) and Davis et al (2011) in the usual notation, with ${\bf g(r)}$ representing the gravitational acceleration at ${\bf r}$: $$ {\bf v(r)} = \frac{2\,\Omega^{4/7}\beta\,{\bf g(r)}}{3H_0\Omega_m}\eqno(2)$$ where $$ {\bf g(r)} = G\bar{\rho}\int {\rm d}^3r^{\prime}\, \frac{\delta\rho^\prime}{\rho^\prime}\,\frac{{\bf r}-{\bf r}^\prime}{|{\bf r}-{\bf r}^\prime|^3} \eqno(3)$$ and $\beta~=~\Omega_m^{4/7}/b$ with $b$ the bias parameter.
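The direct-summation evaluation of equations (2)--(3) on a gridded overdensity field can be sketched as follows. This is a minimal illustration, not the production code: it uses the standard linear-theory reduction in which, after substituting $\bar\rho = 3H_0^2\Omega_m/8\pi G$, the prefactor becomes $H_0\beta/4\pi$, and it takes the acceleration to point towards overdensities.

```python
import numpy as np

def peculiar_velocity(r, cells, delta, dV, beta=0.4, H0=100.0):
    """Linear-theory peculiar velocity (km/s) at position r (Mpc/h), by
    direct summation over grid cells of an overdensity field.
    cells: (N, 3) cell centres in Mpc/h; delta: (N,) overdensities
    delta_rho/rho; dV: cell volume in (Mpc/h)^3."""
    dr = cells - r                               # vectors towards each cell
    d3 = np.linalg.norm(dr, axis=1) ** 3
    mask = d3 > 0                                # skip the cell containing r
    kernel = delta[mask, None] * dr[mask] / d3[mask, None]
    return H0 * beta / (4.0 * np.pi) * dV * kernel.sum(axis=0)

# A single overdense cell 10 Mpc/h away along SGX pulls the test point
# towards it along +SGX.
v = peculiar_velocity(np.zeros(3), np.array([[10.0, 0.0, 0.0]]),
                      np.array([1.0]), dV=1.0)
```

In the full calculation this sum runs over the entire 2MRS grid, which is why every point feels distant structures such as Shapley.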
On Mpc scales we are in the `deep-MOND' regime (Zhao et al 2013), beyond the interpolation formulae between MOND and Newtonian gravity used in the internal dynamics of galaxies, so that the gravitational acceleration under MOND can be written as $$ g_{\rm MOND} = \sqrt{g_N\, g_1} \eqno(4)$$ where $g_N$ is a Newtonian $r^{-2}$ acceleration field and $\sqrt{g_1} = \frac{4}{3}(\sqrt{2} - 1)\sqrt{g_0}$ with $g_0 \sim 10^{-10}$ m s$^{-2}$. Equation (1) of Zhao et al with $y \gg 1$ yields this definition of $g_1$. Our calculation therefore proceeds by substituting the MONDian acceleration for the Newtonian one in equation (2). The value of $\beta$ is calculated from the bias factor, measured for this sample to be $b$ = 1.48 $\pm$ 0.27 (Beutler et al 2012). Nusser (2014) has pointed out that not only are we assuming the linear approximation in doing this, and thus erring in high density regions, but also we are neglecting velocities generated at early times\footnote{During early cosmic times, all accelerations on all scales are large, so that the Newtonian equations pertain. As time goes by, the gravitational field decreases in amplitude and enters the MOND region. This modification would be $T_N\, g_N + T_{MOND}\, g_{MOND}$, where $T_N$ is the time spent in the Newtonian regime and $T_{MOND}$ is the time spent in the MOND regime. $T_N/T_{MOND}$ depends on the amplitude of the initial fluctuations.}. Such initial peculiar velocities are subject to adiabatic decay, however, over the age of the Universe (Davis et al 2011). \section{Results} In this calculation, and generally in n-body codes, each particle communicates with every other particle. In the MONDian case every grid point that looks at the Shapley supercluster sees an overdensity not fully attenuated by $r^{-2}$, as the luminosity field is, and wants to move towards Shapley. The outcome of this is Figures 1 and 2, which depict the velocity field.
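In the deep-MOND regime the substitution into equation (2) is a one-liner; a minimal sketch, with the numerical $g_0$ the $\sim10^{-10}$ m s$^{-2}$ quoted above:

```python
import math

G0 = 1.0e-10                                       # m s^-2, MOND parameter g_0
# From sqrt(g1) = (4/3)(sqrt(2) - 1) sqrt(g0), i.e. g1 ~ 0.305 g0:
G1 = (4.0 / 3.0 * (math.sqrt(2.0) - 1.0)) ** 2 * G0

def g_mond(g_newton):
    """Deep-MOND acceleration of equation (4): g_MOND = sqrt(g_N * g_1)."""
    return math.sqrt(g_newton * G1)

# A weak Newtonian field of 1e-13 m s^-2 is boosted by a factor ~17:
boost = g_mond(1.0e-13) / 1.0e-13
```

The square-root scaling is exactly why distant overdensities are not fully attenuated: the effective force of a point mass falls off as $r^{-1}$ rather than $r^{-2}$.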
In Figure 1 we see a smooth flow with a coherence length as large as the volume. It is quite unlike the observed velocity field, and there is no free parameter to remedy it. \begin{figure}[h] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[clip, angle=-90, width=1.35\textwidth]{demimond.eps} \caption{The MONDian flowfield in the supergalactic plane. The SGX and SGY coordinates are in units of Mpc/$h$.} \end{minipage} \hspace{.5 cm} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[clip, angle=-90, width=1.35\textwidth]{cleanplot.eps} \caption{The Newtonian flowfield for comparison with Figure 1. Prominent features are the Great Attractor on the left and the Perseus-Pisces supercluster on the right.} \end{minipage} \end{figure} \begin{figure}[h] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[clip, angle=-90, width=1.0\textwidth]{newmondv.eps} \caption{The distribution of MONDian peculiar velocities in the SGX direction. This figure is for $\beta$ = 0.4 in equation (2), and velocities would scale by a factor of 1.5 for $\beta$ = 0.6.} \end{minipage} \hspace{.5 cm} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[clip, angle=-90, width=1.0\textwidth]{regular.eps} \caption{The distribution of Erdo{\u g}du model peculiar velocities. Again, we used $\beta$ = 0.4.} \end{minipage} \end{figure} In Figures 3 \& 4 we see the predicted velocity distribution functions in the SGX coordinate. These figures are not the problem. There are two free parameters, $\beta$ and, to some degree, $g_0$, that can be adjusted to bring the speed everywhere into the range that we observers see in the cosmic microwave background rest frame. By contrast, Figures 2 and 4 for the Newtonian case do resemble the observed velocity field, and can be brought into agreement with it with $\beta~\approx$ 0.6 (Magoulas et al 2012, 2015).
Figure 5 shows both the density and velocity fields for standard model cosmological parameters. \begin{figure} \vspace*{ -1 truein} \hspace*{ -1 truein} \includegraphics[scale=1., angle=-90]{plane140.eps} \caption{The flow field in the supergalactic plane in the Newtonian case, superposed on the density field from 2MRS, colour coded (red being denser than the mean by a factor of 12 and blue zero density). We are at the origin and the two closest prominent features are inflow into the Great Attractor (towards the upper left) and into Perseus Pisces (towards the lower right). The longest arrows reach 1500 km/s.} \end{figure} \section{Analysis} \begin{figure} \includegraphics[scale=0.65, angle=-90]{fig6.eps} \caption{Bulk flow velocity for the MOND case for $\beta$ = 0.4. Reducing $g_0$ by two orders of magnitude gives the dashed line. It improves the MOND prediction but is still far from a fit to the 6dFGS data: open circles with error bars. The 2MRS model predictions for $\beta$ = 0.6 are the dot-dashed line.} \end{figure} A formal comparison of the prediction of MOND and 6dFGS observations is made in Figure 6. Here we show bulk flow velocity as a function of scale. To calculate this, we create a large number of spheres of particular radii and average the velocities within each. To reconcile MOND and observations in this plot would require a four order of magnitude change in $g_0$, which would disrupt the agreement between MOND predictions and galaxy rotation curves (Swaters et al 2010). This mismatch between our observations and the MOND prediction rules out MOND. As we see below, the standard Erdo{\u g}du $r^{-2}$ model, on the other hand, agrees with the observations within the uncertainties. \begin{figure} \includegraphics[scale=0.65, angle=-90]{qpower.eps} \caption{The velocity angular correlation function for the MOND case (dot-dashed line), for the Newtonian model (dashed line) and for 6dFGS (solid line with error bars).
In the 2MRS, galaxy separations are measured in Mpc/$h$, where $h$ is the Hubble constant in units of 100 km s$^{-1}$ Mpc$^{-1}$.} \end{figure} We have also calculated the velocity angular correlation function as follows. For every pair of galaxies in the 6dFGS peculiar velocity sample the angle between the radial peculiar velocities is calculated. Figure 7 shows the probability that this angle $\theta$ is small ($\cos \theta~>$ 0.9) as a function of separation. In MOND small misalignments continue to large galaxy separations. In the Erdo{\u g}du model the fall off is more rapid. Again, the data are markedly inconsistent with MOND. For galaxy separations between 20/$h$ and 100/$h$ Mpc, $\chi^2$ per degree of freedom is over five times larger for MOND than it is for the $r^{-2}$ prediction. Absolute values of $\chi^2$ are hard to calculate exactly because of the expected failure of the linear approximation at separations smaller than 20 Mpc and the non-gaussian probability distributions of 6dFGS peculiar velocities (Springob et al 2014). The coherence length of velocity structure, measured as an e-folding scale for this function, is 2600 km/s for 6dFGS, 2700 km/s for the Newtonian 2MRS model and 3300 km/s for the MOND model. \section{Conclusions} \noindent Peculiar velocities are not a unique probe of modified gravity at the 10 Mpc scale. Weak lensing coupled with galaxy redshifts also provides a good constraint (Reyes et al 2010). Focussing on peculiar velocities, however, we conclude\\ (1) MOND predicts a velocity field overwhelmingly dominated by the largest overdensities on the largest scales (100 Mpc) that we have tested here. The velocity angular correlation function shows markedly worse agreement with 6dFGS in the MONDian case than in the acceleration $\sim~r^{-2}$ case.\\ (2) Smaller well established features observed in the flow field such as the infall into the Great Attractor (e.g. Lynden-Bell et al 1988, Mathewson \& Ford 1994) and into the Perseus Pisces supercluster (e.g.
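Both statistics described above, the bulk flow in spheres (Figure 6) and the velocity alignment probability (Figure 7), are straightforward to compute from a catalogue of positions and peculiar velocities. A minimal sketch, using full 3D velocities in place of the radial components available for 6dFGS:

```python
import numpy as np

def bulk_flow(pos, vel, centres, radius):
    """Mean amplitude (km/s) of the average velocity inside spheres of a
    given radius (Mpc/h) -- the statistic plotted in Figure 6."""
    flows = []
    for c in centres:
        inside = np.linalg.norm(pos - c, axis=1) < radius
        if inside.any():
            flows.append(np.linalg.norm(vel[inside].mean(axis=0)))
    return float(np.mean(flows))

def alignment_probability(pos, vel, sep_lo, sep_hi):
    """Probability that a pair separated by sep_lo..sep_hi (Mpc/h) has
    velocity directions with cos(theta) > 0.9, as in Figure 7."""
    unit = vel / np.linalg.norm(vel, axis=1, keepdims=True)
    n_pairs = n_aligned = 0
    for i in range(len(pos) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        sel = (d > sep_lo) & (d < sep_hi)
        if sel.any():
            cos = unit[i + 1:][sel] @ unit[i]
            n_pairs += int(sel.sum())
            n_aligned += int((cos > 0.9).sum())
    return n_aligned / max(n_pairs, 1)

# Sanity check: a perfectly coherent flow is fully aligned at all separations.
rng = np.random.default_rng(0)
pos = rng.uniform(-50.0, 50.0, size=(200, 3))
vel = np.tile([300.0, 0.0, 0.0], (200, 1))
p = alignment_probability(pos, vel, 20.0, 100.0)              # 1.0
bf = bulk_flow(pos, vel, centres=[np.zeros(3)], radius=40.0)  # 300.0
```

A MOND-like field, with its volume-sized coherence length, behaves like this coherent-flow limit, which is why its alignment probability stays high out to large separations.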
Han \& Mould 1992) are not seen in the MOND flow field.\\ (3) If we consider modified gravities more broadly than MOND, those with accelerations that fall off more slowly than $r^{-2}$ will tend to run into similar problems, but these would need to be statistically tested for a mismatch with peculiar velocity data.\\ (4) The velocity power spectrum (Johnson et al 2014) is a fine basis for such tests. Evidence for more power on large scales than $\Lambda$CDM predicts under the linear approximation and standard gravity is at the 2$\sigma$ level currently (Feldman et al 2010). Larger scale coherence than discussed here is seen (Tully 1989, Tully et al 2014). The relevance of modified gravity to such observations remains to be seen. \acknowledgements This research was supported by the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence ``Origin and Structure of the Universe". We thank MIAPP for their hospitality while this contribution was being prepared and Richard Anderson for suggesting it. The 6dFGS is a project of the Australian Astronomical Observatory. We are grateful for grant LP130100286 from the Australian Research Council and to CAASTRO\footnote{http://www.caastro.org} for conference travel support. CAASTRO is the ARC's Centre of Excellence for All Sky Astrophysics. Comments from David Parkinson in connection with the 2nd CAASTRO/CoEPP joint workshop on dark matter are appreciated, as were comments from the referee.
\section*{References} \noindent Angus, G et al 2013, MNRAS, 436, 202\\ Bekenstein, J 2004, Phys Rev D {\bf 70} 083509\\ Bennett, C et al 2014, ApJ, 794, 135\\ Beutler, F et al 2012, MNRAS, 423, 3420\\ Bin\'{e}truy, P 2013, AAR, 21, 67\\ Bothun, G 2015, Can J Phys, 93, 139\\ Davis, M et al 2011, MNRAS, 413, 2906\\ Erdo{\u g}du, P et al 2006, MNRAS, 373, 45\\ Feldman, H et al 2010, MNRAS, 407, 2328\\ Han, M \& Mould, J 1992, ApJ, 396, 453\\ Huchra, J et al 2012, ApJS, 199, 26\\ Johnson, A et al 2014, MNRAS, 444, 3926 \\ Jones, DH et al 2009, MNRAS, 399, 683\\ Lavaux, G \& Tully, RB 2010, ApJ, 709, 483\\ Llinares, C 2008, MNRAS, 391, 1778\\ Llinares, C 2014, PhRvD, 89, 4023\\ Lynden-Bell, D et al 1988, ApJ, 326, 19\\ Magoulas, C et al 2012, MNRAS, 427, 245\\ Magoulas, C et al 2015, in preparation\\ Mathewson, D \& Ford, V 1994, ApJ, 434, L39\\ Milgrom, M 1983, ApJ, 270, 365\\ Nusser, A 2002, MNRAS, 331, 909\\ Nusser, A 2014, private communication\\ Peacock, J 1999, {\it Cosmological Physics}, Cambridge University Press\\ Peebles, P 1980, {\it The Large Scale Structure of the Universe}, Princeton University Press\\ Reyes, R et al 2010, Nature, 464, 256\\ Sanders, R 2015, Can J Phys, 93, 126\\ Springob, C et al 2014, MNRAS, 445, 2677\\ Suyu, S et al 2012, astro-ph 1202.4459\\ Swaters, R et al 2010, ApJ, 718, 380\\ Tully, RB 1989, ASSL, 151, 41 \\ Tully, RB et al 2014, Nature, 513, 71\\ Zhao, H et al 2013, A\&A, 557, L3 \end{document}
\section{Introduction}\label{S:intro} Cooperation between autonomous vehicles has shown promising advantages in terms of robustness, adaptivity, reconfigurability, and scalability. A prevalent technique for formation control is MPC, owing to its inherent ability to handle constraints and uncertainty. Dunbar et al \cite{Dunbar2012} considered distributed NMPC for synchronization of agents by broadcasting state error trajectories to the immediate neighbors. A generalized framework for distributed NMPC for cooperative control is proposed in \cite{Allgower2012}, where asymptotic stability is ensured by a terminal constraint set. A framework for quasi-parallel NMPC without the restriction of a terminal set, recently extended to the multi-agent case, is shown to be asymptotically stable \cite{Pannek2013}. Distributed NMPC was considered for a group of strongly connected agents receiving delayed input from their neighbors in \cite{Franco2008}-\cite{Chao2012}. The delayed information is projected over the prediction horizon using either a \textit{time-based forward forgetting-factor} or \textit{linear recurrence}, respectively. Collision avoidance (CA) within the MPC framework is well studied for linear systems \cite{Casavola2014}, but similar work in the nonlinear MPC setting is still rare. CA among multiple vehicles is achieved by adding a repelling potential field to the local NMPC cost function and transmitting the entire planned trajectory \cite{Yoon2007}. A priority strategy for CA in the NMPC framework, using neighbors' randomly delayed information, has been proposed in \cite{Chao2012}. Hierarchical multi-level control is considered in \cite{Chaloulos2010} by combining a potential field with linear MPC, such that only the first step of the trajectory is optimized and linear recursion is used to predict the trajectory over the remaining horizon. Stability proofs are unavailable in most of these CA works.
In this note, we address fleet control with collision avoidance of constrained autonomous vehicles subject to limited network throughput and propagation delays by employing distributed NMPC control. Each agent performs local optimization based on an estimate of planned trajectories received from neighboring agents. Since network throughput is assumed limited, exchanged trajectories are compressed using neural networks (NN) as a universal approximator. This property is crucial in our stability analysis, since the impact of estimation error on system dynamics is treated as a bounded non-vanishing (persistent) disturbance. Correction for propagation delays is achieved by time-stamping each communication packet \cite{Srinavas2004}. Collision avoidance is achieved by formulating a new spatially-filtered repelling potential field which is activated in a ``gain-scheduling'' type of approach, to avoid transforming the problem into mixed-integer nonlinear programming. We prove this distributed control strategy to be input-to-state practically stable (ISpS) for heterogeneous agents connected in strongly- or weakly-connected networks, robust to uncertainty in neighbors' planned trajectories. This algorithm is an improvement over \cite{Franco2008} and contributes to the literature with the following original results: (a) only an approximation of planned trajectories is transmitted; (b) an NN-based data compression algorithm is used to compress the planned trajectories; (c) collision avoidance is achieved using a spatially-filtered potential function, with rigorous stability proofs; (d) new ISpS and generalized small-gain conditions are derived to ensure stability of the proposed algorithm; (e) the stability results are extended even to weakly connected networks. \section{Preliminaries} Let the Euclidean ($L_2$) norm be denoted by $|\cdot|$ and the $L_\infty$ norm by $|\cdot|_{\infty}$.
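The idea of NN-based trajectory compression can be illustrated with a minimal stand-in: a one-hidden-layer $\tanh$ network whose hidden weights are drawn at random and fixed (shared by sender and receiver), so that only the output weights are fit by least squares and transmitted. This is a sketch of the concept, not the compression scheme of this note; the example trajectory and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N_HIDDEN = 12

# Fixed random hidden layer, agreed upon offline by sender and receiver.
W = rng.normal(scale=5.0, size=N_HIDDEN)
B = rng.normal(scale=2.0, size=N_HIDDEN)

def features(t):
    """Constant term plus tanh hidden-unit activations."""
    return np.hstack([np.ones((len(t), 1)), np.tanh(np.outer(t, W) + B)])

def compress(t, x):
    """Least-squares output weights: the 'packet' sent to neighbors."""
    c, *_ = np.linalg.lstsq(features(t), x, rcond=None)
    return c

def decompress(t, c):
    return features(t) @ c

t = np.linspace(0.0, 1.0, 50)              # prediction-horizon samples
x = np.sin(2.0 * np.pi * t) + 0.1 * t      # an example planned trajectory
c = compress(t, x)                         # 13 numbers instead of 50
err = float(np.max(np.abs(x - decompress(t, c))))   # bounded estimation error
```

The bounded reconstruction error `err` plays exactly the role described above: a non-vanishing disturbance that makes the closed loop ISpS rather than ISS.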
The identity function is denoted by $\mathcal{I}: \mathbb{R}\to\mathbb{R}$, the composition of two functions $\gamma_1$ and $\gamma_2$ by $\gamma_1\circ\gamma_2$, and the inverse of a function $\alpha$ by $\alpha^{-1}$. For a set $A\subseteq\mathbb{R}^n$, the point-to-set distance from $\zeta\in\mathbb{R}^n$ to $A$ is denoted by $d\left(\zeta,A\right)\triangleq\inf\left\{|\eta-\zeta| : \eta\in{A}\right\}$. The difference between two sets $A,B\subseteq\mathbb{R}^n$ is denoted by $A\backslash{B}\triangleq\left\{x:x\in{A},x\notin{B}\right\}$. The indicator function of a vector $x$ is defined as $\mathbf{1}_{x>0} = \{1\, \text{if } x \succ 0,\,0\, \text{otherwise}\}$, where $\succ$ denotes element-wise inequality. We also use class $\mathcal{K}$, $\mathcal{K}_\infty$ and $\mathcal{KL}$ comparison functions \cite{Sontag1996ISS}. Consider the discrete-time nonlinear system $x_{t+1}=f(x_t,w_t)$ with $f\left(0,0\right)={0}$, where $x_t\in\mathbb{R}^n$ and $w_t\in\mathbb{R}^r$ are the state and external input, respectively. If $x_t\in\Xi, \forall t > t_0$ whenever $x_{t_0}\in\Xi$ and the input is bounded, $w_t\in{W}$, then $\Xi$ is called a Robust Positively Invariant (RPI) set. Moreover, if $\Xi$ is compact, RPI and contains the origin as an interior point, the system $x_{t+1}=f(x_t,w_t)$ is said to be regionally Input-to-State practically Stable (ISpS) in $\Xi$ for $x_{0}\in{\Xi}$ and $w\in{W}$, if there exist a $\mathcal{KL}$-function $\beta$, a $\mathcal{K}$-function $\gamma$ and a constant $c>0$ such that \begin{eqnarray} \label{E:ISpS_def} \left| {x_t} \right| \le \beta \left( {\left| {{x_0}} \right|,t} \right) + \gamma \left( {\left | w \right |_{\infty}} \right)+c \end{eqnarray} If $c\equiv 0$, then the system is said to be regionally Input-to-State Stable (ISS) in $\Xi$ \cite{Sontag1996ISS}.
Function $V: \mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}_{\ge{0}}$ is an ISpS Lyapunov function in $\Xi$, if for suitable functions $\alpha_{1,2,3},\sigma_{3}\in\cal{K}_{\infty}$, $\sigma_{1,2}\in\cal{K}$ and constants $\bar{c},\bar{\bar{c}}> 0$, there exists a compact and RPI set $\Xi$ and another set $\Omega\subset{\Xi}$ with the origin as an interior point ($\Omega$ is also RPI), such that the following conditions hold, \begin{equation}\label{E:lyapISS_cond1} V( {x_t,w_t} ) \ge {\alpha _1}( {| x_t|} ),\,\,\,\,\,\,\,\,\,\,\forall \,x_t \in \Xi \end{equation} \begin{equation}\label{E:lyapISS_cond2} \begin{array}{l} V\left( {f\left( {{x_t},{w_t}} \right),{w_{t + 1}}} \right) - V\left( {{x_t},{w_t}} \right) \le \\ - {\alpha _2}\left( {\left| {{x_t}} \right|} \right) + {\sigma _1}\left( {\left| {{w_t}} \right|} \right) + {\sigma _2}\left( {\left| {{w_{t + 1}}} \right|} \right) + \bar c,{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} \forall {\kern 1pt} {x_t} \in \Xi \end{array} \end{equation} \begin{equation}\label{E:lyapISS_cond3} V\left( {x_t,w_t} \right) \le {\alpha _3}\left( {\left| x_t \right|} \right) + {\sigma _3}\left( {\left| w_t \right|} \right)+\bar{\bar{c}},\,\,\,\,\,\,\,\,\,\, \forall \,x_t \in \Omega \end{equation} The relation between ISpS Lyapunov functions and ISpS is shown in Theorem \ref{T:ISpS_gen}. ISS implies ISpS, but the converse is not true, since an ISS system with zero input, i.e. $w_k = 0, \forall k\ge 0$, is asymptotically stable to the origin, while a zero-input ISpS system is only asymptotically stable to a compact set (a ball of radius $c$) containing the origin. In this paper, the stability analysis will demonstrate that under the proposed control approach the closed-loop dynamics is ISpS, not ISS, due to the uncertainty resulting from data compression. Thus, in this study, $c$ in equation (\ref{E:ISpS_def}) is not zero but a function of the bounded NN estimation error.
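As a quick numerical illustration of conditions \eqref{E:lyapISS_cond1}--\eqref{E:lyapISS_cond3}, the sketch below checks a candidate $V$ for a scalar toy system; the system, $V$, and all comparison functions are hypothetical choices (not taken from this note), and with $\bar c=\bar{\bar c}=0$ the check reduces to the ISS special case:

```python
import numpy as np

f = lambda x, w: 0.5 * x + w                  # toy dynamics (assumption)
V = lambda x: x ** 2                          # candidate ISpS Lyapunov function

alpha1 = lambda s: s ** 2                     # lower-bound condition
alpha2 = lambda s: 0.5 * s ** 2               # decrease rate in the decrease condition
sigma1 = lambda s: 2.0 * s ** 2               # input gain in the decrease condition
alpha3 = lambda s: s ** 2                     # upper-bound condition

xs = np.linspace(-5.0, 5.0, 101)
ws = np.linspace(-1.0, 1.0, 41)
X, W = np.meshgrid(xs, ws)

ok_lower = bool(np.all(V(X) >= alpha1(np.abs(X)) - 1e-12))
dV = V(f(X, W)) - V(X)
ok_decay = bool(np.all(dV <= -alpha2(np.abs(X)) + sigma1(np.abs(W)) + 1e-12))
ok_upper = bool(np.all(V(X) <= alpha3(np.abs(X)) + 1e-12))
print(ok_lower, ok_decay, ok_upper)
```

The decrease condition holds here because $(0.5x+w)^2 - x^2 = -0.75x^2 + xw + w^2 \le -0.5x^2 + 2w^2$ by Young's inequality.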
Information exchange among networked vehicles is conveniently modeled by general mixed graphs (directed and undirected edges). An information graph is a set of nodes $A^i$ and edges connecting node pairs $E(A^i,A^j)$. Define the connectivity matrix as $\Gamma=[\bar\gamma_{ij}]$, where $\bar\gamma_{ij}>0$ if $(A^i,A^j)\in E$ and $0$ otherwise (by convention $\bar\gamma_{ii}=0$). The neighborhood of a node $A^i$ is $G^i:= \{A^j:\bar\gamma_{ij}>0\}\cup\{A^j:\bar\gamma_{ji}>0\}$. A network is said to be \emph{strongly connected} if there is a directed path from any node to any other node in the network. In this case, the connectivity gain matrix $\Gamma$ is irreducible. A network is said to be \emph{weakly connected} if there are at least two nodes for which a directed path connecting them does not exist. For weakly connected networks, the connectivity gain matrix $\Gamma$ can be reduced to upper block-triangular form \cite{Dashkovskiy2010}. Next, we formulate the distributed multi-agent problem. \subsection{Distributed Multi-Agent NMPC with Collision Avoidance}\label{Sec:DNMPC-CA} Consider a set of $N$ agents $A^i$, each with nonlinear discrete-time dynamics: \begin{eqnarray} \label{E:agentsys} x^i_{t+1}=f^i(x^i_t,u^i_t),\,\,\,\,\,\,\,\,\forall {t}\ge0,\,\,\ i=1,\dots,{N} \end{eqnarray} Local states $x^i_t$ and control inputs $u^i_t$ belong to the constrained sets $x_t^i \in {X^i} \subset {\mathbb{R}^{{n^i}}}$, $u_t^i \in {U^i} \subset {\mathbb{R}^{{m^i}}}$. Agents are decoupled from each other in open loop. On the other hand, closed-loop control takes into account the neighbors' states and therefore couples the dynamics. Let $\tilde{w}^i_t$ be the approximation of the trajectories $w^i_t=\{x^j_t\}, \forall j\in{G^i}$ of the neighbors of $A^i$, such that $w_t^i \in {W^i} \subset {\mathbb{R}^{{p^i}}}$.
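The strong/weak connectivity dichotomy can be checked mechanically from $\Gamma$; a minimal sketch (with hypothetical 3-agent topologies, not taken from this note) tests all-pairs directed reachability, which is equivalent to irreducibility of the connectivity matrix:

```python
import numpy as np

def is_strongly_connected(Gamma):
    """Check irreducibility of the connectivity matrix via directed reachability."""
    n = Gamma.shape[0]
    A = (Gamma > 0).astype(int)
    reach = A.copy()
    for _ in range(n - 1):                    # paths of length up to n-1 suffice
        reach = ((reach @ A) > 0).astype(int) | reach
    return bool(np.all((reach + np.eye(n, dtype=int)) > 0))

# Hypothetical 3-agent examples (gain values are illustrative).
ring = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)   # directed ring: strongly connected
chain = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)  # leader->follower chain: weakly connected
print(is_strongly_connected(ring), is_strongly_connected(chain))
```

For the weakly connected chain, a simultaneous row/column permutation puts $\Gamma$ in upper block-triangular form, as noted in the text.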
For each agent $A^i$, the general finite-horizon cost function is defined as: \begin{equation} \label{E:cost1} \begin{array}{l} J_t^i = \sum\limits_{l = t}^{t + N_p^i - 1} {\left[ {{h^i}\left( {x_l^i,u_l^i} \right) + {q^i}\left( {x_l^i,\tilde w_l^i} \right)} \right]}+h_f^i({x_{t + N_p^i}^i}) \end{array} \end{equation} where $N_p^i$ and $N_c^i$ are the prediction and control horizons, respectively. The distributed cost \eqref{E:cost1} consists of the local transition cost $h^i$, the local terminal cost $h_f^i$ and the interaction cost $q^i$; see \cite{Franco2008} for details. We define an agent $A^i$ to be on a collision course with at least one other agent if $\sum\limits_{j \in {G^i}} {{{\bf{1}}_{(R_{\min}^i - d^{ij}_k) > 0,\forall t \le k \le (t + N_p^i)}}} >0$, where $R_{\min}^i$ is the safety-zone radius of an agent and $d^{ij}_k$ is the Euclidean distance between agents $A^i$ and $A^j$. The repelling potential can be formulated as: \begin{eqnarray} \label{E:colav_pot} \Phi _t^i = \sum\limits_{j \in {G^i}} {\frac{{\bar \lambda R_{\min}^i{{ \mathbf{1}}_{(R_{\min}^j - d^{ij}_k) > 0,\forall t \le k \le (t + N_p^i)}}}}{{\sum\limits_{k = t}^{t + N_P^i} {\lambda (d^{ij}_k) d^{ij}_k} }}} \end{eqnarray} where $0<\lambda_{\min}\le\lambda(d^{ij}_k)\le\lambda_{\max}$ are positive filter weights, strictly decreasing in their argument, and $\bar{ \lambda}\triangleq\sum\limits_{k = t}^{t + N_P^i} {\lambda (d^{ij}_k)} $. If at any instant $t \le k \le (t + N_p^i)$ in the prediction horizon an agent $A^i$ has a feasible trajectory which falls within $R_{\min}^j$ of agent $A^j$, the repelling potential \eqref{E:colav_pot} becomes non-zero.
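As a concrete illustration, the sketch below evaluates the single-neighbor repelling potential \eqref{E:colav_pot} and contrasts the spatially filtered weighted-average distance with a simple average; the exponential weight $\lambda(d)=e^{-d}$ and the distance trajectories are hypothetical choices, not prescribed by the text:

```python
import numpy as np

def repelling_potential(d_traj, R_min, lam):
    """Single-neighbor potential: zero unless the predicted distance
    enters the safety zone somewhere in the horizon (the indicator)."""
    d = np.asarray(d_traj, dtype=float)
    if not np.any(d < R_min):                 # indicator function inactive
        return 0.0
    w = lam(d)                                # spatial filter weights lambda(d)
    return w.sum() * R_min / np.sum(w * d)    # = R_min / weighted-average distance

lam_exp = lambda d: np.exp(-d)                # strictly decreasing weight (assumed form)
lam_uni = lambda d: np.ones_like(d)           # simple average, for comparison

d_safe = np.array([3.0, 3.1, 3.2, 3.3])       # never inside R_min = 1
d_late = np.array([5.0, 4.5, 4.0, 3.5, 0.5])  # enters the zone only late in the horizon

phi_safe = repelling_potential(d_safe, 1.0, lam_exp)
phi_spatial = repelling_potential(d_late, 1.0, lam_exp)
phi_uniform = repelling_potential(d_late, 1.0, lam_uni)
print(phi_safe, phi_spatial > phi_uniform)
```

With the strictly decreasing weight, the late incursion dominates the average, so the potential is several times larger than under uniform weighting, matching the motivation for the spatial filter.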
To account for a collision course, cost \eqref{E:cost1} is modified as \begin{equation}\label{E:colav_cost} \acute J_t^i = J_t^i(1 + \Phi _t^i) \end{equation} The strength of the potential field \eqref{E:colav_pot} is inversely proportional to the weighted average distance between the two agents, ${\bar{d}^{ij}_t=\sum\limits_{k = t}^{t + N_P^i} {\lambda (d^{ij}_k)d^{ij}_k} }/\bar{\lambda}$. The weights $\lambda$, strictly decreasing with $d^{ij}_k$, ensure that the smallest separation between two agents gets the highest weight. On the other hand, taking a simple average (i.e. $\lambda\equiv 1$) or a time-based forgetting factor ($\lambda$ strictly decreasing with the time index $k$) results in poor collision-avoidance performance, since trajectories which enter zone $R_{\min}$ very late (i.e. $R_{\min}^i - d^{ij}_k > 0, k \to t + N_p^i$) produce a small repelling potential \eqref{E:colav_pot} and hence are not penalized early on. Such a strategy results in agents getting very close before they start repelling each other to avoid collision. With cost \eqref{E:colav_cost}, by contrast, trajectories are immediately penalized upon falling within zone $R^j_{\min}$ and are therefore avoided in the NMPC optimization. The indicator function in \eqref{E:colav_pot} acts as a ``gain-scheduled" binary (0-1) variable depending on whether a feasible trajectory falls within $R_{\min}$. We define collision avoidance to be successful if the weighted average distance between the agents on a collision course increases, i.e. \begin{equation}\label{E:colav_exp_cond} \sum\limits_{k = t}^{t + N_P^i} {\lambda (d^{ij}_k)d^{ij}_k} < \sum\limits_{k = t + 1}^{t + N_P^i + 1} {\lambda (d^{ij}_k)d^{ij}_k} \end{equation} The control sequence $u^i_{t,t+N_p^i}$ consists of $u^i_{t,t+N_c^i-1}$ and $u^i_{t+N_c^i,t+N_p^i-1}$.
The latter part is generated by the local \emph{auxiliary} control law $u_l^i=k^i_f(x_l^i)$ for ${l\ge{t+N_c^i}}$, while the former is the distributed optimal control $u^i_{t,t+N_c^i-1}$, which is the solution of Problem \ref{P:RHOCP}. A suboptimal $u^i_{t,t+N_c^i-1}$ satisfying all constraints is called a \emph{feasible} control. \newtheorem{pbm}{Problem} \begin{pbm}\label{P:RHOCP} At every instant $t\ge{0}$ for each agent, given horizons $N_p^i$ and $N_c^i$ and auxiliary control $k_f^i$, find the optimal control sequence ${u}^{i,\star}_{t,t+N_c^i-1}$ which minimizes the distributed finite-horizon cost \eqref{E:cost1} (or \eqref{E:colav_cost} for collision avoidance), satisfies the state and input constraints and the system dynamics \eqref{E:agentsys}, and is such that the terminal state reaches a terminal set, i.e. $x^i_{t+N_p^i}\in{X^i_f}$. In the receding-horizon strategy, only the first element of ${u}^{i,\star}_{t,t+N_c^i-1}$ is implemented at each instant, such that the closed-loop dynamics becomes \begin{equation}\label{E:sys_cl} x^i_{t+1}=f^i(x^i_t,u^{i,\star}(x^i_t,w^i_t))=\tilde{f}^i(x^i_t,w^i_t) \end{equation} \end{pbm} \subsection{Data Compression}\label{Sec:NN_compression} For cooperation, agents transmit their planned state trajectories, $x_{t,t+N_p^i}^i \in {\mathbb{R}^{{n^i} \times N_p^i}}$, but reception occurs after some delay $\Delta^{ji}$. To reduce the packet size, the trajectory containing $n^i\times{N^i_p}$ floating-point numbers is compressed by approximating it with a neural network $\mathcal{N}^i$ of $q^i$ weights and biases, giving a compression factor of $1-(q^i+\mathrm{overhead}\;\mathrm{size})/({n^i \times N^i_p})$. The overhead accounts for the agent identity $i$, time-stamp ($T_s^i$), sampling time $T^i$, etc. The leader also communicates the formation geometry and way-points to the followers. It is assumed that there exists a mechanism for synchronizing clocks, which allows the delay $\Delta^{ji}$ to be estimated.
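The receding-horizon strategy of Problem \ref{P:RHOCP} can be sketched for a single agent as follows; the scalar dynamics, costs, auxiliary law $k_f$ and the random-shooting search standing in for a real NLP solver are all illustrative assumptions, not the method of this note:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, u: x + 0.1 * u - 0.05 * x ** 3         # toy scalar dynamics
Np, Nc, u_max = 8, 4, 2.0
k_f = lambda x: np.clip(-2.0 * x, -u_max, u_max)     # hypothetical auxiliary terminal law

def rollout_cost(u_free, x0):
    x, J = x0, 0.0
    for l in range(Np):
        u = u_free[l] if l < Nc else k_f(x)          # tail generated by k_f
        J += x ** 2 + 0.1 * u ** 2                   # stage cost h (no interaction cost q)
        x = f(x, u)
    return J + 10.0 * x ** 2                         # terminal cost h_f

x = 1.5
for t in range(20):                                  # closed loop
    cands = rng.uniform(-u_max, u_max, size=(256, Nc))
    best = min(cands, key=lambda u: rollout_cost(u, x))
    x = f(x, best[0])                                # apply only the first element
print(abs(x) < 0.5)
```

Random shooting is only a crude surrogate for the constrained optimization; the receding-horizon structure (optimize, apply first input, shift) is the point being illustrated.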
The NN at $A^i$ is trained using the state trajectory as output and the $N^i_p$ discrete time instants as input. Using the sampling rate $T^j$ and prediction horizon $N^j_p$ at $A^j$, the re-sampled trajectory $\tilde{w}_t^j \in {W^j} \subset {\mathbb{R}^{{n^j} \times N_p^j}}$ is generated from the received neural network $\mathcal{N}^i$. If the horizon is sufficiently long, states can be extrapolated with bounded error. If a packet is delayed by more than a threshold $\bar \Delta$, the packet is deemed lost. Any smooth function $w(t)$ can be approximated arbitrarily closely on a compact set by a NN with appropriate weights and activation functions \cite{Jagannathan2006}. Let $w(\tau)$ be a set of smooth functions; then $\tilde{w}(\tau)=w(\tau)+\xi$, where $\tilde{w}(\tau)$ is the NN approximation of $w(\tau)$, $\tau\triangleq{\mathrm{col}(t,t,\dots,t)}$ stacks the time vector $t$ $n^i$ times, and $\xi$ is the NN approximation error, which is inversely proportional to the hidden-layer size $H_L$. The prediction error $\xi^i_t$ also depends on the delay $\Delta_t^{ij}$ in the information received from $A^j$, due to extrapolation of the trajectory tail ($\tilde w^{i,t+N_p^i+\Delta^{ij}}_{t+N_p^i}$). If the error (or delay) is greater than an upper bound, i.e. $\xi^i_t>\bar \xi$, a feasible control for avoiding collision may not exist: the agents may get too close because of the error $\xi^i_t$, leaving not enough time to maneuver for avoiding collision. Consequently, we assume an upper bound on the permissible delay, $\Delta_t^{ij}\le\bar\Delta$, corresponding to the worst-case scenario of two agents on a direct collision course at maximum permissible speed and with minimum separation between them, i.e. $\bar \Delta \triangleq {R_{\min }}/{v_{\max }}$. With this conservative (relaxable) bound on $\Delta_t^{ij}$, there is always enough time to execute collision-avoidance maneuvers.
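A minimal sketch of the compression step follows, assuming a one-hidden-layer network whose hidden layer is drawn at random and whose output weights are fitted by least squares (a simplification of actual NN training), applied to a synthetic stand-in trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)
Np = 40
t = np.linspace(0.0, 1.0, Np)                   # N_p discrete instants (input)
x_plan = np.sin(np.pi * t)                      # synthetic planned trajectory (output)

H = 8                                           # hidden-layer size H_L
Wh, bh = 5.0 * rng.normal(size=H), rng.normal(size=H)
Phi = np.tanh(np.outer(t, Wh) + bh)             # hidden activations, Np x H
w_out, *_ = np.linalg.lstsq(Phi, x_plan, rcond=None)

x_hat = Phi @ w_out                             # receiver-side reconstruction
xi = np.max(np.abs(x_hat - x_plan))             # approximation error (plays the role of xi)
q = 3 * H                                       # hidden weights + biases + output weights
print(round(1 - q / Np, 2))                     # compression factor before overhead
```

Only the $q$ parameters (plus overhead) are transmitted; increasing $H$ shrinks the error $\xi$ at the cost of a lower compression factor, matching the trade-off described above.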
\alglanguage{pseudocode} \begin{algorithm} \caption{DNMPC Algorithm with Collision Avoidance}\label{Alg:dnmpc} \begin{algorithmic}[1] \State {\bf Given} {$A^1$,$A^i\gets x_0^i, d^{h^i},d^{q^i}, g^i $}\Comment{$i=1\triangleq$ Leader, $t=0$}\label{Al:lead_sel_dnmpc} \State {\bf Solve} Problem \ref{Pmb:Terminal_Set_Ctrl_Convex} offline for $Q_f^i$ and $K_f^i$\label{Al:convex_term_ctrl_set} \Procedure{Collision Free Distributed NMPC}{} \State \textbf{Design} spatially filtered potential \eqref{E:colav_weight_condition} \State {\bf Solve} Problem \ref{P:RHOCP} at $A^i$ for $u^{i,\star}_{t,t+N_c^i-1}$ \label{Al:optimize_dnmpc} \State {\bf Train NN} Train neural network for $x^{i,\star}_{t,t+N_p^i}$ \label{Al:train_NN_dnmpc} \State {\bf Implement} first element/block of $u^{i,\star}_{t,t+N_c^i-1}$ \label{Al:implement_dnmpc} \State {\bf Transmit/Receive} data packets \label{Al:send} \State{\bf Estimate} time delay $\Delta^{ij}$ \State {\bf Reconstruct} $\tilde{w}^i_{t,t+N_p^i}$ with received NN \Statex {\bf Increment} time by one sample \Comment{$t^i=t^i+T_s^i$} \EndProcedure\Comment{End CF-DNMPC Alg.} \end{algorithmic} \end{algorithm} \section{Stability Analysis }\label{Sec:stab_analysis} We first state an important new result in regional input-to-state practical stability. This general result will form the cornerstone of the later development.
\newtheorem{thm}{Theorem} \begin{thm}\label{T:ISpS_gen} If system $x_{t+1}=f(x_t,w_t)$ admits an ISpS-Lyapunov function in $\Xi$, then it is regionally ISpS and satisfies condition (\ref{E:ISpS_def}), with $\beta (r,s) \triangleq {\alpha _1}^{ - 1} (3\hat\beta (3{\alpha _3}\left( r \right),s))$, $\gamma (s)\triangleq {\alpha _1}^{ - 1}(3(\hat \gamma (3\sum\limits_{i = 1}^3 {{\sigma _i}(s)} ) + \hat \beta (3{\sigma _3}\left( s \right),0)))$ and $c \triangleq {\alpha _1}^{ - 1}(3(\hat \beta (3(\bar{\bar c}+d),0) + {\alpha _1}^{ - 1}\hat \gamma (\mu (3\bar {\bar c})) + {\alpha _1}^{ - 1}\hat \gamma (3\bar c))$, where $\mu$, $\hat{\gamma}\in\mathcal{K}_\infty$ while $\hat{\beta}\in\mathcal{KL}$ (the constant $d$ is defined in the proof). \end{thm} \begin{proof} Conditions \eqref{E:lyapISS_cond1}--\eqref{E:lyapISS_cond3} hold for all ${w_{t,t+1}} \in W$. Since $\Omega$ is RPI, for $x_1\in\Xi\backslash\Omega$ and $x_2\in\Omega$ there exists $d>0$ such that $ V(x_1,w_1) \le V(x_2,w_2) + d$ for $w_{1,2}\in W$. Letting $\bar{\alpha}_3(s)\triangleq\alpha _3(s) + {\sigma _3}(s)+s$, $\underline\alpha_2(s) \triangleq \min ({\alpha_2}(s/3) , {\sigma_3}(s/3) , \mu (s/3))$, $\alpha_4(s)\triangleq\underline\alpha_2(s)\circ\bar\alpha^{-1}_3(s)$, $\hat{w}\triangleq\max(|w_t |_\infty,|w_{t+1}|_\infty)$, $\omega(\hat{w},\bar c,\bar{\bar{c}})\triangleq \sum\limits_{i = 1}^3 {{\sigma _i}(\hat w)}+ \mu (\bar {\bar c}) + \bar c $ and selecting $\rho\in\mathcal{K}_\infty$ such that $(\mathcal{I}-\rho)\in\mathcal{K}_\infty$, we can define a compact set $D\subset\Omega\subset\Xi$ containing the origin: $D\triangleq\{ x|\,\,d(x,\partial\Omega ) > {d_1},\,\,V({x_t},{w_t}) \le \hat{\gamma}(\omega)\}$, where $\hat{\gamma}\triangleq \alpha_4^{-1}\circ\rho^{-1}$. With these definitions and using steps similar to equations (14)-(17) in the proof of Theorem 4.1 of \cite{Franco2008}, we can show that $D$ is RPI. Moreover, $D$ can also be shown to be asymptotically attractive for states starting in $\Xi \backslash D$, using arguments similar to equations (18)-(23) of \cite{Franco2008}.
Hence, a state $x_t$ starting in $\Xi$ will enter $\Omega\backslash D$ in finite time, and from there it will enter $D$ in finite time as well, where it shall remain since $D$ is RPI. Using a standard comparison lemma \cite{Jiang2001ISS}, $\exists\hat\beta(r,s)\in\mathcal{KL}$ such that $V\left( {{x_t},{w_t}} \right) \le \max (\hat{\beta} (V\left( {{x_0},{w_0}} \right),t),\hat \gamma (\omega (|{w_t}{|_\infty },\bar c,\bar {\bar c}))),\,\forall x_t \in \Xi, w_t\in W$. Using a property of $\mathcal{K}$ functions, $\alpha ({r_1} + {r_2} + {r_3}) \le \alpha (3\max ({r_1},{r_2},{r_3}))\le \alpha (3{r_1})+\alpha (3{r_2})+\alpha(3{r_3})$, we can show that the system $x_{t+1}=f(x_t,w_t)$ is regionally ISpS in $\Xi, \forall x_t\in\Xi, w_t\in W$. \end{proof} We will now particularize this result for Algorithm \ref{Alg:dnmpc}. Stability is analyzed in two stages: first, individual agents are shown to be ISpS, robust to communication delays and trajectory approximation error, in a subset of $X^i$; this is followed by a generalized small-gain condition for team stability. \subsection{Stability of Individual Agents without Collision Avoidance} \label{Sec:Stab} Asymptotic stability (ISS) of MPC schemes can be shown in the case of additive vanishing disturbances, but only ultimate boundedness (or ISpS) can be guaranteed in the case of non-vanishing (not decaying with the state) uncertainties \cite{Limon2006minmaxMPC}. In the proposed approach, the uncertainty in the trajectory approximation $\xi$ is non-vanishing, so one can only guarantee ISpS. We first consider the stability of an individual agent $A^i$ with respect to the information received from other agents, by exploiting Theorem \ref{T:ISpS_gen}. At this stage the interconnections are ignored, and the information from the neighbors is treated as an external input. We also assume at this stage that the agents generate conflict-free trajectories.
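The $\mathcal{K}$-function splitting property invoked in the proof can be spot-checked numerically; it holds for any nonnegative nondecreasing $\alpha$, since $r_1+r_2+r_3 \le 3\max_i r_i$ and $\alpha(3\max_i r_i)$ is one of the (nonnegative) summands on the right:

```python
import numpy as np

# Spot-check: alpha(r1+r2+r3) <= alpha(3*max(ri)) <= sum_i alpha(3*ri)
# for several K-functions on random nonnegative triples.
rng = np.random.default_rng(2)
all_ok = True
for alpha in (lambda s: s ** 2, np.sqrt, np.log1p):
    r = rng.uniform(0.0, 10.0, size=(1000, 3))
    lhs = alpha(r.sum(axis=1))
    mid = alpha(3.0 * r.max(axis=1))
    rhs = alpha(3.0 * r).sum(axis=1)
    all_ok &= bool(np.all(lhs <= mid + 1e-12) and np.all(mid <= rhs + 1e-12))
print(all_ok)
```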
\newtheorem{assump}{Assumption} \begin{thm}\label{T:ISpS_spec} Let the terminal set $X_f^i\subset X^i$ be RPI, and let $k_f^i(x_t^i), f^i(x_t^i,k^i_f(x_t^i)), w^i_{t+1}, h^i(x_t^i,u_t^i), q^i(x_t^i,w_t^i)$, $h_f^i(x_t^i,u_t^i)$ be locally Lipschitz with respect to $x_t^i,u_t^i$ and $w_t^i$ in $X^i\times U^i\times W^i$, with Lipschitz constants $L^i_{k_f}, L^i_f, L_{gw}, L_{hx}^i, L_{hu}^i, L_{qx}^i, L_{qw}^i$ and $L_{hf}^i$. Moreover, let there exist nonlinear bounds $\alpha_{1,f}, \alpha_{2,f}, \underline{r}^i\in\cal{K}_\infty$ such that $\underline{r}^i(|x_t^i|)\le h^i(x_t^i,u_t^i)$ and $\alpha_{1,f}(|x_t^i|)\le{h_f^i(x_t^i)}\le{\alpha_{2,f}(|x_t^i|)}, \forall x_t^i\in{X^i}$. Now, if the neural-network trajectory approximation error is bounded, $|\tilde{w}^i_t|\le{|w^i_t|+\hat{\xi}^i}$, and the following holds for $x_t^i\in X_f^i$ and $w_t^i\in W^i$ \begin{equation}\label{E:terminal_constraint_condition} h_f^i(f^i(x^i,k_f^i(x^i)))-h_f^i(x^i) \le-h^i(x^i,k_f^i(x^i))-q^i(x^i,\tilde{w}^i)+{\psi^i(|\tilde{w}^i|)} \end{equation} for some $\psi^i\in\cal{K}$, then agent $A^i$ under the NMPC optimal $u^{i,\star}$ and terminal $k_f^i(x^i)$ control laws admits the ISpS Lyapunov function $V(x^i_t,{w}^i_t,u^i_t)=J^i_t(x^{i,\star}_t,{w}^i_t,u^{i,\star}_{t,t+N_p^i})$ and is therefore ISpS with robust feasible set $X^i_{MPC}\subseteq X^i$, the set of initial states for which Problem \ref{P:RHOCP} is feasible. \end{thm} \begin{proof} We need to prove that $V(x^i_t,u^i_t,{w}^i_t)=J^i_t(x^{i,\star}_t,u^{i,\star}_{t,t+N_p^i},{w}^i_t)$ is an ISpS Lyapunov function. The lower bound on $V(x^i_t,{w}^i_t)$ is immediate: $ \underline{r}^i(|x_t^i|)=\alpha^i_1 (|x_t^i|)\le V(x^i_t,{w}^i_t),\,\,\,\ \forall x^i_t\in X^i, w^i_t\in W^i$. The local control $\tilde{u}^i_{t,t+N_c^i-1}={[k_f^i(x_t^i),\dots,k_f^i(x^i_{t+N_c^i-1})]}^T$ is feasible but suboptimal $\forall x^i\in X_f^i$, i.e. $V(x_t^i,\tilde w_t^i) \le J_t^i(x_t^i,\tilde w_t^i,\tilde u_{t,t + N_p^i}^i)$.
Using the assumptions in Theorem \ref{T:ISpS_spec}, we get $V(x_t^i,w_t^i) \le\alpha _3^i(| {x_t^i}|) + \sigma _3^i(| {w_t^i}|) + \bar {\bar{c}}^i$, where $\alpha_3^i(s)={\alpha^i_{2,f}}(L{{_f^i}^{N_p^i}} s)+\bar{b}^i$, $\sigma_3^i(s)=\bar{\bar{b}}^i s$ and $\bar{\bar{c}}^i={N_p^i(L^i_{qw})}|\hat{\xi}^i|$. The constants are $\bar{b}^i={(L_h^i + L_{hu}^iL_{{k_f}}^i + L_q^i)(L{{_f^i}^{N_p^i}} - 1)}(L_f^i - 1)^{-1}$ and $\bar{\bar{b}}^i=L_{qw}^i({L_{gw}^i}^{N_p^i}-1)(L_{gw}^i - 1)^{-1}$. Clearly, $\tilde{\tilde{u}}^i_{t+1,t+N_c^i}={[u^{i,\star}_{t+1,t+N_c^i-1},k_f^i(x^i_{t+N_c^i})]}^T$ is also a feasible control for $x^i\in X_{MPC}^i$, which gives $V(x^i_{t+1},w^i_{t+1})\le\sum\limits_{l = t + 1}^{t + N_p^i} {\{ h(x_l^i,\tilde{\tilde{ u}}_l^i) + q({x_l},\tilde {\tilde{w}}_l^i)\} } + h_f^i({f^i}(x_{t + N_p^i}^i,k_f^i(x_{t + N_p^i}^i))$, where $\tilde {\tilde{w}}_l$ is the NN approximation of $w^i_{t+1}$ and $\tilde{w}_l$ is the approximation of $w^i_{t}$; hence $\tilde {\tilde{w}}_l\ne\tilde{w}_l$. Canceling common terms, we get $V(x_{t + 1}^i,w_{t + 1}^i) - V(x_t^i,w_t^i) \le-\alpha _2^i(|x_t^i|)+\sigma _1^i(|w_t^i|)+\sigma _2^i(|w_{t + 1}^i|)+\bar{c}^i$, where $\sigma_1(s)=\sigma_2(s)+\psi^i(s)$, $\sigma_2(s)=\underline{b} s$, $\bar{c}=\sigma_2(\hat{\xi})$ and $\underline{b}=L_{qw}^i({L_{gw}^i}^{N_p^i-1}-1)(L_{gw}^i - 1)^{-1}$. Hence, from Theorem \ref{T:ISpS_gen}, the system \eqref{E:agentsys} under NMPC is ISpS. \end{proof} A method for the terminal control law design (by solving \eqref{E:terminal_constraint_condition}) is given next: Let $h^i_l={x^i_l}^T{Q^i}{x^i_l}+{u^i_l}^T{R^i}{u^i_l}$, $q^i_l\le{x^i_l}^T{S^i}{x^i_l}+\psi{(|\tilde{w}^i|)}$ and $h^i_f={x^i_l}^T{Q_f^i}{x^i_l}$, where $Q^i$, $R^i$, $Q_f^i$ and ${S}^i$ are positive definite matrices for $i=1,\dots, N$ agents. Let $k_f^i(x^i_l)=K^ix^i_l$ exist such that $A_{c}^i=A_o^i+B_o^iK^i$ is stable, where $A_o^i$ and $B_o^i$ are the Jacobians of system \eqref{E:agentsys}.
The terminal set is defined as $X_f^i\triangleq \{x^i : (x^i)^T Q_f^i x^i\le a\}$ for some $a\in \mathbb{R}_{\ge 0}$ which satisfies the constraints $x^i \in {X^i} $ and $u^i=K^i x^i \in {U^i}$. Let $Q_f^i$ be the solution of the following convex problem. \begin{pbm}\label{Pmb:Terminal_Set_Ctrl_Convex} \begin{equation} \mathop {\min }\limits_{{Q_f^i},{K^i}} \,\left[ { - \log \left( {\det \left( a Q_f^i\right)} \right)} \right] \end{equation} subject to the Lyapunov inequality $A_c^{{i^T}}{Q_f^i}A_c^i - {Q_f^i} + {Q^i} + K^{{i^T}}{R^i}K^i + {(N-1) S^i}\preceq 0$ and $Q_f^i\succ 0$ \end{pbm} \subsection{Stability of Individual Agents with Collision Avoidance}\label{Sec:Collav} The results of the previous section are now extended to prove stability of the agents under the collision avoidance scheme described in Section \ref{Sec:DNMPC-CA}. Let $V(x^i_t,{w}^i_t)=J^i_t(x^{i,\star}_t,{w}^i_t)$ be the local ISpS Lyapunov function for $A^i$ without collision avoidance. Let $x^{i,\star}_{t,t+N_p^i}$ be the optimal solution for the cost \eqref{E:cost1} and $\acute x^{i,\star}_{t,t+N_p^i}$ the optimal solution for the modified cost \eqref{E:colav_cost}. We will prove that $\acute V(\acute x^i_t,{w}^i_t)=J^i_t(\acute x^{i,\star}_t,{w}^i_t)$ is an ISpS Lyapunov function. Note that $d^{ij}_k\ne 0$ for at least one instant $t\le k \le t+N_p^i$, since otherwise the current positions as well as the planned optimal trajectories of the two agents would coincide exactly, which is impossible. We assume that $\underline \kappa^i |\acute x^{i,\star}| \le |x^{i,\star}|\le \overline \kappa^i |\acute x^{i,\star}|$ for some constants $\underline \kappa^i, \overline \kappa^i\ge 0$, since both $ x^{i,\star}$ and $\acute x^{i,\star}$ are finite. This leads to bounds on the potential function, i.e. ${\underline \Phi ^i} \le \Phi _t^i \le {\overline \Phi ^i}$ for some constants ${\underline \Phi ^i}, {\overline \Phi ^i}\ge 0$.
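The Lyapunov inequality of Problem \ref{Pmb:Terminal_Set_Ctrl_Convex} can be verified numerically once a gain $K$ is fixed. The sketch below uses hypothetical system matrices and builds a candidate $Q_f$ from the discrete Lyapunov series (standing in for an LMI solver), then checks the inequality:

```python
import numpy as np

# Hypothetical linearization and stabilizing gain (toy data, 2 states, 1 input).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-10.0, -5.0]])                   # chosen so A_c is Schur stable
Ac = A + B @ K

Q, R, S, N = np.eye(2), np.eye(1), 0.1 * np.eye(2), 4
W = Q + K.T @ R @ K + (N - 1) * S               # accumulated stage weight

# Q_f = sum_k (Ac^T)^k W Ac^k solves Ac^T Q_f Ac - Q_f + W = 0.
Qf, M = np.zeros((2, 2)), np.eye(2)
for _ in range(2000):
    Qf += M.T @ W @ M
    M = Ac @ M

lhs = Ac.T @ Qf @ Ac - Qf + W                   # Lyapunov inequality residual
print(np.max(np.linalg.eigvalsh((lhs + lhs.T) / 2)) < 1e-6)
```

This only certifies feasibility for a given $K$; the determinant-maximization objective of Problem \ref{Pmb:Terminal_Set_Ctrl_Convex} additionally enlarges the terminal set.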
\begin{thm}\label{T:Collav} For an agent on a collision course, the optimal trajectory $\acute x^{i,\star}_{t,t+N_p^i}$ of the modified cost \eqref{E:colav_cost} not only guarantees collision avoidance with other agents in the sense of \eqref{E:colav_exp_cond}, but also maintains input-to-state practical stability, if the repulsive spatial-filter weights $\lambda(d^{ij}_{k|t})$ are chosen at each instant such that \begin{equation}\label{E:colav_weight_condition} \frac{{{\lambda^i_{\max ,t}}}}{{{\lambda^i_{\min ,t}}}} < \frac{{{\underline{r}^i}(|{x_t}|){{\{ N_p^iR_{\min }^i + N_p^i(N_p^i - 1){v_{\max }}\} }^{ - 1}}}}{{(N_p^i - 1)(L_{hx}^i + L_{qx}^i) + {L_{hf}}}}\triangleq\bar{a}_t \end{equation} \end{thm} \begin{proof} The proof consists of two parts. We first show that the negative gradient of the modified cost \eqref{E:colav_cost} lies in the direction of increasing weighted average distance $\bar{d}^{ij}_t$ between the agents on a collision course. Hence, the optimal trajectory $\acute x^{i,\star}_{t,t+N_p^i}$ reaches the terminal set while avoiding collision in the sense of \eqref{E:colav_exp_cond}. Next, we show that the optimal trajectory in that direction is also ISpS stable. From \eqref{E:colav_cost}, we see that $\frac{{\partial\acute J_t^i}}{{\partial \bar d_t^{ij}}} = \frac{{\partial J_t^i}}{{\partial \bar d_t^{ij}}}(1 + \Phi _t^i) + J_t^i\frac{{\partial \Phi _t^i}}{{\partial {{\bar d}^{ij}_t}}}$. Since $\partial \Phi_t^i/\partial\bar{d}_t^{ij}=-\Phi_t^i/\bar{d}^{ij}_t<0$ and $J_t^i, \Phi_t^i>0$, in order to have $\partial\acute J_t^i/\partial \bar d_t^{ij}<0$ it suffices that $\frac{{\partial J_t^i}}{{\partial \bar d_t^{ij}}} < \frac{{\Phi _t^i}}{{1 + \Phi _t^i}}\frac{{J_t^i}}{{\bar d_t^{ij}}} < \frac{{J_t^i}}{{\bar d_t^{ij}}} $. Since $J_t^i,\bar d_t^{ij}>0$, this condition is satisfied if $\max \left| {\frac{{\partial J_t^i}}{{\partial \bar d_t^{ij}}}} \right| < \frac{{\min (J_t^i)}}{{\max (\bar d_t^{ij})}}$.
For the RHS, note that by the chain rule and the triangle inequality, $\left| {\frac{{\partial J_t^i}}{{\partial \bar d_t^{ij}}}} \right| \le \sum\limits_{k = t}^{t + N_p^i} {\left| {\frac{{\partial J_t^i}}{{\partial x_k^i}}} \right|} \left| {\frac{{\partial x_k^i}}{{\partial d_k^{ij}}}} \right|\left| {\frac{{\partial d_k^{ij}}}{{\partial \bar d_t^{ij}}}} \right|$. With a slight abuse of notation we can write $d^{ij}_k=|x^i_k-w^i_k|$. For a given neighbor trajectory $w^i_k=x^j_k, \forall j\in G^i$, we have $\partial d^{ij}_k/\partial x^i_k=(x^i_k-w^i_k)/d^{ij}_k$, such that $|\partial d^{ij}_k/\partial x^i_k|=1$. Similarly, $\partial\bar{d}^{ij}_t/\partial d^{ij}_k=\lambda^i_k$, which results in $\left| {\frac{{\partial J_t^i}}{{\partial \bar d_t^{ij}}}} \right| \le \sum\limits_{k = t}^{t + N_p^i} {\frac{1}{{\lambda _k^i}}\left| {\frac{{\partial J_t^i}}{{\partial x_k^i}}} \right| < } \frac{1}{{\lambda _{\min ,t}^i}}\sum\limits_{k = t}^{t + N_p^i} {\left| {\frac{{\partial J_t^i}}{{\partial x_k^i}}} \right|}$. Now, from \eqref{E:cost1}, we get \begin{eqnarray}\label{E:max_cost_grad} \begin{array}{l} \max \left| {\frac{{\partial J_t^i}}{{\partial \bar d_t^{ij}}}} \right| < \frac{{(N_p^i - 1)(L_{hx}^i + L_{qx}^i) + L_{hf}^i}}{{\lambda _{\min ,t}^i}} \end{array} \end{eqnarray} The maximum of $\bar{d}^{ij}_t$ occurs when the agents on a collision course are at the minimum distance $R^i_{\min}$ and then move away from each other at $v_{\max}$, i.e. $\max (\bar d_t^{ij}) = \sum\limits_{k = t}^{t + N_p^i} \lambda_k^i{(R_{\min }^i + 2(k - t){v_{\max }})} <\lambda^i_{\max,t}(N_p^iR_{\min }^i + N_p^i(N_p^i - 1){v_{\max }})$. Also, as noted in Theorem \ref{T:ISpS_spec}, $\min({J_t^i})= V^i_t\ge\underline{r}^i(|x^i_t|)$. Combining this with \eqref{E:max_cost_grad} yields the condition specified in \eqref{E:colav_weight_condition}. Hence, the minimum of the modified cost lies in the direction of collision avoidance in the sense of \eqref{E:colav_exp_cond}.
Since any trajectory feasible for cost \eqref{E:cost1} is also feasible for the modified cost \eqref{E:colav_cost} and the reachable set is compact, an optimum almost always exists, unless there is not enough time to maneuver (against which we have placed the conservative bound $\Delta^{ij}_t\le\bar{\Delta}$). For the next part of this proof, note that $\acute J(\acute x_t^{i,\star},{w}^i_t)\le \acute J( x_t^{i,\star},{w}^i_t)$ and $ J(x^{i,\star}_t,{w}^i_t)\le J( \acute x^{i,\star}_t,{w}^i_t)$, since $\acute x^{i,\star}_{t,t+N_p^i}$ is feasible but suboptimal for the minimization of \eqref{E:cost1} and $ x^{i,\star}_{t,t+N_p^i}$ is suboptimal for \eqref{E:colav_cost}. For conciseness, we ignore the difference between $V$ and $J$ in this section and also drop the $\star$ symbol. From Theorem \ref{T:ISpS_spec}, we have $\alpha^i_1 (|x_t^i|)\le V(x^i_t,{w}^i_t)$, which gives $\alpha^i_1 (\underline \kappa^i |\acute x_t^i|)\le V(x^i_t,{w}^i_t)\le V(\acute x^i_t,{w}^i_t)$. Combining this with \eqref{E:colav_cost} and defining $\acute \alpha^i_1(s)\triangleq(1+\underline \Phi^i)\alpha^i_1 (\underline \kappa^i s) \in\mathcal{K}_\infty$, we get $\acute \alpha^i_1 (|\acute x_t^i|)\le \acute V(\acute x^i_t,{w}^i_t)$. Let $\ V(\acute x^i_t,{w}^i_t)- V(x^i_t,{w}^i_t) \le \varkappa^i$ for some constant $\varkappa^i>0$. Defining $\acute \alpha^i_3(s)\triangleq (1+\bar \Phi^i)\alpha^i_3 (\bar \kappa^i s)\in\mathcal{K}_\infty$, ${\acute \sigma _3}(s)\triangleq(1+\bar \Phi^i)\sigma _3(s)\in\mathcal{K}$ and $ \acute{\acute{c}}^i\triangleq(1+\bar \Phi^i)(\bar{\bar{c}}^i+\varkappa^i)$, we get $\acute V\left( {\acute x_t^i,w_t^i} \right) \le {\acute \alpha _3}( | {\acute x_t^i} | ) + {\acute \sigma _3}\left( {\left| {w_t^i} \right|} \right) + \acute{ \acute{ c}}^i$.
Using \eqref{E:colav_cost}, and defining $\acute \alpha^i_2(s)\triangleq(1+\underline \Phi^i)\alpha^i_2 (\underline \kappa^i s) \in\mathcal{K}_\infty$, ${\acute \sigma _{1,2}}(s)\triangleq(1+\bar \Phi^i)\sigma _{1,2}(s)\in\mathcal{K}$ and $ {\acute{c}}^i\triangleq(1+\bar \Phi^i)({\bar{c}}^i+ \varkappa^i)$, we get $\Upsilon^i_{t+1} \acute V\left( {\acute x_{t + 1}^i,w_{t + 1}^i} \right) - \acute V\left( {\acute x_t^i,w_t^i} \right) \le - {\acute \alpha _2}\left( { \left| {\acute x_t^i} \right|} \right) + {\acute \sigma _1}\left( {\left| {w_t^i} \right|} \right) + {\acute \sigma _2}\left( {\left| {w_{t + 1}^i} \right|} \right) + {{\acute c}^i}$, where $\Upsilon _{t + 1}^i \triangleq \frac{{1 + \Phi _{t+1}^i}}{{1 + \Phi _{t}^i}}$. From \eqref{E:colav_pot}, $\Upsilon^i_{t+1}\ge 1$ if \eqref{E:colav_exp_cond} holds, and we can write $ \acute V\left( {\acute x_{t + 1}^i,w_{t + 1}^i} \right) - \acute V\left( {\acute x_t^i,w_t^i} \right) \le - {\acute \alpha _2}\left( { \left| {\acute x_t^i} \right|} \right) + {\acute \sigma _1}\left( {\left| {w_t^i} \right|} \right) + {\acute \sigma _2}\left( {\left| {w_{t + 1}^i} \right|} \right) + {{\acute c}^i}$. Hence, agent $A^i$ is ISpS according to Theorem \ref{T:ISpS_gen} and moves towards its goal in an optimal manner while avoiding collision with other agents. \end{proof} \newtheorem{corol}{Corollary} \begin{corol}\label{Corol:GP Spatial Filter} If the spatial filter for collision avoidance is shaped as a geometric progression $\lambda^i_{l|t}=\lambda^i_{\max,t}r_t^l$ with $d^{ij}_l>d^{ij}_{l+1}$ for $l=0,\dots, N_p^i-1$, then the filter can be designed by specifying $\bar{b}>1$ and $\lambda^i_{\max,t}$ and calculating $r_t={(\bar{b}\bar{a}_t)}^{{-1}/{(N_p^i-1)}}$ from \eqref{E:colav_weight_condition}.
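One consistent reading of Corollary \ref{Corol:GP Spatial Filter} is to pick the geometric ratio so that $\lambda^i_{\max,t}/\lambda^i_{\min,t}=\bar a_t/\bar b$, which enforces \eqref{E:colav_weight_condition} with margin $\bar b>1$; the sketch below uses this reading with hypothetical numbers ($\bar a_t$, $N_p^i$, $\lambda^i_{\max,t}$ are not taken from the text):

```python
import numpy as np

# Hypothetical design data: horizon, condition bound a_bar, margin b_bar > 1.
Np, a_bar, b_bar, lam_max = 10, 50.0, 2.0, 1.0
r = (a_bar / b_bar) ** (-1.0 / (Np - 1))   # geometric ratio, r < 1
lam = lam_max * r ** np.arange(Np)         # filter weights over the horizon
ratio = lam.max() / lam.min()              # = a_bar / b_bar
print(ratio < a_bar)                       # the weight-ratio condition holds with margin
```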
\end{corol} \subsection{Stability of Team of Agents under NMPC}\label{Sec:TeamNet} We now establish a generalized small-gain condition to prove stability of the interconnected system, for both strongly and weakly connected network topologies. The result is general: it is not limited by the number of subsystems, and the way in which the subsystem gains are distributed is arbitrary. \begin{thm}\label{T:SGC} For a team of agents $A^i$ \eqref{E:sys_cl}, each with local ISpS Lyapunov function $V(x^i_t,w^i_t)$, there exists $\bar{\alpha}^i\in\mathcal{K}_\infty$ such that $V(x_{t + 1}^i,w_{t + 1}^i) - V(x_t^i,w_t^i) \le \bar{\alpha}^i(|x^i_t|)$. Let the ISpS Lyapunov gain from $A^i$ to $A^j\in G^i$ be denoted by the function $\bar\gamma_{ij}(s):\mathbb{R}_{\ge 0}\to\mathbb{R}_{\ge 0}$, given by \begin{equation}\label{E:nlgains} \bar\gamma_{ij}(s)\triangleq \alpha_1^i \circ (\bar{\alpha}^i)^{-1} \circ \sigma_1^i \circ ({\alpha_1^j})^{-1}(s), \end{equation} then the team of agents is ISpS if the network is at least weakly connected, as long as the following small-gain condition is satisfied \begin{equation}\label{E:SGC_expanded} V(x_t^i,w_t^i) > \mathop {\max}\limits_{j \in {G^i},j \ne i} \{{\bar\gamma _{ij}}(V(x_t^j,w_t^j))\} \end{equation} \end{thm} \begin{proof} Consider $\bar{\rho}^i\in\mathcal{K}_\infty$ constructed such that $\sigma _1^i(|w_t^i|) + \sigma _2^i(|w_{t + 1}^i|) + {\bar c^i} \le ({\cal I} + {\bar \rho ^i})\circ \alpha _2^i\left( {|x_t^i|} \right)$ for $x^i_t\in X^i\backslash\mathcal{B}^n(c^i)$, so that $V(x_{t + 1}^i,w_{t + 1}^i)-V(x_t^i,w_t^i)\le-\alpha_2^i(|x_t^i|)+\sigma_1^i(|w_t^i|)+\sigma_2^i(|w_{t+1}^i|)+\bar{c}^i\le\bar{\rho}^i\circ\alpha_2^i(|x_t^i|)$. Then, in view of \eqref{E:lyapISS_cond1} and letting $\bar{\alpha}^i\triangleq(\mathcal{I}+\bar{\rho} ^i)\circ \alpha _2^i$, we get $V(x_t^i,w_t^i) \ge \alpha _1^i \circ {(\bar{\alpha}^i)^{ - 1}} \circ \sigma _1^i(|w_t^i|)$.
Now, since $w^i_t=col(x^j_{t,t+N^j_p})$, then $|w_t^i| \ge \mathop {\max }\limits_j |x_t^j| \ge |x_t^j|$, $\forall j\in G^i$. Hence, $V(x_t^i,w_t^i) \ge \mathop {\max }\limits_j (\alpha _1^i \circ {(\bar\alpha^i)^{ - 1}} \circ \sigma _1^i(|x_t^j|))$. But, $V(x_t^j,w_t^j) \ge \alpha _1^j (|x_t^j|)$ $\Rightarrow {(\alpha _1^j)^{ - 1}}(V(x_t^j,w_t^j))\ge |x_t^j|$, and hence $V(x_t^i,w_t^i) \ge \mathop {\max}\limits_j (\alpha _1^i \circ {(\bar\alpha^i)^{ - 1}} \circ \sigma _1^i \circ {(\alpha _1^j)^{ - 1}}(V(x_t^j,w_t^j)))$. If the gain $\bar\gamma_{ij}$ is defined as in \eqref{E:nlgains}, then \eqref{E:SGC_expanded} is obtained. From recent results in \cite{Dashkovskiy2010}, it can be shown that this is equivalent to having an ISpS Lyapunov function for the network. \end{proof} \subsection*{Remark 1} One way to design $\bar{\alpha}^i$ is by choosing $\bar{\rho}^i(s)=\bar{k}^i s$ for any $\bar{k}^i>0$, since it was shown that $V^i_{t+1}-V^i_{t}<0$. This choice results in a stable network, provided that individual agents are locally ISpS. We first consider agents not on a collision course. \subsubsection*{Agents not on collision course} Continuing from the proof of Theorem 2, let ${\lambda _\Pi }_{\max }$ and ${\lambda _\Pi }_{\min }$ denote the maximum and minimum eigenvalues of a positive-definite matrix $\Pi$, respectively. Then, \begin{equation*} \sigma _1^i\left( r \right) = \sigma _2^i\left( r \right) + {\psi ^i}\left( r \right) = \frac{{L_{qw}^i\left( {L{{_{gw}^i}^{N_p^i - 1}} - 1} \right)}}{{L_{gw}^i - 1}}r + \left( {M - 1} \right){\lambda _{S_{max}^i}}{r^2},\;\forall L_{gw}^i \ne 1. \end{equation*} For $L_{gw}^i=1$, the results need trivial modifications, by replacing $\left( {L{{_{gw}^i}^{N_p^i - 1}} - 1} \right){\left( {L_{gw}^i - 1} \right)}^{ - 1}$ with $\mathop \sum \limits_{l = 0}^{l = N_p^i - 2} {L{_{gw}^i}^l} = N_p^i - 1$.
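As a quick numerical sanity check (illustrative only, not part of the controller), the geometric-series coefficient of the linear term in $\sigma_1^i$ and its $L_{gw}^i = 1$ special case can be verified as follows:

```python
import numpy as np

def sigma1_coeff(L_gw, Np):
    """Coefficient of the linear term in sigma_1^i:
    (L_gw^(Np-1) - 1)/(L_gw - 1) for L_gw != 1, and Np - 1 in the limit L_gw -> 1."""
    if np.isclose(L_gw, 1.0):
        return Np - 1.0
    return (L_gw ** (Np - 1) - 1.0) / (L_gw - 1.0)

Np = 50
# the closed form agrees with the explicit sum over l = 0, ..., Np - 2
for L in (0.8, 1.0, 1.2):
    explicit = sum(L ** l for l in range(Np - 1))
    assert abs(sigma1_coeff(L, Np) - explicit) < 1e-9 * max(1.0, explicit)
```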
Similarly, \begin{equation*} \mathop {\alpha _1^j}\nolimits^{ - 1} (r) = \sqrt {\frac{r}{{{\lambda _{\min {Q^j}}}}}} , \end{equation*} \begin{equation*} {\bar \alpha ^{{i^{ - 1}}}}\left( r \right) = \alpha _2^{{i^{ - 1}}} \circ {\left( {I + {{\bar \rho }^i}} \right)^{ - 1}}\left( r \right) \end{equation*} and \begin{equation*} {L_{qw}} = {\lambda _S}_{max}|\tilde w_{max}^i|. \end{equation*} We mentioned that one choice of ${\bar \rho ^i}$ could be ${\bar \rho ^i}\left( r \right) = {\bar k^i}r$ for all ${\bar k^i} > 0$. Therefore, \begin{equation*} {\bar \alpha ^{{i^{ - 1}}}}(r) = {\alpha _2}^{{i^{ - 1}}}\left( {\frac{1}{{{{\bar k}^i} + 1}}r} \right). \end{equation*} It is also worth noting that we showed in the proof of Theorem \ref{T:ISpS_spec} that $\alpha_1^i(r)=\alpha_2^i(r)$. Since ${{\bar \gamma }_{ij}}(r) = \alpha _1^i \circ {{\bar \alpha }^{{i^{ - 1}}}} \circ \sigma _1^i \circ \alpha _1^{{j^{ - 1}}}(r)$, we can obtain \begin{equation*} {\bar \gamma _{ij}}\left( r \right) = \frac{1}{{{{\bar k}^i} + 1}}\left( {N_p^i - 1} \right){\lambda _{S_{max}^i}}\tilde w_{max}^i\sqrt {\frac{r}{{{\lambda _{min{Q^j}}}}}} + \frac{1}{{{{\bar k}^i} + 1}}\left( {M - 1} \right){\lambda _{S_{max}^i}}\left( {\frac{r}{{{\lambda _{min{Q^j}}}}}} \right) \end{equation*} Hence, \eqref{E:SGC_expanded} can be written as \begin{multline*} {{\bar \gamma }_{ij}}\left( {V\left( {{x^i},{{\tilde x}^j}} \right)} \right) = \left[ {\frac{1}{{{{\bar k}^i} + 1}}\left( {N_p^i - 1} \right)\;\;\tilde w_{max}^i\frac{{\lambda _{{S_{max}}}^i}}{{\sqrt {{\lambda _{min{Q^j}}}} }}} \right]\sqrt {V\left( {{x^i},{{\tilde x}^j}\;} \right)} \\ + \left[ {\frac{1}{{{{\bar k}^i} + 1}}\left( {M - 1} \right)\left( {\frac{{{\lambda _{S_{max}^i}}}}{{{\lambda _{min{Q^j}}}}}} \right)} \right]V\left( {{x^i},\;{{\tilde x}^j}} \right) \end{multline*} Therefore, by choosing a suitable value of ${{{\lambda _{S_{max}^i}}} \mathord{\left/ {\vphantom {{{\lambda _{S_{max}^i}}} {{\lambda _{min{Q^j}}}}}} \right.
\kern-\nulldelimiterspace} {{\lambda _{min{Q^j}}}}}$ and $\bar{k}^i>0$, the small gain condition \eqref{E:SGC_expanded} can be satisfied. \subsubsection*{Agents on collision course} For agents on a collision course, similar results can be reproduced, as all functions have corresponding counterparts in the collision-avoidance case; see the proof of Theorem \ref{T:Collav}. Therefore, we can write \eqref{E:nlgains} as $\bar\gamma'_{ij}(r)\triangleq \alpha_1^{'i} \circ (\bar{\alpha}^{'i})^{-1} \circ \sigma_1^{'i} \circ ({\alpha_1^{'j}})^{-1}(r)$. Thus, we get \begin{equation*} {\bar \gamma' _{ij}}\left( r \right) = \frac{{\left( {1 + {{\bar \phi }^i}} \right)}}{{\left( {{{\bar k}^i} + 1} \right){\underline{\kappa} ^j}}}\left( {\left( {N_p^i - 1} \right)\;\;|\tilde w_{max}^i|\frac{{\lambda _{{S_{max}}}^i}}{{\sqrt {\left( {1 + {\underline\phi ^j}} \right){\lambda _{min{Q^j}}}} }}{{\left( r \right)}^{\frac{1}{2}}} + \frac{{\left( {M - 1} \right){\lambda _{S_{max}^i}}}}{{{\underline\kappa ^j}\left( {1 + {\underline\phi ^j}} \right){\lambda _{min{Q^j}}}}}r} \right) \end{equation*} Hence, even with collision avoidance, it is possible to find $\bar{k}^i>0$ which satisfies the small gain condition (see Section \ref{Sec:Sim_res}). As far as the small gain condition for weakly connected networks is concerned, we show in Remark 2 that it is equivalent to that for strongly connected networks. It should be noted that there is no need to find the exact numerical values for construction of the controller. As long as there exists some $\bar{k}^i>0$, we can be assured of ISpS stability of the team. See Section \ref{Sec:Sim_res}. \subsubsection{Strongly Connected Network} We will now particularize the result of Theorem \ref{T:SGC} to the case of a strongly connected network.
\newtheorem{lem}{Lemma} \begin{lem} A team of $N$ agents connected with a strongly connected network is ISpS stable if each agent $A^i$ has an ISpS Lyapunov function $V(x^i_t,w^i_t)$, edge gain $\bar\gamma_{ij}$ is defined as in \eqref{E:nlgains} and the following small gain condition is achieved: \begin{equation} V(x_t^i,w_t^i) > \mathop {\max }\limits_{j} ({\bar\gamma _{ij}}(V(x_t^j,w_t^j))), \,\,\, \forall j \ne i,j=1,\dots,N-1 \end{equation} \end{lem} \begin{proof} If $\bar\mu^i$ is a monotone aggregation function (MAF) and $\Gamma$ is the associated irreducible gain matrix, define the gain operator $\Gamma_{\bar\mu}: \mathbb{R}^n_+ \to \mathbb{R}_+^n$, \begin{equation} {\Gamma _{\bar\mu} }\triangleq\left[ {\begin{array}{*{20}{c}} {{r_1}}\\ \vdots \\ {{r_n}} \end{array}} \right] \mapsto \left[ {\begin{array}{*{20}{c}} {{\bar\mu _1}({\bar\gamma _{12}}({r_2}), \ldots ,{\bar\gamma _{1n}}({r_n}))}\\ \vdots \\ {{\bar\mu _n}({\bar\gamma _{n,1}}({r_1}), \ldots ,{\bar\gamma _{n,n - 1}}({r_{n - 1}}))} \end{array}} \right] \end{equation} According to the recent generalized small gain theorems of \cite{Dashkovskiy2010}, if a strongly connected network obeys the following small gain condition (SGC): $\mathcal{I}>\Gamma_{\bar\mu}$, then it is stable in the ISS sense (see Theorem 5.3 of \cite{Dashkovskiy2010}). Now, $\bar\mu=\max$ is a monotone aggregation function (\cite{Jiang2011Book}).
Let $r=(V(x^1_t,w^1_t),\dots,V(x^N_t,w^N_t))$, then the SGC is satisfied if: \begin{equation*} \begin{array}{l} \left[ {\begin{array}{*{20}{c}} {V(x_t^1,w_t^1)}\\ \vdots \\ {V(x_t^N,w_t^N)} \end{array}} \right] > \left[ {\begin{array}{*{20}{c}} {\max ({\bar\gamma _{12}}(V(x_t^2,w_t^2)), \ldots ,{\bar\gamma _{1,N}}(V(x_t^N,w_t^N)))}\\ \vdots \\ {\max ({\bar\gamma _{N,1}}(V(x_t^1,w_t^1)), \ldots ,{\bar\gamma _{N,N - 1}}(V(x_t^{N - 1},w_t^{N - 1})))} \end{array}} \right] \end{array} \end{equation*} This can be simply stated as: \begin{equation*} V(x_t^i,w_t^i) > \mathop {\max }\limits_{j} ({\bar\gamma _{ij}}(V(x_t^j,w_t^j))), \,\,\, \forall j \ne i,j=1,\dots,N-1 \end{equation*} \end{proof} \subsubsection{Weakly Connected Network} We will now focus on the case of a network of agents in which not all agents are connected to every other agent. \begin{lem} A team of cooperating agents connected with a weakly connected network is ISpS stable if each agent $A^i$ has an ISpS Lyapunov function $V(x^i_t,w^i_t)$, edge gain $\bar\gamma_{ij}$ is defined as in \eqref{E:nlgains} and the following small gain condition is achieved: \begin{equation*} V(x_t^i,w_t^i) > \mathop {\max }\limits_{j} ({\bar\gamma _{ij}}(V(x_t^j,w_t^j))), \,\,\, \forall j \ne i,j\in G^i \end{equation*} \end{lem} \begin{proof} The connectivity gain matrix for a weakly connected network can be brought into upper block-triangular form by appropriate re-indexing of agents, such that each block on the diagonal is either $0$ or irreducible.
Hence, we can now rewrite the gain matrix as: \begin{equation*} \Gamma = \left[ {\begin{array}{*{20}{c}} 0&{{\bar\gamma _{12}}}&{{\bar\gamma _{13}}}& \ldots &{{\bar\gamma _{1,\bar M}}}\\ 0& \ddots &{{\bar\gamma _{23}}}& \ldots &{{\bar\gamma _{2,\bar M}}}\\ \vdots & \ddots & \ddots & \ddots & \vdots \\ {}&{}& \ddots & \ddots &{{\bar\gamma _{N - 1,\bar M}}}\\ 0& \ldots &{}&0&0 \end{array}} \right] \end{equation*} where $\bar{M}\triangleq\max\limits_{i}M^i$ is the size of the neighborhood of the most connected agent. According to Proposition 6.2 of \cite{Dashkovskiy2010}, the interconnected system is stable if each upper diagonal block satisfies the SGC: $\mathcal{I}>\Gamma_{\bar\mu}$. Now, the upper diagonal blocks are: \begin{equation*} \begin{array}{l} {\overline \Gamma _1} = 0,\,\,\,{\overline \Gamma _2} = \left[ {\begin{array}{*{20}{c}} 0&{{\bar\gamma _{12}}}\\ 0&0 \end{array}} \right],\,\,{\overline \Gamma _3} = \left[ {\begin{array}{*{20}{c}} 0&{{\bar\gamma _{12}}}&{{\bar\gamma _{13}}}\\ 0&0&{{\bar\gamma _{23}}}\\ 0&0&0 \end{array}} \right]\\ \\ {\overline \Gamma _d} = \left[ {\begin{array}{*{20}{c}} 0&{{\bar\gamma _{12}}}&{{\bar\gamma _{13}}}& \ldots &{{\bar\gamma _{1,d}}}\\ 0& \ddots &{{\bar\gamma _{23}}}& \ldots &{{\bar\gamma _{2,d}}}\\ \vdots & \ddots & \ddots & \ddots & \vdots \\ {}&{}& \ddots & \ddots &{{\bar\gamma _{d - 1,d}}}\\ 0& \ldots &{}&0&0 \end{array}} \right],\,\,\,{\overline \Gamma _N} = \Gamma \end{array} \end{equation*} Then stability is assured if each of the above blocks obeys the SGC iteratively, i.e.
\begin{equation} \begin{array}{l} {r_1} > {\overline \Gamma _{\bar\mu 1}}\,({r_1}) \Rightarrow \,\,V(x_t^1,w_t^1) > 0\\ {r_2} > {\overline \Gamma _{\bar\mu 2}}({r_2}) \Rightarrow \,\,\,V(x_t^1,w_t^1) > {\bar\gamma _{12}}(V(x_t^2,w_t^2)),\, V(x_t^2,w_t^2) > 0\\ {r_3} > {\overline \Gamma _{\bar\mu 3}}({r_3}) \Rightarrow\,\,\, V(x_t^1,w_t^1) > \max ({\bar\gamma _{12}}(V(x_t^2,w_t^2))\,,{\bar\gamma _{13}}(V(x_t^3,w_t^3))\,)\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,V(x_t^2,w_t^2) > {\bar\gamma _{23}}(V(x_t^3,w_t^3)),\, V(x_t^3,w_t^3) > 0\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \vdots \end{array} \end{equation} This iterative procedure reduces to \eqref{E:SGC_expanded}. Hence, the team is stable irrespective of the network topology as long as it is at least weakly connected, provided it obeys the corresponding small gain conditions. \end{proof} \section{Simulation Results}\label{Sec:Sim_res} Consider a fleet of 5 autonomous vehicles moving in the horizontal plane, with the following continuous-time models (discretized at $T$=0.1 s), having similar dynamics for simplicity: ${m^i}{{\ddot x}^i} = - \mu _1^i{{\dot x}^i} + \left( {u_R^i + u_L^i} \right)\cos {\theta ^i}$, ${m^i}{{\ddot y}^i} = - \mu _1^i{{\dot y}^i} + \left( {u_R^i + u_L^i} \right)\sin {\theta ^i}$, ${J^i}{{\ddot \theta }^i} = - \mu _2^i{{\dot \theta }^i} + \left( {u_R^i - u_L^i} \right){r_v}$, where $m$, $J$, $\mu_{1,2}$ and $r_v$ are parameters specified in \cite{Franco2008}. Constraints on inputs and states are $0\le|u^i_{R,L}|\le 6$ and $|\dot{\theta}^i|\le 1\,rad/s$. The uniformly distributed communication delay is bounded by $T\le\Delta_{ij}\le 6T$.
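The small gain condition \eqref{E:SGC_expanded} can also be checked numerically once the closed-form edge gain is known. The sketch below uses hypothetical eigenvalue and Lipschitz constants (not the exact simulation values) to illustrate that a gain of the form $A\sqrt{r}+Br$, with $A,B\propto 1/(\bar k^i+1)$, satisfies $\bar\gamma_{ij}(r)<r$ over a wide range of Lyapunov values once $\bar k^i$ is large:

```python
import numpy as np

# Illustrative constants (hypothetical, chosen only for this sketch):
lam_S_max, lam_Q_min = 0.025, 0.1   # max eigenvalue of S^i, min eigenvalue of Q^j
w_max, Np, M, k_bar = 1.0, 50, 5, 5e3

def gamma_ij(r):
    """Closed-form edge gain bar-gamma_ij(r) = A*sqrt(r) + B*r."""
    A = (Np - 1) * w_max * lam_S_max / ((k_bar + 1) * np.sqrt(lam_Q_min))
    B = (M - 1) * lam_S_max / ((k_bar + 1) * lam_Q_min)
    return A * np.sqrt(r) + B * r

# Small gain condition gamma_ij(r) < r, checked on a grid of Lyapunov values
r = np.linspace(1e-2, 1e3, 1000)
assert np.all(gamma_ij(r) < r)
```

A large $\bar k^i$ shrinks both coefficients, which is why the exact numerical values of the eigenvalue ratios need not be computed in practice.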
The distributed cost for each agent (with leader $A^1$) is: \begin{equation*}\label{sim_cost} \begin{array}{l} J_t^i = \sum\limits_{k=t}^{t + N_p^i - 1} {\left( {\left\| {{{\bar z}^i}_k} \right\|_{{Q^i}}^2 + \left\| {u_k^i} \right\|_{{R^i}}^2 + \sum\limits_{j \in {G^i}} {\left\| {\bar{ \bar{z}}_k^i} \right\|_{{S^{ij}}}^2} } \right)} {\kern 1pt} {\kern 1pt} \\ \,\,\,\,\,\,\,\,\,\, + \left\| {\bar z_{t + N_p^i}^i} \right\|_{Q_f^i}^2 + \sum\limits_{j \in {G^i}} {\frac{{\bar \lambda {R_{{{\min }^i}}}{{\bf{1}}_{({R_{{{\min }^j}}} - d_k^{ij}) > 0,\forall t \le k \le (t + N_p^i)}}}}{{\sum\limits_{k = t}^{t + N_P^i} {\lambda (d_k^{ij})d_k^{ij}} }}} \end{array} \end{equation*} where $\bar z^i_k=z_k^i - g^i_k + a^{i1}$ and $\bar{ \bar{z}}^i_k=z_k^i - \tilde w_k^j + a^{ij}$. The goal $g^i_k$ is the way-point (WP) for the leader; for the followers it is the leader's planned trajectory, i.e., $g^i_k=\tilde w^1_k, \forall{i\ne{1}}$. Alignment vectors $a^{ij}$ define the formation geometry such that adjacent agents occupy positions 15 units apart on an equilateral triangle with 30-unit sides, with the same speed and heading. Optimization parameters for all agents are: $N_p$=50, $N_c$=15, $Q$=0.1 $diag({1,1,10,1,10,1})$, $R$=0.01 $diag({1,1})$, $S^{ij}$=0.25 $Q$ and $S^{1j}$=0.2 $S^{ij}$, for $i=1\dots 5$, $j\in{G^{i\backslash{1}}}$. For the spatially filtered CA potential \eqref{E:colav_weight_condition}, the parameters are $R_{\min}=5\,$m, $v_{\max}=40\,$m/s, $L^i_{hx}=\lambda_{Q,\max} |\bar z_o^i|$, $L^i_{qx}=(N-1)\lambda_{S,\max} |\bar z_o^i|$, $L^i_{hf}=\lambda_{Qf,\max} |\bar z_o^i|$ and $\underbar{r}^i=\lambda_{Q,\min} |\bar z^i_k|^2$, where $\lambda_\Pi$ denotes an eigenvalue of $\Pi$. The local control $K^i$ and terminal weight $Q_f^i$ can be determined by solving the LMI presented in Section \ref{Sec:Stab}. Simulations were run on a 3.3 GHz Intel Core i7-2500 (quad-core) using parallelized Matlab code; one simulation second took 94 CPU seconds (which can be reduced with dedicated hardware and optimized code).
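The geometric-progression spatial filter of Corollary \ref{Corol:GP Spatial Filter} can be constructed directly from the design rule $r_t=(\bar b\,\bar a_t)^{-1/(N_p^i-1)}$. The sketch below uses hypothetical values of $\bar a_t$ and $\lambda^i_{\max,t}$ purely for illustration:

```python
import numpy as np

def gp_filter(lam_max, a_bar, Np, b_bar=2.0):
    """Geometric-progression spatial filter (sketch of Corollary 1).

    lam_max : peak weight lambda^i_{max,t}
    a_bar   : the quantity bar{a}_t from the weight condition (assumed given)
    Np      : prediction horizon N_p^i
    b_bar   : design constant, must exceed 1
    """
    assert b_bar > 1.0
    r = (b_bar * a_bar) ** (-1.0 / (Np - 1))   # common ratio r_t
    return lam_max * r ** np.arange(Np)        # lambda_{l|t} = lam_max * r^l

w = gp_filter(lam_max=1.0, a_bar=1.5, Np=50)
# weights decay monotonically along the prediction horizon
assert np.all(np.diff(w) < 0)
```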
For the NN we use a network with 6 inputs, $H_L$=6 hidden-layer neurons and 6 outputs. Thus there are 84 NN weights and biases, as opposed to the full trajectory of 300 floating-point values (a data compression of 72\%). Due to lack of space, we only show results for the weakly connected network. $A^{4}$ and $A^5$ have only a directed link from $A^{2}$ and $A^3$, respectively, making the network topology weakly connected; see the inset of Fig.\ref{F:big_traj}. Executing sharp turns, such as right-angle turns when transitioning between WPs, puts agents on the inside of the turn ($A^{2}$, $A^4$) at risk of collision. Also, $A^{4,5}$ receive WP information with extra delay due to multiple hops, i.e. $\bar{\Delta}_4=\bar{\Delta}_5=2\bar{\Delta}$. However, collision is successfully avoided throughout the trajectory. Synchronization of states is achieved quickly, as shown in Fig. \ref{F:big_states}. The effect of delay manifests as a lag in synchronization, while the temporary divergence is due to collision avoidance. It is evident that the proposed algorithm performs well despite large random delays. \begin{figure} \begin{center} \epsfig {file=traj_bigteam3.eps, width=12.5cm} \caption{Trajectory of agents in weakly connected network (inset: net topology).} \label{F:big_traj} \end{center} \end{figure} \begin{figure} \begin{center} \epsfig{file=states_bigteam2.eps,width=12.5cm} \caption{States of agents connected in weakly connected network.} \label{F:big_states} \end{center} \end{figure} In the given example, the cost function and the corresponding gain are shown in Fig. \ref{F:big_sgc1} to illustrate verification of the small gain condition \eqref{E:SGC_expanded} from Theorem \ref{T:SGC} for $\bar k^i=5\times 10^3$. Only the condition for Agent 1 (connected to Agents 2 and 3 in the weakly connected network) is shown; however, the small gain conditions hold for all the other agents (results not shown in the interest of brevity).
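The quoted compression figure follows from a simple parameter count for the single-hidden-layer network; a minimal sketch:

```python
def mlp_param_count(n_in, n_hidden, n_out):
    """Weights plus biases of a single-hidden-layer feedforward network."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

params = mlp_param_count(6, 6, 6)     # 84 for the 6-6-6 network used here
full_traj = 300                       # floats in the uncompressed trajectory
compression = 1.0 - params / full_traj
assert params == 84
assert round(100 * compression) == 72
```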
\begin{figure} \begin{center} \epsfig{file=sgc_bigteam.eps,width=12.5cm} \caption{Small gain condition for Agent 1} \label{F:big_sgc1} \end{center} \end{figure} \section{Conclusion}\label{Sec:concl} We presented a distributed NMPC framework for formation control of constrained agents that is robust to uncertainty due to data compression and propagation delays. Collision avoidance is ensured by means of a spatially filtered potential field. Rigorous proofs are provided ensuring practical stability regardless of network topology. Simulations illustrate the good performance of the proposed scheme in both strongly- and weakly-connected networks. Future research directions include catering for model uncertainty, disturbances and fault tolerance. \section*{ Acknowledgment} Support provided by King Abdulaziz City for Science \& Technology through King Fahd University of Petroleum \& Minerals under project No. 09-SPA783-04 is gratefully acknowledged. \bibliographystyle{IEEEtran}
\section{Introduction} \object{NGC 3145}, at a distance of 54.8 Mpc, is a barred spiral galaxy with grand-design spiral arms and some peculiar stellar morphology. In the Hubble Atlas of Galaxies, \citet{sandage61} writes about NGC 3145: ``There is a single faint arm in the southwest quadrant which crosses one of the regular arms nearly at right angles. This is a very rare feature of galaxies ...'' We were intrigued by this description and by the presence of three additional places where stellar arms also appear to cross, forming ``X''-features, in NGC 3145. The goal of this paper is to understand these puzzling features by investigating what is happening in the interstellar gas at the ``X''-features and elsewhere in the galaxy. NGC 3145 has two smaller companions: the barred spiral \object{NGC 3143} and the Sdm galaxy \object{PGC 029578}. We refer to these three galaxies as the NGC 3145 triplet. The top panel in Figure\,\ref{B.Features} displays a $B$-band image of NGC 3145 with some of the optical oddities marked, and the bottom panel displays the {\it Hubble Space Telescope} (HST) $R$-band image of the southern half of this galaxy. On the southern side of the galaxy, arms cross and outline an apparent optical triangle with base $12''$, height $15''$, and apices labelled $a$ (the eastern apex), $b$ (the southern apex), and $c$ (the western apex). We refer below to this feature as the {\it triangle}. Sandage's peculiar arm outlines the eastern edge of the {\it triangle}. It crosses the inner spiral arm, forming an ``X''-feature at apex $a$; at apex $b$ it produces another ``X''-feature. At the location marked $f$ in Figure\,\ref{B.Features}, it creates what looks like another ``X''-feature. The inner spiral arm, which has a major dust lane along its concave side, outlines the northern edge of the {\it triangle}, connecting apices $a$ and $c$. Another arm connects apices $b$ and $c$ to form the western edge of the {\it triangle}.
At apex $b$ it meets Sandage's peculiar arm. Southward of apex $b$ the two arms diverge, and Sandage's peculiar arm extends to the west. We call the latter the western antenna and label it $e$ in this Figure. On the northeastern side of NGC 3145, we label as Feature $d$ the location where a feature that optically looks like a spiral-arm branch departs from the main spiral arm. We shall refer to this as the {\it branch} even though the data presented here suggest that it is a tidal arm. \begin{figure*} \epsscale{0.75} \plotone{fig1top.eps} \plotone{fig1bot.eps} \caption{Top: $B$ image of NGC 3145 with some optical anomalies marked, e.g. ``X''-features where arms appear to cross at apices $a$, $b$, and $c$ of the {\it triangle} and at location $f$. Sandage's peculiar arm outlines the eastern edge of the {\it triangle} from apex $a$ to apex $b$. The apparent {\it branch} of a main spiral arm is labelled $d$, and the western antenna is labelled $e$. Bottom: {\it HST} WFPC2 $R$ image of NGC 3145. \ion{H}{1}\ line-profiles for the regions enclosed by the boxes will be displayed and discussed in Section 4. \label{B.Features}} \end{figure*} In his study of caustic waveforms in two dimensions, \citet{struck90} notes that the {\it triangle} in NGC 3145 resembles a swallowtail caustic. In a swallowtail caustic, five intersecting star streams create a triangular region outlined by arms plus two thin antennae emerging from one of the triangle's vertices. In his study, swallowtail caustics develop within expanding pseudo-rings. Gas streams do not pass through each other without colliding and shocking. To study the above optical anomalies and look for other unusual features in the gas, we made Karl G. 
Jansky Very Large Array (VLA)\footnote{The National Radio Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} \ion{H}{1}\ observations of the NGC 3145 triplet and $\lambda 6$ cm\ radio continuum observations of NGC 3145. We use the radio continuum image to check on whether there are strong shock fronts at the ``X''-features as would be expected if the intersecting arms are in the same plane. The purpose of the \ion{H}{1}\ data is to look for evidence of gas flows in extra-planar gas and gas flows perpendicular to the disk over large spatial scales, e.g., are there multiple components in the \ion{H}{1}\ line profiles. For edge-on galaxies displacements above the disk would be obvious. Since the inclination $i$ of NGC 3145 is $50\degr - 55\degr$, we get only indirect information. \begin{deluxetable*}{lccc} \tabletypesize{\scriptsize} \tablecaption{Basic Optical and Near-Infrared Data on the NGC 3145 Triplet\tablenotemark{a} \label{basic.optical}} \tablewidth{0pt} \tablehead{ \colhead{Characteristic} & \colhead{NGC 3145} & \colhead{NGC 3143} & \colhead{PGC 029578} } \startdata Morphological type & SB(rs)bc & SB(s)b & Sdm\\ Right ascension (J2000) & $10^h 10^m 09\fs869$ & $10^h 10^m 03\fs98$ & $10^h 10^m 02\fs32$\\ Declination (J2000) & $-12\degr 26' 01\farcs6$ & $-12\degr 34' 52\farcs9$ & $-12\degr 38' 51\farcs6$\\ $v_{\rm sys}$(helio) (km s$^{-1}$) & $3652 \pm 6$ & $3536$ & $3586 \pm 30$ \\ Isophotal major diameter $D_{25} $ & $194'' \pm 13''$ & $52'' \pm 8.7''$ & $102'' \pm 16.6''$ \\ $D_{25}$ diameter (kpc) & 51 & 14 & 27\\ $B_T$ (RC3) (mag) \tablenotemark{b} & $12.54 \pm 0.13$ & $14.9 \pm 0.20$ & \nodata\\ Corrected $B_T^0$ & 11.82 & 14.46 & \nodata\\ Galactic $A_B$ & 0.234 & 0.244 & 0.25\\ $M_B$ (mag) & $-21.87$ & $-19.2$ & \nodata\\ $B$ (Fick Obs.) (mag)\tablenotemark{b} & \nodata\ & 15.2 & 16.6\\ $R$ (Fick Obs.) 
(mag)\tablenotemark{b} & \nodata\ & 13.7 & 15.3\\ $(B - V)_T^0$ & 0.63 & 0.50 & \nodata\\ $(K_s)_{\rm total}$ (mag) & $8.62 \pm 0.029$ & $11.46 \pm 0.11$ & \nodata \\ $J_{\rm total} $ & $9.59 \pm 0.017$ & $12.20 \pm 0.05$ & \nodata\\ $i$ from axis ratio & $55\degr$ & $35\degr$ & $68\degr$\\ Major Axis PA from $K_s$ & $205\degr$ & $225\degr$ & \nodata\\ Projected separation from NGC 3145 (kpc) & 0 & 141 & 204\\ Distance (Mpc) & 54.8 & 54.8 & 54.8\\ Scale (pc per $1''$) & 262 & 262 & 262\\ \enddata \tablenotetext{a} {$v_{sys}$ of PGC 029578 from \citet{zaritsky97} (but it is not consistent with our \ion{H}{1}\ value) and $v_{sys}$ of NGC 3143 from \citet{schweizer87}. Aside from the Fick Observatory values of $B$ and $R$ for the companions, the rest of the data are from \citet{deVau91} or 2MASS}. \tablenotetext{b} {Not corrected for extinction} \end{deluxetable*} NGC 3145 is one of the 11 galaxies in the seminal study by \citet{rubin78} on extended rotation curves of high-luminosity spiral galaxies. \citet{rubin82} include NGC 3145 in their study of the rotational properties of Sb/bc galaxies from optical observations of the rotation curves along the major axis. Their data are used by \citet{persic91} in fitting a universal rotation curve to spiral galaxies. Our \ion{H}{1}\ observations provide the first determination of the velocity field of NGC 3145 and of the kinematic parameters derived from it. It is important to know if gas flows perpendicular to the disk have a significant effect on its rotation curve. From the NASA/IPAC Extragalactic Database (NED), we adopt a redshift-based distance for NGC 3145 of 54.8 Mpc, which is cosmology-corrected to the 3 K microwave background reference frame with $H_0$ = 73 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$ = 0.27, and $\Omega_\Lambda$ = 0.73. Then $1''$ = 262 pc.
Table\,\ref{basic.optical} lists basic optical and near-infrared properties of the NGC 3145 triplet from the NASA/IPAC Extragalactic Database and includes data from the Third Reference Catalogue \citep{deVau91} and from the Two Micron All Sky Survey ({\it 2MASS}) \citep{skrut06}. In the literature, there is not very much data about the two companions, and the determination of their systemic velocities has had a checkered history. Our \ion{H}{1}\ observations resolve the issues about the systemic velocities of the companions and provide values of their \ion{H}{1}\ masses. Our velocity field image of PGC 029578 allows us to calculate its dynamical mass. All of the velocities listed in this paper are heliocentric and use the optical definition of the nonrelativistic Doppler shift. Section 2 describes our VLA observations, data reduction, and additional images used in this paper. Section 3 discusses the broad-band optical, H$\alpha$, radio continuum, and infrared properties of NGC 3145. Section 4 presents the \ion{H}{1}\ properties of NGC 3145. Section 5 presents our \ion{H}{1}\ results on the two companions. Section 6 summarizes and discusses our conclusions. A simple analytic model for the encounter is given in the Appendix. \section{\label{observe} Observations and data reduction} \subsection{\ion{H}{1}\ Observations} We observed the NGC 3145 triplet in \ion{H}{1}\ at the VLA for 4.75 hr (on target) in C configuration on 1993 July 16 and for 1 hr in D configuration on 1993 October 31. The phase calibrator was 0941-080. For the D configuration observations 3C 286 served as the flux standard and bandpass calibrator. For the C configuration observations 3C 48 and 3C 286 were the flux standards and bandpass calibrators. The phase center was R.A., decl. (2000) = 10 10 06.915, -12 29 46.80. We used on-line Hanning smoothing. \ion{H}{1}\ emission is present for heliocentric velocities 3407 to 3919 km s$^{-1}$. 
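For reference, the optical velocity convention used throughout can be sketched as follows; the HI rest frequency and the conversion formula are standard, while the helper names are ours:

```python
C_KMS = 299792.458           # speed of light, km/s
F0_HI = 1420.40575           # HI 21 cm rest frequency, MHz

def v_optical(f_MHz):
    """Optical-definition Doppler velocity: v = c * (f0/f - 1)."""
    return C_KMS * (F0_HI / f_MHz - 1.0)

def f_from_v(v_kms):
    """Inverse: observed frequency for a given optical velocity."""
    return F0_HI / (1.0 + v_kms / C_KMS)

# round-trip consistency at the systemic velocity of NGC 3145 from Table 1
f_sys = f_from_v(3652.0)
assert abs(v_optical(f_sys) - 3652.0) < 1e-9
```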
For the correlator mode, we adopted the same type of 4IF (intermediate frequency) mode used by \citet{elmegreen95} for \ion{H}{1}\ observations of NGC 2207/IC 2163. Thus we made simultaneously (a) observations with a channel width of 21.13 km s$^{-1}$\ in IFC and IFD (left circular polarization) and (b) observations with a channel width of 5.28 km s$^{-1}$\ in IFA and IFB (right circular polarization). The line-free channels of the data taken with 21.13 km s$^{-1}$\ velocity resolution provide the continuum to subtract from the higher velocity-resolution data. The latter had no line-free channels at the low frequency end and only a short range of line-free channels at the high frequency end. In retrospect, this was not the best choice of correlator mode for the NGC 3145 observations because of beam squint, as explained below. The AIPS software package was used for the data reduction and analysis. The 4 IFs plus two configurations resulted in 8 $uv$ data sets. In each of these, two channels suffered radio frequency interference (RFI) which occurred at different frequencies in the various data sets. To correct for this, we made a continuum image from the line-free channels of IFC and IFD and subtracted the clean components of the brighter continuum sources from every channel in each of the 8 $uv$ data sets. Then for the channels affected by RFI, we interactively clipped any signal above 1 to 2 Jy. For each $uv$ data set separately, we generated ``dirty'' data cubes of line plus residual continuum emission, subtracted the residual continuum obtained from IFC and IFD, and cleaned the cubes. Then combining the data from C and D configurations, we repeated the mapping and cleaning and merged the IFs to create one cube with 21.13 km s$^{-1}$\ velocity resolution and one with 5.28 km s$^{-1}$\ velocity resolution. 
After inspection, we decided to omit the 5.28 km s$^{-1}$\ velocity resolution data taken with D configuration because beam squint, i.e., a slight difference in pointing between the left and right polarization detectors, caused a problem for the D-configuration data when the continuum derived from IFC plus IFD was subtracted from the observations made with IFA and IFB. Since beam squint is less of a problem for C-configuration, we kept the C-configuration data cubes with 5.28 km s$^{-1}$\ velocity resolution, but to increase the S/N we averaged the channels to 10.57 km s$^{-1}$\ velocity resolution. \begin{deluxetable*}{lcccc} \tabletypesize{\scriptsize} \tablecaption{Final \ion{H}{1}\ Subcubes\label{HIcubes}} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Cube 1} & \colhead{Cube 2} & \colhead{Cube 3} & \colhead{Cube 4} } \startdata Configuration & C + D & C + D & C & C\\ Channel width (km s$^{-1}$) & 21.13 & 21.13 & 10.57 & 5.28\\ Weighting & Natural & Robust 0.1 & Natural & Natural\\ PSF (FWHM, PA) & $30\farcs6 \times 18\farcs4, -31\degr$ & $22\farcs0 \times 14\farcs7, - 28\degr$ & $ 27\farcs3 \times 16\farcs6, - 29\degr$ & $27\farcs3 \times 16\farcs6, -29\degr$\\ Pixel Size & $5''$ & $5''$ & $5''$ & $5''$\\ Number of channels & 32 & 32 & 52 & 104\\ $T_b/I$ (K/mJy\thinspace beam$^{-1}$) & 1.10 & 1.92 & 1.37 & 1.37\\ $\sigma_{\rm rms}$ (mJy\thinspace beam$^{-1}$)\tablenotemark{a} & 0.50 & 0.58 & 0.74 & 1.04\\ \enddata \tablenotetext{a} {rms noise per channel} \end{deluxetable*} To select areas of genuine \ion{H}{1}\ emission, we convolved the cube made with natural weight and 21.1 km s$^{-1}$\ channels to $60''$ resolution, clipped it at 2.5 times its rms noise, and retained regions of emission only if the feature appeared in at least two adjacent velocity channels. The resulting cube was applied as a blanking mask to the other cubes. Table\,\ref{HIcubes} lists properties of our four final \ion{H}{1}\ data subcubes. 
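The blanking-mask logic (clip the smoothed cube at 2.5 times its rms noise and keep only features present in at least two adjacent velocity channels) can be sketched in a few lines. This is an illustrative reimplementation with numpy; the actual processing was done in AIPS:

```python
import numpy as np

def blanking_mask(cube, rms, nsigma=2.5):
    """Sketch of the HI blanking mask: clip a smoothed cube at nsigma*rms and
    keep a pixel only if it is also bright in an adjacent velocity channel.
    cube : array of shape (nchan, ny, nx), already convolved to low resolution.
    """
    bright = cube > nsigma * rms
    keep = np.zeros_like(bright)
    # genuine emission must appear in at least two adjacent channels
    keep[:-1] |= bright[:-1] & bright[1:]
    keep[1:] |= bright[1:] & bright[:-1]
    return keep

# toy cube: a two-channel feature survives, a single-channel spike is blanked
cube = np.zeros((4, 3, 3))
cube[1, 1, 1] = cube[2, 1, 1] = 10.0   # persists over two adjacent channels
cube[0, 0, 0] = 10.0                   # single-channel noise spike
mask = blanking_mask(cube, rms=1.0)
assert mask[1, 1, 1] and mask[2, 1, 1]
assert not mask[0, 0, 0]
```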
The cube with the highest sensitivity is Cube 1, made from C plus D configuration data with natural weighting and 21.1 km s$^{-1}$\ velocity resolution. The cube with the highest spatial resolution is Cube 2, made from C plus D configuration data with Robust = 0.1 in the AIPS task IMAGR and 21.1 km s$^{-1}$\ velocity resolution. The cube with the highest velocity resolution (and the largest rms noise) is Cube 4, made from C configuration data alone with natural weighting and 5.28 km s$^{-1}$\ velocity resolution. Cube 3 was made from Cube 4 by averaging the channels to 10.57 km s$^{-1}$\ velocity resolution to reduce the rms noise. For the line-of-sight \ion{H}{1}\ column density image displayed in the figures below, we convolved the zeroth moment image made from Cube 2 to a circular $22''$ (HPBW) beam and corrected for primary-beam attenuation. This correction factor has a mean value of 1.04 at the location of NGC 3145, 1.07 at the location of NGC 3143, and 1.26 at the location of PGC 029578. For NGC 3145, we find the integrated \ion{H}{1}\ line flux $S$(HI) is 20.4 Jy km s$^{-1}$, which corresponds to an \ion{H}{1}\ mass $M$(HI) of $1.4 \times 10^{10}$ $M_{\sun}$. Our value of $S$(HI) for NGC 3145 is 30\% greater than the \ion{H}{1}\ Parkes All Sky Survey value \citep{doyle05} of 15.4 Jy km s$^{-1}$. For NGC 3143, we find $S$(HI) is 0.91 Jy km s$^{-1}$, and thus $M$(HI) = $6.5 \times 10^8$ $M_{\sun}$. As we shall see below, NGC 3143 is somewhat deficient in \ion{H}{1}. For PGC 029578, we find $S$(HI) is 4.7 Jy km s$^{-1}$, and thus $M$(HI) = $3.3 \times 10^9$ $M_{\sun}$.
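The quoted \ion{H}{1}\ masses follow from the standard optically-thin relation $M({\rm HI})/M_{\sun} = 2.356\times10^5\,D^2\,S({\rm HI})$, with $D$ in Mpc and $S({\rm HI})$ in Jy km s$^{-1}$; a quick check against the numbers above:

```python
def hi_mass(S_HI, D_Mpc):
    """M(HI)/Msun = 2.356e5 * D^2 * S for optically thin HI emission,
    with D in Mpc and S in Jy km/s."""
    return 2.356e5 * D_Mpc**2 * S_HI

D = 54.8  # adopted distance in Mpc
# integrated line fluxes (Jy km/s) and quoted masses (Msun) from the text
for name, S, M_quoted in [("NGC 3145", 20.4, 1.4e10),
                          ("NGC 3143", 0.91, 6.5e8),
                          ("PGC 029578", 4.7, 3.3e9)]:
    M = hi_mass(S, D)
    assert abs(M - M_quoted) / M_quoted < 0.05   # agrees to within rounding
```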
The phase center was R.A., decl.(2000) = 10 10 09.995, -12 26 01.90, within $2''$ of the NGC 3145 nucleus. The phase calibrator was 0941-080, and the flux calibrator was 3C 286. The smaller galaxies NGC 3143 and PGC 029578 were too far from the phase center to be detected. The AIPS software was used for the data reduction. After calibrating the $uv$ data from each VLA configuration separately and checking the separate maps, we combined the $uv$ data from the two configurations and ran the AIPS task IMAGR with ROBUST = 0 to make and clean a map with synthesized beam $7.4'' \times 5.5''$ (HPBW) and BPA = $12\degr$. A surface brightness of 1 mJy\thinspace beam$^{-1}$\ corresponds to $T_b$ = 1.275 K, and the rms noise $\sigma_{\rm rms}$ is 0.0224 mJy\thinspace beam$^{-1}$, equivalent to $T_b$ = 0.029 K. The mean correction for primary beam attenuation in NGC 3145 is a factor of 1.009. We also convolved this image to a circular beam of $7.5''$ (HPBW). In this image, displayed in the figures below, a surface brightness of 1 mJy\thinspace beam$^{-1}$\ corresponds to $T_b$ = 0.920 K and the rms noise is 0.0228 mJy\thinspace beam$^{-1}$, equivalent to $T_b$ = 0.021 K. For NGC 3145 we find a total flux density $S_\nu$(4.86 GHz) = $9.1 \pm 0.3$ mJy. \subsection{Additional Data} We use H$\alpha$, broad-band optical, and infrared images that other observers and facilities have made available on-line. Table\,\ref{PSF} lists the FWHM of the point-spread functions (PSFs) of these images. The continuum-subtracted H$\alpha$\ image of NGC 3145 is from \citet{banfi93} and is not flux-calibrated. The $B$ image in Figure\,\ref{B.Features} is from \citet{sandage94}. Whenever we refer to a $B$ image of NGC 3145 without further specification, we mean this image. We use the other $B$ images described below when we need a larger field. 
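The mJy\thinspace beam$^{-1}$-to-Kelvin conversions quoted above can be reproduced with the standard Gaussian-beam brightness-temperature relation; a quick check using the beam sizes and frequency from the text:

```python
def tb_per_mJy(freq_GHz, bmaj_arcsec, bmin_arcsec):
    """Brightness temperature (K) of 1 mJy/beam for a Gaussian beam:
    T_b = 1.222e3 * S_mJy / (nu_GHz^2 * theta_maj * theta_min),
    with beam FWHM axes in arcsec."""
    return 1.222e3 / (freq_GHz**2 * bmaj_arcsec * bmin_arcsec)

# 4.86 GHz continuum beams used in the text
t1 = tb_per_mJy(4.8601, 7.4, 5.5)   # text quotes 1.275 K for the 7.4" x 5.5" beam
t2 = tb_per_mJy(4.8601, 7.5, 7.5)   # text quotes 0.920 K for the 7.5" circular beam
assert abs(t1 - 1.275) < 0.01
assert abs(t2 - 0.920) < 0.005
```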
We obtained a WFPC2, F606W ($R$-band) Hubble Space Telescope ({\it HST}) image from the {\it Hubble Legacy Archive}\footnote{Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA)}. \citet{martini03} made these {\it HST} observations to study circumnuclear dust, but in addition to the central regions, the field covers the southern half of NGC 3145. \begin{deluxetable}{lc} \tabletypesize{\scriptsize} \tablecaption{Resolution of Images\tablenotemark{a} \label{PSF}} \tablewidth{0pt} \tablehead{ \colhead{Image} & \colhead{FWHM of PSF} } \startdata Carnegie Atlas of Galaxies $B$ & $1.3'' \times 1.2''$\\ {\it MDM} $B$ & $2.3'' \times 1.9''$\\ {\it MDM} $V$ & $1.9'' \times 1.6''$\\ {\it MDM} $R$ & $1.7'' \times 1.5''$\\ Burrell-Schmidt $B$ & $4.1''$\\ {\it HST} WFPC2 $R$ & $0.20'' \times 0.18''$\\ DSS IIIaJ & $5.4'' \times 4.4''$\\ H$\alpha$\ & $\sim 1''$\\ $\lambda 6$ cm\ radio continuum & $7.4'' \times 5.5''$\\ $\lambda 6$ cm\ radio continuum & $7.5''$\\ $N$(HI) & $22''$\\ {\it WISE} 12 \micron\ & $9.2'' \times 8.4''$\\ {\it WISE} 22 \micron\ & $18'' \times 17''$\\ \enddata \tablenotetext{a} {Except in the case of the radio images and the H$\alpha$\ image, the FWHM values are from fitting two-dimensional Gaussians to stellar images in the optical or WISE 3.4 \micron\ image. For the H$\alpha$\ image, we list the value of the estimated seeing from \citet{banfi93}. } \end{deluxetable} We obtained from the NASA/IPAC Infrared Science Archive infrared images of NGC 3145 taken by the Wide-field Infrared Survey Explorer ({\it WISE}). We also use data from {\it 2MASS}. Paul Eskridge took $B$, $V$, and $R$ images of NGC 3145 for us at the 1.3 m telescope of the Michigan-Dartmouth-M.I.T. Observatory (MDM) on 1999 March 25. 
These images, taken under non-photometric conditions, are slightly trailed and slightly underexposed. We took a $B$ image with the Burrell-Schmidt telescope at Kitt Peak on 1993 January 29 and use this for NGC 3143. These images were bias subtracted, flat-fielded, combined, and sky-subtracted with standard IRAF procedures. For PGC 029578 we use a IIIaJ (4680 \AA) Digitized Sky Survey (DSS) image from the Space Telescope Science Institute. Philip Appleton took $B$ and $R$ images for us of NGC 3143 and PGC 029578 at the 0.6 m telescope of the Fick Observatory of Iowa State University, and he measured the (Johnson) $B$ and $R$ magnitudes listed in Table\,\ref{basic.optical} for these two galaxies. \section{Broad-band Optical, H$\alpha$, Radio Continuum, and Infrared Properties of NGC 3145} Throughout this section we use for deprojection of NGC 3145 the values of the position angle (PA) of the major axis and the inclination $i$ of the disk from Table\,\ref{basic.optical}. Our \ion{H}{1}\ kinematic data in Section 4.2 yield a value for the major axis PA consistent with the isophotal value in this table and a somewhat smaller value of $i$. \subsection{Broad-band Optical} \begin{figure*} \epsscale{0.9} \plotone{fig3.eps} \caption{Greyscale polar coordinate display of the $B$ image of Figure\,\ref{B.Features} after deprojection into the plane of NGC 3145. The azimuth $\theta$ is measured ccw from the NNE major axis. Some notable features are the {\it triangle} and the apparent {\it branch} of the main spiral arm starting at Feature $d$, as marked in Figure\,\ref{B.Features}. The labels ``branch'' and ``triangle'' indicate the approximate values of $\theta$ for these features. Apex $a$ of the {\it triangle} has $\theta \simeq 146\degr$. The arm connecting apices $b$ and $c$ of the {\it triangle} winds in opposite sense to all the other arms.
\label{az}} \end{figure*} The {\it HST} $R$-band image of the southern half of NGC 3145, displayed in the bottom panel of Figure\,\ref{B.Features}, affords a detailed view of some of the optical oddities marked in the top panel of that figure. The larger box encloses the {\it triangle}, and the smaller box is on the western antenna. Sets of \ion{H}{1}\ line-profiles from these two boxes will be displayed and discussed in Section 4.2.2. Heading north from apex $c$ of the {\it triangle} is a region of complex dust loops and feathers. {\it HST} images of Sb and Sc galaxies often reveal dust feathers crossing spiral arms \citep{la vigne06}. Numerical models of spiral galaxies by \citet{kim02}, \citet{kim06}, and \citet{shetty06} reproduce feathers and spurs as sheared structures resulting from magneto-Jeans instabilities as gas flows through a spiral shock, and \citet{lee12} and \citet{lee14} present an analytic discussion of the feathering instability of spiral arms for a thin disk with magnetic fields and self-gravitating gas. The dust feathers in this region of NGC 3145 are spaced 0.3 to 0.7 kpc apart and are probably produced by this mechanism. It is not clear whether this mechanism accounts for the dust loops in this part of NGC 3145. We shall refer to this region as the region of complex dust loops. Sandage's peculiar arm, which outlines the eastern edge of the {\it triangle}, is prominent in the {\it HST} image. Crossed by a number of dust feathers, it has a dust lane along part of its outer edge and another along part of its inner edge. This arm appears to be in front of the inner spiral arm, which it crosses at apex $a$, and thus in front of the disk. Figure\,\ref{az} is a display of the $B$ image in polar coordinates after deprojection into the plane of NGC 3145. In this image, the {\it triangle} is a prominent feature, and Sandage's peculiar arm has quite a different slope from that of the spiral arms, consistent with it being a material arm in front of the disk.
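The polar-coordinate display of Figure\,\ref{az} is obtained by deprojecting sky offsets into the galaxy plane and converting to $(R,\theta)$. A minimal sketch of that transformation, written in Python for concreteness (the PA and $i$ defaults are the isophotal values quoted in this paper; the sign conventions are illustrative and not necessarily those of the actual reduction):

```python
import numpy as np

def deproject(dx, dy, pa_deg=204.0, inc_deg=55.0):
    """Deproject sky offsets (dx east, dy north of the nucleus, arcsec)
    into the galaxy plane.  Returns (R, theta): in-plane radius (arcsec)
    and azimuth (deg) measured from the major axis."""
    pa = np.radians(pa_deg)
    inc = np.radians(inc_deg)
    # rotate so the major axis lies along the y'-axis
    xp = -dx * np.cos(pa) + dy * np.sin(pa)   # along apparent minor axis
    yp = dx * np.sin(pa) + dy * np.cos(pa)    # along major axis
    # stretch the apparent minor axis by 1/cos(i) to undo the tilt
    xp = xp / np.cos(inc)
    R = np.hypot(xp, yp)
    theta = np.degrees(np.arctan2(xp, yp)) % 360.0
    return R, theta
```

A point on the major axis deprojects to its sky radius unchanged, while a point on the minor axis is stretched by $1/\cos i \simeq 1.74$ for $i = 55\degr$.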
The {\it branch} starting at Feature $d$ is also visible in this image. Its slope in this polar-coordinate diagram is about the same as that of Sandage's peculiar arm between apices $a$ and $b$. The arm which outlines the western edge of the {\it triangle} and connects apices $c$ and $b$ winds in opposite sense to every other arm in NGC 3145 (see Figures\,\ref{B.Features} and \,\ref{az}); it increases in radius cw whereas all the other arms increase in radius ccw. It is less prominent in the $R$-band {\it HST} image than in the $B$ image of Figures\,\ref{B.Features} and \,\ref{az}. In our MDM images of NGC 3145, this arm decreases in prominence from $B$ to $V$ to $R$. This suggests it is dominated in $B$ by somewhat younger stars. In Figure\,\ref{B.Features} there is also a string of bluish clumps crossing Sandage's peculiar arm at Feature $f$ ($6''$ north of apex $a$) and continuing, quasi-parallel to the northern edge of the {\it triangle}, to the southern part of the region of complex dust loops. This string of clumps is not evident in the {\it HST} image. \begin{figure*} \epsscale{1} \plottwo{fig5.eps}{fig6.eps} \caption{Left: Greyscale plus contour display of the $\lambda 6$ cm\ radio continuum image of NGC 3145 with $7.5''$ resolution. Contour levels are at (3, 5, 6, 7, 9) $\times$ the rms noise. The rms noise is 0.0228 mJy\thinspace beam$^{-1}$, equivalent to $T_b$ = 0.021 K. See text about Feature $f$. Right: Greyscale display of H$\alpha$\ image overlaid with contours of $\lambda 6$ cm\ radio continuum emission. Several of the $\lambda 6$ cm\ radio knots coincide with H$\alpha$\ clumps. The cross marks the location of the nucleus. \label{6cm}} \end{figure*} \begin{figure*} \epsscale {0.7} \plotone{fig7a.eps} \caption{Greyscale display of $B$ image of NGC 3145 overlaid with contours of $\lambda 6$ cm\ radio emission from Figure\,\ref{6cm}.
{\it Apices $a$ and $b$ of the triangle are not prominent in the $\lambda 6$ cm\ radio.} North of Feature $f$ (marked in Figure\,\ref{6cm}) there is extended $\lambda 6$ cm\ radio emission along Sandage's peculiar arm. \label{B.6cm}} \end{figure*} From {\it 2MASS} $K_s$ isophotes, the PA of the major axis of the bar is 60\degr. The region of complex dust loops and the location where the {\it branch} (Feature $d$ in Figure\,\ref{B.Features}) departs from the spiral arm are on opposite sides of the galaxy at somewhat different distances from the nucleus and roughly along the same PA as the major axis of the bar. \subsection{Radio Continuum, H$\alpha$, and mid-IR} For NGC 3145, the NRAO/VLA Sky Survey (NVSS) \citep{condon98} lists a total flux density $S_\nu$(1.4 GHz) of $21.5 \pm 2.3$ mJy. Since we find $S_\nu$(4.86 GHz) = $9.1 \pm 0.3$ mJy, the global spectral index of NGC 3145 is $\alpha = -0.7 \pm 0.1$, which is typical of normal spirals. For normal galaxies with a spectral index in this range, the expression in \citet{condon92} for the ratio of free-free to total flux density gives a thermal fraction at $\lambda 6$ cm\ for a galaxy as a whole of 20\% to 25\%. Usually in spiral galaxies, the extended emission along the spiral arms is predominantly nonthermal, and only where knots occur is the emission mainly free-free. Young star-forming complexes appear as H$\alpha$\ sources and knots of $\lambda 6$ cm\ radio continuum emission in NGC 3145. The $\lambda 6$ cm\ radio continuum is also used to search for possible shock fronts. The left panel in Figure\,\ref{6cm} displays in greyscale and contours the $\lambda 6$ cm\ radio continuum image with $7.5''$ (= 2.0 kpc) resolution. In the right panel, these contours are overlaid on a greyscale display of the H$\alpha$\ image (which has a resolution of $\sim 1''$ = 260 pc).
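The quoted global spectral index follows directly from the two total flux densities, in the convention $S_\nu \propto \nu^{\alpha}$. A quick check, in Python for concreteness:

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Spectral index alpha in the convention S_nu ~ nu**alpha."""
    return math.log(s2_mjy / s1_mjy) / math.log(nu2_ghz / nu1_ghz)

# NVSS 1.4 GHz and our 4.86 GHz total flux densities for NGC 3145
alpha = spectral_index(21.5, 1.4, 9.1, 4.86)
# alpha ~ -0.69, i.e. -0.7 to one decimal place
```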
In Figure\,\ref{B.6cm}, the $\lambda 6$ cm\ contours are overlaid on the $B$ image in greyscale to show where the radio continuum emission is located relative to the features marked in Figure\,\ref{B.Features}. The $\lambda 6$ cm\ knots tend to lie along the brighter parts of the $B$-band spiral arms, and several coincide with H$\alpha$\ clumps, but not all of the brighter H$\alpha$\ clumps are seen as $\lambda 6$ cm\ emission knots. The two brightest $\lambda 6$ cm\ clumps lie on the inner and brighter of the two $B$-band spiral arms in the north and have flux densities $S_\nu$(6 cm) = $0.44 \pm 0.04$ mJy for the northwestern clump and $0.41 \pm 0.04$ mJy for the northern clump. Neither of these is at the ``X''-features marked in Figure\,\ref{B.Features}. \begin{figure} \epsscale{0.70} \plotone{fig7b.eps} \caption{In greyscale the portion of the {\it HST} image containing the region of complex dust loops and centered on R.A., decl.(2000) = 10 10 09.006, $-12$ 26 11.80. This is overlaid with contours at (3, 4, 5, 6, 7) $\times$ the rms noise from the $\lambda 6$ cm\ radio continuum image with $7.4'' \times 5.5''$ (HPBW) resolution. The rms noise is 0.0224 mJy\thinspace beam$^{-1}$, equivalent to $T_b$ = 0.029 K. Some of the extended $\lambda 6$ cm\ emission here may be from shocks associated with the dust loops and feathers. \label{HST.6cm}} \end{figure} In Figure\,\ref{HST.6cm}, contours from our $\lambda 6$ cm\ radio continuum image with higher resolution ($7.4'' \times 5.5''$ HPBW) are overlaid on the portion of the {\it HST} image containing the region of complex dust loops. This region has $\lambda 6$ cm\ flux density $S_\nu$(6 cm) = $0.73 \pm 0.06$ mJy. Although there is H$\alpha$\ emission here, some of the extended $\lambda 6$ cm\ emission may be from shocks associated with the dust loops and feathers. The $\lambda 6$ cm\ clump centered where a large dust loop or shell seems to cross another feature here has $S_\nu$(6 cm) = $0.21 \pm 0.03$ mJy.
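The mJy\thinspace beam$^{-1}$-to-Kelvin conversions quoted for the $\lambda 6$ cm\ maps follow from the Rayleigh--Jeans relation for a Gaussian beam. A sketch of that conversion (the $1.222 \times 10^3$ coefficient is the standard one for $\theta$ in arcsec, $\nu$ in GHz, and $S$ in mJy\thinspace beam$^{-1}$; small differences from the quoted values reflect rounding of the observing frequency):

```python
def tb_per_mjy(s_mjy, nu_ghz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature (K) of a surface brightness
    s_mjy (mJy/beam) in a Gaussian beam bmaj x bmin (arcsec, HPBW) at
    frequency nu_ghz (GHz).  The 1.222e3 coefficient absorbs c**2/(2k)
    and the Gaussian beam solid angle pi/(4 ln 2)."""
    return 1.222e3 * s_mjy / (nu_ghz**2 * bmaj_arcsec * bmin_arcsec)

# 1 mJy/beam at 4.86 GHz: ~1.27 K in the 7.4" x 5.5" beam,
# ~0.92 K in the convolved 7.5" circular beam
```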
\begin{figure*} \epsscale{0.9} \plottwo{fig9a.eps}{fig9b.eps} \caption{Left panel: Contours of {\it WISE} 12 \micron\ emission overlaid on $\lambda 6$ cm\ radio continuum in greyscale. The locations of the nucleus, apex $a$, and apex $b$ of the {\it triangle} are denoted by a plus sign, a five-pointed star, and a small triangle symbol, resp. Right panel: Greyscale plus contour display of the {\it WISE} 12 \micron\ image. Unlike the $\lambda 6$ cm\ radio emission, the 12 \micron\ emission does not have a ring-like distribution. \label{12micron}} \end{figure*} The following properties of Sandage's peculiar arm, the {\it triangle}, and the ``X''-features are apparent from a comparison of Figures\,\ref{6cm},\, \ref{B.6cm}, and \,\ref{B.Features}. There is a radio continuum knot at Feature $f$ (labelled in Figure\,\ref{6cm}), which is where the string of blue clumps intersects Sandage's peculiar arm. North of Feature $f$ there is extended $\lambda 6$ cm\ radio continuum emission along Sandage's peculiar arm. This is probably mainly synchrotron emission from shocked gas, since extended radio emission from arms in galaxies is usually nonthermal. We interpret this $\lambda 6$ cm\ emission feature to mean that the portion of Sandage's arm from Feature $f$ northwards is in the disk. The H$\alpha$\ emission, including the Feature $f$ knot, is faint, perhaps due to extinction, but the 12 \micron\ emission (see Figure\,\ref{12micron}) from this general area is relatively bright. The $\lambda 6$ cm\ radio emission here contrasts with that from the {\it triangle}. Aside from an H$\alpha$\ and $\lambda 6$ cm\ radio continuum star-forming clump between apices $a$ and $b$ on Sandage's peculiar arm, the {\it triangle}, including apices $a$ and $b$, is not prominent in the $\lambda 6$ cm\ radio continuum. Emission at $\lambda 6$ cm\ from the region of complex dust loops partly overlaps apex $c$, but the latter does not appear as a distinct clump at $\lambda 6$ cm.
We conclude that {\it there are no shocks at the arm-crossing ``X''s of the triangle, and thus the arms that appear to cross at the apices of the triangle must be in different planes.} Thus, whereas the part of Sandage's peculiar arm from Feature $f$ northwards appears to be in the disk, the part of this arm that forms the eastern side of the {\it triangle} is not in the disk. The distribution of the brighter $\lambda 6$ cm\ emission in NGC 3145 has a somewhat ring-like appearance, composed of the brighter portions of the $B$-band spiral in the northern half of the galaxy, the section of Sandage's peculiar arm north of Feature $f$, and connections between these (see Figures\,\ref{6cm} and \,\ref{B.6cm}). This contrasts with the distribution of 12 \micron\ emission. In Figure\,\ref{12micron}, contours of the {\it WISE} 12 \micron\ emission are overlaid on the $\lambda 6$ cm\ radio image in greyscale. The locations of the nucleus, apex $a$, and apex $b$ are denoted by a plus sign, a five-pointed star, and a small triangle symbol, resp. The brightest 12 \micron\ sources are at the locations of a) the northern $\lambda 6$ cm\ clump on the spiral arm north of the nucleus, b) the region of complex dust loops, and c) the nucleus. Aside from these three sources, the brightest 12 \micron\ emission is from the central hole in the distribution of $\lambda 6$ cm\ emission. Similarly, with the {\it WISE} 22 \micron\ image, whose resolution is a factor of 2 worse than that of the 12 \micron\ image, the brightest emission is from the central hole in the $\lambda 6$ cm\ distribution and from the northern $\lambda 6$ cm\ clump on the spiral arm. Thus there is a lot of warm dust in the inner disk/bar/bulge of NGC 3145 but not a similar concentration of cosmic-ray electrons. In Section 3.1 we noted that the arm outlining the western edge of the {\it triangle} decreases in prominence from $B$ to $V$ to $R$.
Since this arm is not prominent in H$\alpha$\ either, it appears that optical emission from this arm may be dominated by $A$ and/or $B$ stars. This may help constrain the age of this feature. Feature $d$ in Figure\,\ref{B.Features} marks where the {\it branch} departs from the main spiral arm on the northeastern side of NGC 3145. Since there is no string of H$\alpha$\ knots along the {\it branch}, it does not meet the definition of a spur as given by \citet{la vigne06}. A clump prominent in H$\alpha$\ and $\lambda 6$ cm\ emission lies on the main spiral arm near this point. In Section 4.2.2, we shall see that there is a large difference in velocity between the spiral arm and the {\it branch} at Feature $d$. \section{\ion{H}{1}\ Properties of NGC 3145} \subsection{\ion{H}{1}\ Images} Figure\,\ref{HI.triplet} displays the line-of-sight column density $N$(HI) image of the three galaxies in the system in greyscale and contours with $22''$ resolution. Table\,\ref{HIproperties} lists the basic \ion{H}{1}\ properties of the NGC 3145 triplet from our observations. \begin{figure*} \epsscale{0.747} \plotone{fig10.eps} \caption{Greyscale plus contour display of the $N$(HI) image of the NGC 3145 triplet with resolution = $22''$. Contour levels are at 100, 200, 400, 500, 600, and 800 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$, where 100 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$\ corresponds to a line-of-sight column density $N$(HI) = $2.3 \times 10^{20}$ atoms\thinspace cm$^{-2}$. The \ion{H}{1}\ distribution in NGC 3145 is not axisymmetric; there is a trough at $6''$ to $26''$ southeast of the nucleus and three major \ion{H}{1}\ concentrations (NE, SW, and NW of the nucleus.) \label{HI.triplet}} \end{figure*} \begin{figure*} \epsscale{0.68} \plotone{fig11.eps} \plotone{fig12.eps} \caption{Top: $B$ image of NGC 3145 in greyscale overlaid with $N$(HI) contours.
Contour levels are at 200, 400, 500, 600, 700, 800 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$, where 100 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$\ corresponds to $N$(HI) = $2.3 \times 10^{20}$ atoms\thinspace cm$^{-2}$. The major \ion{H}{1}\ concentration northeast of the nucleus is elongated to the south, not along the main spiral arm but along the {\it branch} or a bit to the east of it. Bottom: Contours of the first moment image of NGC 3145 overlaid on the {\it MDM} $B$ image in greyscale, with a plus symbol at the nucleus. The contour interval is 20 km s$^{-1}$. Along the {\it branch} there are kinks in the isovelocity contours. \label{N(HI).N3145}} \end{figure*} \begin{deluxetable*}{lccc} \tabletypesize{\scriptsize} \tablecaption{Basic \ion{H}{1}\ Properties of the NGC 3145 Triplet \label{HIproperties}} \tablewidth{0pt} \tablehead{ \colhead{Characteristic} & \colhead{NGC 3145} & \colhead{NGC 3143} &\colhead{PGC 029578} } \startdata Velocity Range of HI emission (km s$^{-1}$) & 3407 to 3919 & 3491 to 3581 & 3407 to 3623\\ Heliocentric $v_{\rm sys}$ (km s$^{-1}$) & $3655.9 \pm 0.2$ & $3530 \pm 5$ & $3512 \pm 5$\\ PA of receding major axis:\\ Kinematic & $205.44\degr \pm 0.06\degr$ & \nodata & $270\degr \pm 3\degr$\\ Isophotal & $204\degr \pm 2\degr$ & $235\degr \pm 5\degr$ & $283\degr \pm 5\degr$\\ PA of kinematic minor axis & $-60\degr \pm 2\degr$ & $\sim 0\degr$ & $173\degr \pm 3\degr$\\ Kinematic $i$ & $50.3\degr \pm 0.3\degr$ & \nodata & \nodata\\ Isophotal $i$ & $55\degr \pm 1\degr$ & $\leq 35\degr$ & $61\degr \pm 2\degr$\\ HI diameter & $262'' \pm 2''$ & $52'' \pm 2''$ & $122'' \pm 2''$\\ HI diameter/$D_{25}$ & $1.35 \pm 0.09$ & $1.00 \pm 0.17$ & $1.20 \pm 0.20$\\ Integrated HI flux $S$(HI) (Jy km s$^{-1}$) & 20.4 & 0.91 & 4.72 \\ $M$(HI) ($M_{\sun}$) & $1.44 \times 10^{10}$ & $6.5 \times 10^8$ & $3.33 \times 10^9$\\ Dynamical mass $M_{\rm dyn}$ ($M_{\sun}$)\tablenotemark{a} & $5.0 \times 10^{11}$ & \nodata & $2.2 \times 10^{10}$ \\ $M$(HI)/$M_{\rm 
dyn}$\tablenotemark{a} & 0.029 & \nodata & 0.13 \\ Maximum HI column density (atoms\thinspace cm$^{-2}$) & $2.3 \times 10^{21}$ & $7.0 \times 10^{20}$ & $1.8 \times10^{21}$\\ \enddata \tablenotetext{a} {out to $R$ = $127''$ = 33 kpc for NGC 3145, and $R$ = $50''$ =13 kpc for PGC 029578} \end{deluxetable*} \begin{figure*} \epsscale{0.8} \plottwo{fig13a.eps}{fig13b.eps} \caption{Left: Velocity field contours overlaid on a greyscale display of the velocity dispersion image before correction for the velocity gradient across the beam. Right: $N$(HI) contours of NGC 3145 from Figure\,\ref{HI.triplet} overlaid on greyscale display of the velocity dispersion image corrected for the velocity gradient. In both panels the units on the greyscale wedge are km s$^{-1}$, and the locations of the nucleus, apex $a$ and apex $b$ of the {\it triangle} are marked by the plus symbol, the five-pointed star, and the small triangle symbol, resp. Blanked pixels are white. \label{vel.disp}} \end{figure*} In the top panel of Figure\,\ref{N(HI).N3145} $N$(HI) contours are overlaid on the $B$ image of NGC 3145 in greyscale. The distribution of \ion{H}{1}\ emission from NGC 3145 is not axisymmetric; it consists of three major concentrations plus fainter extended emission and a trough $6''$ to $26''$ southeast of the nucleus. The brightest \ion{H}{1}\ concentration is northeast of the nucleus and includes Feature $d$. This region has an \ion{H}{1}\ mass $M$(HI) = $1.8 \times 10^9$ $M_{\sun}$. It is elongated to the south, not along the main spiral arm but along the {\it branch} or a bit to the east of it. At $20''$ south of Feature $d$, the \ion{H}{1}\ emission from the {\it branch} is still bright, whereas the \ion{H}{1}\ emission from the spiral arm is near the trough. We conclude in Section 4.2.2 that this major \ion{H}{1}\ concentration cannot be a single entity. 
The second brightest \ion{H}{1}\ concentration is southwest of the nucleus and has $M$(HI) = $1.4 \times 10^9$ $M_{\sun}$\ if we omit its southern tail along the spiral arm. The third brightest \ion{H}{1}\ concentration lies on the northwestern part of the two northern spiral arms, and its brightest part coincides with the most luminous $\lambda 6$ cm\ clump in the galaxy. Its $M$(HI) = $6 \times 10^8$ $M_{\sun}$. These \ion{H}{1}\ masses are comparable to those of massive \ion{H}{1}\ clouds in a number of interacting galaxy pairs (see \citet{kaufman99} and references therein). This suggests that the massive \ion{H}{1}\ concentrations in NGC 3145 could have resulted from an encounter. The total $M$(HI) of NGC 3145 is $1.4 \times 10^{10}$ $M_{\sun}$. Its $M$(HI)/$L_{\rm B}$ ratio of 0.17 $M_{\sun}$ $L_{\sun}^{-1}$ is close to the median value for galaxies of the same Hubble type (Sbc) in \citet{roberts94}. The NGC 3145 first moment image (often called the velocity field) is displayed as contours overlaid on the {\it MDM} $B$ image in greyscale in the bottom panel of Figure\,\ref{N(HI).N3145}. The velocity dispersion image, uncorrected for the velocity gradient across the synthesized beam, is displayed in greyscale with velocity-field contours overlaid in the left panel of Figure\,\ref{vel.disp}, and the velocity dispersion image after correction for the velocity gradient is displayed in greyscale with $N$(HI) contours overlaid in the right panel of Figure\,\ref{vel.disp}. The first moment and velocity dispersion images are from the cube with lowest rms noise (Cube 1) and are blanked where $N$(HI) $\leq 1.9 \times 10^{20}$ atoms\thinspace cm$^{-2}$. A plus symbol marks the location of the nucleus. In the velocity-dispersion figures (and in other figures below), the locations of apex $a$ and apex $b$ of the {\it triangle} are denoted by the five-pointed star and the small triangle symbol, resp.
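The \ion{H}{1}\ masses quoted in this section follow from the standard optically-thin relation $M({\rm HI}) = 2.36 \times 10^5\,D^2\,S({\rm HI})$ with $D$ in Mpc and $S$(HI) in Jy km s$^{-1}$. A sketch of the arithmetic (the distance $D \approx 54$ Mpc used below is an assumption inferred from this paper's angular-to-linear conversions, e.g. $1'' \simeq 260$ pc, not a value stated explicitly here):

```python
def hi_mass_msun(s_jy_kms, d_mpc):
    """Optically-thin HI mass in solar masses:
    M(HI) = 2.36e5 * D^2 * S(HI), D in Mpc, S in Jy km/s."""
    return 2.36e5 * d_mpc**2 * s_jy_kms

D = 54.0  # Mpc -- assumed; consistent with the paper's 1" ~ 260 pc scale
m_n3145 = hi_mass_msun(20.4, D)   # ~1.4e10 Msun, as quoted for NGC 3145
```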
The velocity field exhibits some mild irregularities but not the highly distorted velocity fields found in some interacting galaxies, e.g., NGC 2207 \citep{elmegreen95} or NGC 2535 \citep{kaufman97}. Along the {\it branch} there are kinks (or wiggles) in the velocity contours. At the arms where the $\lambda 6$ cm\ radio emission is prominent in Figure\,\ref{6cm}, there is no evidence from Figure\,\ref{N(HI).N3145}, or from a first moment image made from the C configuration data with 10 km s$^{-1}$\ velocity resolution, of kinks (or wiggles) in the isovelocity contours due to streaming motions. If the radial component of streaming motions dominates, such kinks should be {\bf u}-shaped on the western side and {\bf n}-shaped on the eastern side of the galaxy. Either the somewhat ring-like distribution of $\lambda 6$ cm\ emission is not expanding or our \ion{H}{1}\ observations have too low a spatial resolution to reveal an expansion. A resonance ring produced by a bar would not be expanding. The velocity dispersion image that is corrected for the velocity gradient across the synthesized beam has particularly high values for the velocity dispersion $\sigma_{\rm v}$\ at apex $a$ of the {\it triangle} and in the region of the complex dust loops. For the oddities labelled in Figure\,\ref{B.Features}, Section 4.2.2 compares numerical values of $\sigma_{\rm v}$\ before and after this correction. High values of the corrected $\sigma_{\rm v}$\ may arise, in part, because the line of sight intercepts gas at various altitudes above the disk and at various radial distances, in addition to streaming motions on small spatial scales and turbulence within the disk. \subsection{Analysis of \ion{H}{1}\ Motions} \subsubsection{Large Scale Anomalies} We use the AIPS program GAL to fit a model rotation curve to the first moment image of NGC 3145 and determine its kinematic parameters. The routine assumes a flat disk in rotation.
The residuals from such a fit may reveal whether significant parts of the galaxy are not in a flat disk or have motions other than rotation. We first used GAL to fit a Brandt model \citep{brandt60} rotation curve to our first moment image of the disk as a whole. It finds a kinematic center within $1''$ of the optical nucleus, a systemic velocity $v_{\rm sys}$ = $3655.9 \pm 0.2$ km s$^{-1}$, close to the optical value of $3652 \pm 6$ km s$^{-1}$, and a position angle of the receding major axis PA = $205.4\degr \pm 0.06\degr$, consistent with the isophotal value of 205\degr\ from {\it 2MASS} $K_{\rm s}$ and from the $N$(HI) outer isophotes. \begin{figure*} \epsscale{0.68} \plotone{fig14.eps} \caption{Greyscale and contour display of velocity residuals from fitting a Brandt model rotation curve to the first moment image of NGC 3145 as a whole. The contours are at $20$ km s$^{-1}$\ and $-20$ km s$^{-1}$, and the units on the greyscale wedge are km s$^{-1}$. The locations of the nucleus, apex $a$ and apex $b$ are marked as in Figure\,\ref{vel.disp}. Lines are drawn to divide the galaxy into four quadrants: receding major axis (SSW), near-side minor axis (WNW), approaching major axis (NNE), and far-side minor axis (ESE). The boxes on the major axis are the $R_{\rm max}$ boxes where we fit the line-profile at each pixel with the sum of two Gaussians. \label{vel.resid}} \end{figure*} \begin{figure} \epsscale{0.90} \plotone{fig15c.eps} \caption{Brandt rotation curve fits to major-axis quadrants of the first moment image of NGC 3145 with the receding major-axis quadrant in red and the approaching major-axis quadrant in blue.
\label{rot.curve}} \end{figure} \begin{deluxetable*}{lcccc} \tabletypesize{\scriptsize} \tablecaption{Velocity Parameters from fitting the data with GAL\label{galfit}} \tablewidth{0pt} \tablehead{ \colhead{Quadrant} & \colhead{$i$} & \colhead{$v_{\rm max}$} & \colhead{$R_{\rm max}$} & \colhead{$n_{\rm B}$} \\ & \colhead{($\arcdeg$)} & \colhead{(km s$^{-1}$)} & \colhead{($\arcsec$)}\\ } \startdata whole disk & $46.1 \pm 0.2$ & $286.8 \pm 0.9$ & $74.6 \pm 0.5$ & $1.44 \pm 0.02$ \\ NNE Major Axis & $50.6 \pm 0.3$ & $271.4 \pm 1.0$ & $76.5 \pm 0.9$ & $1.28 \pm 0.03$ \\ SSW Major Axis & $50.1 \pm 0.3$ & $271.4 \pm 1.1$ & $68.6 \pm 0.5$ & $1.84 \pm 0.03$ \\ \enddata \end{deluxetable*} On the other hand, the velocity-residual field exhibits a curious asymmetry on and near the minor axis. In Figure\,\ref{vel.resid}, the residual velocity image (observed minus model) is displayed in greyscale and contours, and the locations of the kinematic center, apex $a$, and apex $b$ are denoted by the plus sign, the five-pointed star, and the small triangle symbols, resp. Lines are drawn in this figure to divide the galaxy into four quadrants: these are centered on the receding major axis in the SSW, the near-side (relative to us) minor axis in the WNW, the approaching major axis in the NNE, and the far-side minor axis in the ESE. On the near-side minor axis, the residual velocities are receding from us, whereas on the far-side minor axis, the residual velocities are approaching us. The minor axis is where radial or $z$-motions (perpendicular to the disk) would be most apparent. The observed asymmetry would be consistent with radial inflow or with $z$-motions. The asymmetry region overlaps part of the ring-like distribution of $\lambda 6$ cm\ emission, so if the motions producing the asymmetry were radial, this would argue against an expanding pseudo-ring because it would have to be a contracting pseudo-ring instead. We conclude below that the more likely explanation is $z$-motions.
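The Brandt \citep{brandt60} rotation curve fitted by GAL rises to a maximum $v_{\rm max}$ at $R_{\rm max}$ and then declines, with a shape index $n$. A sketch of its standard parameterization, $v(R) = v_{\rm max}\,x\,[\,1/3 + (2/3)\,x^{n}\,]^{-3/(2n)}$ with $x = R/R_{\rm max}$, using the whole-disk parameters of Table\,\ref{galfit} as defaults:

```python
def brandt_velocity(r_arcsec, vmax=286.8, rmax=74.6, n=1.44):
    """Brandt (1960) rotation curve, v(R) in km/s.
    Defaults: the whole-disk GAL fit (vmax km/s, rmax arcsec)."""
    x = r_arcsec / rmax
    return vmax * x / (1.0 / 3.0 + (2.0 / 3.0) * x**n) ** (1.5 / n)
```

By construction $v(R_{\rm max}) = v_{\rm max}$, since the bracket equals unity at $x = 1$.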
The residual velocities have somewhat greater magnitude on the WNW minor axis (25 to 62 km s$^{-1}$) than on the ESE minor axis ($-19$ to $-27$ km s$^{-1}$). At the location of the {\it triangle}, the residual velocities are small, $\simeq 4$ km s$^{-1}$. With the values of the kinematic center and systemic velocity fixed as above, we used GAL to fit separate Brandt rotation curves to the receding and approaching major-axis quadrants. The results are listed in Table\,\ref{galfit} and displayed in Figure\,\ref{rot.curve}. Brandt curves are good fits to the major axis quadrants, and for values of the face-on radius $R \geq 30''$, they yield velocity differences of less than 5\% between the approaching and the receding sides. The solution for the major axis quadrants gives a value for the inclination $i = 50.3\degr \pm 0.3\degr$\ (a little smaller than the isophotal value $i = 55\degr$ from the {\it 2MASS} isophotes and the $N$(HI) outer isophotes) and a maximum velocity of the rotation curve $v_{\rm max} = 271 \pm 1$ km s$^{-1}$. This maximum occurs at $R_{\rm max}$ = $68.6''$ = 18 kpc on the receding side and $76.5''$ = 20 kpc on the approaching side. To provide more information about noncircular motions, Figure\,\ref{profiles.north} displays \ion{H}{1}\ line profiles, spaced $15''$ apart (i.e., every third spatial pixel) from Cube 1 for the entire northern half of NGC 3145. Figure\,\ref{profiles.south} does the same for the entire southern half of the galaxy. At distances $\gtrsim 35''$ from the nucleus, the line profiles on and near the approaching major axis (NNE side) consist of a large amplitude, narrow peak with an asymmetric, extended, high velocity wing, and the line profiles on and near the receding major axis (SSW) side consist of a large amplitude, narrow peak with an asymmetric, extended, low velocity wing.
Since observed motions on a major axis cannot contain a contribution from radial inflow or radial outflow, the skewed wings in these profiles admit two interpretations: out-of-plane gas in a thick disk rotating more slowly than the thin disk, or $z$-motions of gas moving away from us (relative to the disk) on the approaching major axis and towards us (relative to the disk) on the receding major axis. As we find kinematic oddities on the minor axis as well as on the major axis, the simplest explanation for both is $z$-motions. Given the optical oddities that suggest out-of-plane arms in this galaxy, it is not surprising to find motions perpendicular to the disk. Taken together with the asymmetry in the residual velocities on and near the minor axis (see Figure\,\ref{vel.resid}), it seems that relative to the disk, there is gas with $z$-motions away from us on and near the NNE major axis and near the WNW minor axis and $z$-motions towards us on and near the SSW major axis and near the ESE minor axis. This anti-symmetry of the $z$-motions suggests the disk is undergoing warping as a result of a close passage by a companion. We detect these warping motions out to a radius of at least $105''$. \begin{figure*} \epsscale{1.02} \plotone{fig16.eps} \caption{\ion{H}{1}\ line-profiles spaced $15''$ apart from Cube 1 (with $\sigma_{\rm rms}$ = 0.50 mJy\thinspace beam$^{-1}$\ and 21.1 km s$^{-1}$\ resolution) for the entire northern half of NGC 3145. The line-profiles on and near the major axis consist of a large amplitude peak with a skewed wing towards higher velocities. \label{profiles.north}} \end{figure*} \begin{figure*} \epsscale{0.88} \plotone{fig17.eps} \caption{\ion{H}{1}\ line-profiles spaced $15''$ apart from Cube 1 for the entire southern half of NGC 3145. The rms noise = 0.50 mJy\thinspace beam$^{-1}$. The line-profiles on and near the major axis consist of a large amplitude peak with a skewed wing towards lower velocities.
\label{profiles.south}} \end{figure*} For the line-profiles near the major axis that consist of just a major peak plus a skewed tail, we fit the sum of two Gaussians to each profile in the two boxes marked in Figure\,\ref{vel.resid}. These $35'' \times 35''$ boxes are centered approximately at $R_{\rm max}$ on the major axis, and we shall refer to them as the $R_{\rm max}$ boxes. We assume that the large-amplitude peak in the profile is from gas in the disk and that the skewed tail is from gas moving perpendicular to the disk. The goals are to determine some information about the \ion{H}{1}\ gas involved in the $z$-motions and to determine how much the rotation curve derived from the \ion{H}{1}\ velocity field at these positions is affected by the $z$-motions. Let the subscript 1 refer to the parameters from the Gaussian fit at the large-amplitude peak in each line profile and subscript 2 refer to the parameters from the Gaussian fit to the skewed tail. Table\,\ref{Rmax.box} lists the mean values from the two-Gaussian fits to the following parameters for the $R_{\rm max}$ boxes: (a) the difference $v_1 - v_2$ in the line-of-sight central velocities of the Gaussians, (b) the difference $v - v_1$ between the velocity $v$ of the velocity field in the first moment image and the central velocity $v_1$ of the large-amplitude peak, (c) the FWHM widths of the Gaussians, and (d) the ratio $N({\rm HI})_2/N({\rm HI})_1$ of the column densities. Locations with bad solutions or large uncertainties are omitted. The uncertainty attached to each mean is the mean of the Gaussian uncertainties. Also listed are the values of the standard deviation $s$ of the sample. \begin{figure*} \epsscale{1.0} \plotone{fig18.eps} \caption{Display of the Fisher-Pearson coefficient of skewness of the line-profiles in NGC 3145. Regions in green have negligible skewness of the line-profiles. The rest have moderate to high skewness. 
{\it Most of the half of the galaxy from PA = $60\degr$ to PA = $240\degr$ has negative skewness of the line profiles, i.e., gas moving towards us relative to the disk. The other half of the galaxy has large areas with positive skewness, i.e., gas moving away from us relative to the disk.} \label{skewness}} \end{figure*} \begin{deluxetable*}{lccccc} \tabletypesize{\scriptsize} \tablecaption{Two-Gaussian Fits to Line-Profiles in $R_{\rm max}$ Boxes\label{Rmax.box}} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Receding Side} &&& \colhead{Approaching Side}&\\ & \colhead{mean} & \colhead{$s$\tablenotemark{a}} && \colhead{mean} & \colhead{$s$}\\ } \startdata $v - v_1$ (km s$^{-1}$) & $-15\pm 1$ & $\pm 4$ && $12 \pm 1$ & $\pm 6$\\ $v_1 - v_2$ (km s$^{-1}$) & $47 \pm 7$ & $\pm 11$ && $-37 \pm 3$ & $\pm 7$\\ $(FWHM)_1$ (km s$^{-1}$) & $41 \pm 5$ & $\pm 6$ && $31 \pm 4$ & $\pm 3$\\ $(FWHM)_2$ (km s$^{-1}$) & $67 \pm 12$ & $\pm 10 $ && $50 \pm 9$ & $\pm 10$\\ $N({\rm HI})_2/N({\rm HI})_1$ & $0.4 \pm 0.1$ & $\pm 0.2$ && $0.9 \pm 0.2$ & $\pm 1.0$\\ \enddata \tablenotetext{a} {$s$ is the standard deviation of the sample} \end{deluxetable*} On the receding major-axis, the mean value of $v - v_1$ for the $R_{\rm max}$ box is $-15 \pm 1$ km s$^{-1}$\ with $s$ = 4 km s$^{-1}$, and for positions closest to the major axis the mean is $-11$ km s$^{-1}$\ with $s$ = 2 km s$^{-1}$. On the approaching major axis, the mean value of $v - v_1$ for the $R_{\rm max}$ box is $12 \pm 1$ km s$^{-1}$\ with $s$ = 6 km s$^{-1}$, and for positions closest to the major axis the mean is 8 km s$^{-1}$\ with $s = 3$ km s$^{-1}$. Therefore the correction for this effect on $v_{\rm max} \sin i $ amounts to only about 5\%, which is negligible. 
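The two-Gaussian decomposition described above can be sketched as follows. This is a minimal illustration on a synthetic profile: the amplitudes, central velocities, widths, and noise level are invented for the example, not taken from the data cubes.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, v1, s1, a2, v2, s2):
    """Narrow disk component plus a broad component for the skewed tail."""
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

# Synthetic profile: large-amplitude peak at 3850 km/s plus a low-velocity wing
v = np.arange(3600.0, 4000.0, 5.0)                        # km/s
rng = np.random.default_rng(1)
y = two_gauss(v, 10.0, 3850.0, 18.0, 3.0, 3800.0, 28.0)   # mJy/beam
y += rng.normal(0.0, 0.1, v.size)                         # channel noise

popt, _ = curve_fit(two_gauss, v, y, p0=(8, 3860, 20, 2, 3790, 30))
a1, v1, s1, a2, v2, s2 = popt
fwhm1 = 2.3548 * abs(s1)                  # FWHM = 2 sqrt(2 ln 2) sigma
ratio = (a2 * abs(s2)) / (a1 * abs(s1))   # N(HI)_2 / N(HI)_1, from the areas
```

The column-density ratio follows from the ratio of the Gaussian areas, since $N$(HI) is proportional to the velocity-integrated brightness of each component.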
For a spherically symmetric mass distribution, the dynamical mass $M_{\rm dyn}(R)$ within face-on radius $R$ is $M_{\rm dyn}(R)$ = $R (v_{\rm rot})^2/G$ = $2.33 \times 10^5 R (v_{\rm rot})^2$ $M_{\sun}$\ for $R$ in kpc and circular velocity $v_{\rm rot}$ in km s$^{-1}$. We fit the velocity data in the major axis quadrants out to $R = 127''$ = 33.3 kpc, which is (approximately) the extent of significant major-axis emission in the $N$(HI) image. At this radius, the fitted curves in Figure\,\ref{rot.curve} have an average $v_{\rm rot}$ = $252 \pm 6$ km s$^{-1}$, which gives a dynamical mass $M_{\rm dyn}$ = $5.0 \times 10^{11}$ $M_{\sun}$. Since $M$(HI) = $1.4 \times 10^{10}$ $M_{\sun}$, \ion{H}{1}\ accounts for only 3\% of the dynamical mass out to this radius. The radius of $127''$ that we use for $M_{\rm dyn}$ is greater than the Holmberg radius of $103''$ used by \citet{faber79} for $M_{\rm dyn}$. At $R = 103''$ our data have an average $v_{\rm rot}$ of $263 \pm 3$ km s$^{-1}$, which is only 5\% greater than the value of 250 km s$^{-1}$\ used by \citet{faber79}. Thus aside from the difference in the adopted distance of NGC 3145, our value for $M_{\rm dyn}$ out to the Holmberg radius is consistent with theirs. The magnitude of the difference $|v_1 - v_2|$ in the line-of-sight central velocities in the $R_{\rm max}$ box on the receding side ($47 \pm 7$ km s$^{-1}$) is marginally consistent, within the uncertainties, with its magnitude ($37 \pm 3$ km s$^{-1}$) in the $R_{\rm max}$ box on the approaching side. This is compatible with the suggestion that the skewed tail in these line profiles was produced by the same mechanism on both sides of the galaxy. On both sides, the FWHM width of the Gaussian fit to the line profile is about 20 km s$^{-1}$\ broader for the skewed tail than for the large-amplitude peak. The larger velocity width for the skewed tail could result from a velocity gradient along the flow. 
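The dynamical-mass arithmetic above is easy to verify directly. A minimal sketch, using only the numbers quoted in the text (the constant $2.33 \times 10^5$ absorbs $G$ for $R$ in kpc and $v_{\rm rot}$ in km s$^{-1}$):

```python
def m_dyn(r_kpc, v_kms):
    """Dynamical mass (solar masses) of a spherically symmetric distribution:
    M = R v^2 / G = 2.33e5 R v^2 for R in kpc and v in km/s."""
    return 2.33e5 * r_kpc * v_kms ** 2

m = m_dyn(33.3, 252.0)        # R = 127'' = 33.3 kpc, <v_rot> = 252 km/s
hi_fraction = 1.4e10 / m      # M(HI) = 1.4e10 Msun, ~3% of M_dyn
```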
For the ratio $N({\rm HI})_2/N({\rm HI})_1$ of the column densities from the Gaussian fits, we restrict the analysis to positions where the propagated uncertainty in this ratio is less than 50\%; this eliminates about half of the positions in the above boxes. Then for the $R_{\rm max}$ box on the receding side, the mean value of $N({\rm HI})_2/N({\rm HI})_1$ = $0.4 \pm 0.1$ with standard deviation $s$ of the sample = 0.2 and range = 0.2 to 0.9. For the $R_{\rm max}$ box on the approaching side, the mean value of this ratio = $0.9 \pm 0.2$ with $s$ = 1.0 and range = 0.2 to 3.5. We conclude that the amount of gas involved in the $z$-motions is not insignificant but varies considerably with position. Another way of analyzing the velocities in NGC 3145 is to look at the image of the skewness of the line-profiles in Figure\,\ref{skewness}. The parameter displayed is the dimensionless Fisher-Pearson coefficient of skewness \begin{displaymath} g_1 = \frac{m_3}{(\sigma_{\rm v})^3} \end{displaymath} where $m_3$ is the third central moment of the velocity data. For a sample of size $n$, $g_1$ needs to be multiplied by a correction factor \begin{displaymath} \frac{[n(n-1)]^{1/2}}{n-2}. \end{displaymath} For Cube 1, this correction factor varies from 1.06 near the center to 1.2 in the outskirts of NGC 3145. In Figure\,\ref{skewness}, regions in green have negligible skewness of the line-profiles, i.e., $|g_1| < 0.5$, and the rest of the galaxy has moderate to high skewness of the line-profiles. This image demonstrates that (1) most of the half of the galaxy from position angle PA = $60\degr$ to PA = $240\degr$ has negative skewness of the line-profiles, i.e., gas moving towards us relative to the disk. (2) In the other half of the galaxy there are large areas of positive skewness, i.e., gas moving away from us relative to the disk. \subsubsection{Small Scale Anomalies} We study the various oddities marked in Figure\,\ref{B.Features} by inspecting their \ion{H}{1}\ line-profiles. 
For NGC 3145, we find the line-profiles to be more revealing than position-velocity diagrams are. \begin{deluxetable}{lcc} \tabletypesize{\scriptsize} \tablecaption{Corrected Velocity Dispersions\label{corr.disp}} \tablewidth{0pt} \tablehead{ \colhead{Location} & \colhead{$\sigma_{\rm v}$} & \colhead{ Corrected $\sigma_{\rm v}$}\\ & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} } \startdata apex $a$ & 94 & 68\\ opp. apex $a$ & 60 & 40\\ \\ apex $b$ & 50 & 36\\ opp. apex $b$ & 41 & 38\\ \\ apex $c$ & 50 & 36\\ opp. apex $c$ & 57 & 50\\ \\ {\it triangle} (mean) & 72 & 49\\ opp. {\it triangle} (mean) & 50 & 37\\ \\ Feature $f$ &111 & 83\\ opp. Feature $f$ & 98 & 85\\ \\ complex dust-loops (mean) & 93 & 79\\ Feature $d$ (mean) & 71 & 59\\ western antenna (mean) & 54 & 46\\ \enddata \end{deluxetable} For features of interest, Table\,\ref{corr.disp} lists values of $\sigma_{\rm v}$\ of the \ion{H}{1}\ gas before and after correction for the mean velocity gradient across the PSF. These are from Cube 1, the cube with 21 km s$^{-1}$\ velocity resolution. The correction for these features decreases the velocity dispersion by 15\% to 32\%. For comparison, this Table also contains the values of $\sigma_{\rm v}$\ at the diametrically-opposite locations of some of the features. These are listed as ``opp. apex $a$,'' ``opp. apex $b$,'' etc. Since the line-profiles are not corrected for the velocity gradient, one does well to keep in mind the size of the correction to $\sigma_{\rm v}$\ when viewing the line-profiles since a large velocity gradient plus finite spatial resolution artificially broaden a line-profile. Our method of correcting the velocity dispersion is less accurate in regions, such as near the galaxy center, where the velocity gradient changes significantly across the PSF. 
This inaccuracy contributes to the very high values of the corrected $\sigma_{\rm v}$\ in part of the region of complex dust loops and at Feature $f$ (see velocity contours in Figure\,\ref{N(HI).N3145}). All of the corrected values of $\sigma_{\rm v}$\ for the features in Table\,\ref{corr.disp} (36 km s$^{-1}$\ to 68 km s$^{-1}$, if we exclude Feature $f$ and the region of complex dust loops) are large compared to the values of 6--13 km s$^{-1}$\ found by \citet{kamphuis93} for the \ion{H}{1}\ gas in undisturbed spirals, but are comparable to the high values of $\sigma_{\rm v}$\ measured in some galaxy pairs undergoing close encounters \citep{elmegreen95, kaufman97, kaufman99}. \begin{figure*} \epsscale{0.874} \plotone{fig19.eps} \caption{\ion{H}{1}\ line-profiles of the main spiral arm and the {\it branch} on the northeastern side of NGC 3145 from Cube 1, spaced $5''$ apart. The rms noise is 0.5 mJy\thinspace beam$^{-1}$. The panel labelled ``d'' is where the {\it branch} appears to intersect the main spiral arm. The {\it branch} is the peak at 3623 km s$^{-1}$, and the main spiral arm is the peak at 3475 km s$^{-1}$\ in the line-profile. The $\sim 150$ km s$^{-1}$\ difference in velocity between the {\it branch} and the main spiral arm where in projection they appear to intersect implies that {\it the branch is a tidal arm moving away from us.} \label{branch}} \end{figure*} The clearest situation is Feature $d$ in Figure\,\ref{B.Features}. Figure\,\ref{branch} displays \ion{H}{1}\ line-profiles, spaced $5''$ apart, for Feature $d$ and its surroundings. The panel labelled ``d'' is where the {\it branch} and the main spiral arm appear to intersect, and the panel just below it is where the {\it branch} departs from the spiral arm. The four panels in the upper right corner of Figure\,\ref{branch} (i.e., the intersection of rows 4 and 5, with columns 4 and 5, where we number rows and columns from the bottom left corner of the figure) are located along the main spiral arm. 
The greatest amplitude peak in their line-profiles is at 3454 km s$^{-1}$. The six panels in the bottom left corner of the figure (the intersection of rows 1, 2, and 3 with columns 1 and 2) are located along the {\it branch}. The greatest amplitude peak in their line-profiles is at $\sim 3620$ km s$^{-1}$. The line-profiles at the panel where the {\it branch} and the main arm appear to intersect and at the panel where the {\it branch} departs from the main arm have two main peaks: one at 3623 km s$^{-1}$\ (i.e., the {\it branch}) and the other at 3475 km s$^{-1}$ (i.e., the main spiral arm). Although optically Feature $d$ looks like a branch of the spiral arm, the $\sim 150$ km s$^{-1}$\ difference in line-of-sight velocity between the {\it branch} and the main spiral arm where in projection they appear to intersect suggests that the {\it branch} is an out-of-plane feature moving away from us relative to the disk, and there may be large streaming motions along it. Thus {\it the branch is a tidal arm.} To produce the observed structure, the perturbation needed an azimuthal component to draw out the tidal arm and a perpendicular component to make it extra-planar. The apparently major \ion{H}{1}\ concentration at this location cannot be a single entity; instead one part is associated with the main spiral arm and another part with the {\it branch}. As noted in Section 4.1 (see Figure\,\ref{N(HI).N3145}), the brighter \ion{H}{1}\ here is elongated to the south, not along the main spiral arm but along the {\it branch} or a bit east of it, and the velocity field contours have wiggles along the {\it branch}. \begin{figure} \epsscale{1.17} \plotone{fig21bot.eps} \plotone{fig21.eps} \caption{Bottom: \ion{H}{1}\ line-profiles from the cube with 10.6 km s$^{-1}$\ resolution, spaced $5''$ apart, for the {\it triangle}. The rms noise is 0.74 mJy\thinspace beam$^{-1}$. The panels at apices $a$, $b$, and $c$ of the {\it triangle} are labelled ``a,'' ``b,'' and ``c,'' resp. 
Top: \ion{H}{1}\ line-profiles for the diametrically-opposite region. The line-profiles labelled ``opp. a,'' ``opp. b,'' and ``opp. c'' are diametrically-opposite apices $a$, $b$, and $c$, resp. Only apex $a$ differs significantly in shape and width of the line-profile from that of its diametrically-opposite counterpart. \label{profiles.10kms}} \end{figure} \citet{struck90} comments that the optical {\it triangle} on the southern side of NGC 3145 looks like a swallowtail caustic. A swallowtail caustic is composed of five ``intersecting'' star streams. In his study of two-dimensional caustics with collisionless particles, he finds that such features may result from a slightly off-center direct collision with a smaller galaxy. Our \ion{H}{1}\ data do not have sufficient resolution to see individual gas streams in the {\it triangle}, and we note that once gas is included, a three-dimensional model would be needed. Instead, as a test we compare the \ion{H}{1}\ line-profiles in the {\it triangle} with those in the diametrically-opposite region on the northern side of NGC 3145. Also for the features of interest, we compare the values of the corrected velocity dispersion in Table\,\ref{corr.disp} with those at the diametrically-opposite locations. In Figure\,\ref{profiles.10kms}, the bottom panel displays \ion{H}{1}\ line-profiles, spaced $5''$ apart, for the {\it triangle} from Cube 3, which has a velocity resolution of 10.6 km s$^{-1}$\ and a spatial resolution of $27.3'' \times 16.6''$ (HPBW), BPA = $-29\degr$. The top panel displays the diametrically-opposite region, with line-profiles labelled ``opp. a,'' ``opp. b,'' and ``opp. c.'' Only apex $a$ shows a significant difference when comparing its line-profile and corrected $\sigma_{\rm v}$\ with those of the diametrically-opposite location. Apex $a$ is where the inner spiral arm and Sandage's peculiar arm intersect in projection. 
Thus, our data cannot confirm the swallowtail interpretation of the apparent {\it triangle}. If a slightly off-center direct collision were responsible for the anomalous features in NGC 3145, then to produce the observed warping motions over large spatial scales found in Section 4.2.1, the orbital tilt angle would have to be significantly less than $90\degr$ relative to the disk of NGC 3145. Our failure to find a strong expanding ring is consistent with an encounter that is not close to perpendicular. The line-profile at apex $b$ (see Figure\,\ref{profiles.10kms}) consists of a large amplitude, narrow peak plus an asymmetric, extended wing to lower velocities. Fitting the sum of two Gaussians to the line-profile at apex $b$ in Cube 1 gives (a) for the large amplitude peak, a central velocity $v_1$ = $3872 \pm 1$ km s$^{-1}$\ and $(FWHM)_1$= $42 \pm 3$ km s$^{-1}$, and (b) for the skewed wing, a central velocity $v_2$ = $3818 \pm 11$ km s$^{-1}$\ and $(FWHM)_2$ = $100 \pm 17$ km s$^{-1}$. Thus $v_1 - v_2$ =$54 \pm 11$ km s$^{-1}$. These values for the line-of-sight velocity difference $v_1 - v_2$ and $(FWHM)_1$ are similar to the mean values listed in Table\,\ref{Rmax.box} for the warping motions in the receding-side $R_{\rm max}$ box (which is southwest of apex $b$ and the western antenna). The value of $(FWHM)_2$ for apex $b$ is somewhat greater than the mean for this $R_{\rm max}$ box. Unlike apex $b$, the $R_{\rm max}$ box does not contain an ``X''-feature (where two arms appear to cross in projection). We note that two arms having the same line-of-sight velocity where they produce an ``X''-feature could, nevertheless, be in different planes. \begin{figure} \epsscale{1.1} \plotone{fig23.eps} \caption{\ion{H}{1}\ line profiles of the western-antenna continuation of Sandage's peculiar arm. These are from the cube with 10.6 km s$^{-1}$\ resolution, spaced $5''$ apart. The rms noise is 0.74 mJy\thinspace beam$^{-1}$. 
All but one of these line-profiles have two clearly distinct peaks separated by $56 \pm 8$ km s$^{-1}$. We attribute one of these peaks to Sandage's peculiar arm and the other to the disk. \label{west.antenna}} \end{figure} The western antenna is a continuation of Sandage's peculiar arm. Figure\,\ref{west.antenna} displays western antenna line-profiles of feature $e$ in Figure\,\ref{B.Features}. These are for the smaller of the two boxes drawn on the {\it HST} image in that Figure. The line-profiles are from Cube 3 (the cube with 10.6 km s$^{-1}$\ resolution), spaced $5''$ apart and do not resemble those at the diametrically-opposite location. Eight of the nine line-profiles in this figure exhibit two main, clearly distinct peaks of comparable amplitude. The mean separation in velocity between the two peaks is 56 km s$^{-1}$\ with a standard deviation $s$ of the sample = 8 km s$^{-1}$. This is essentially the same as the value of $v_1 - v_2$ = $54 \pm 11$ km s$^{-1}$\ at apex $b$. We attribute one of these peaks to Sandage's peculiar arm and the other to the disk. Like the {\it branch} on the northeastern side of NGC 3145, Sandage's peculiar arm appears to be a tidal arm, but differs by only $56 $ km s$^{-1}$\ in line-of-sight velocity from that of the disk, whereas the {\it branch} differs by $\sim 150$ km s$^{-1}$\ from that of the disk. In Section 3.2, we found that the portion of Sandage's peculiar arm from Feature $f$ northwards appears to be in the disk but once this arm reaches the eastern side of the {\it triangle}, it is no longer in the disk. If Sandage's peculiar arm is a tidal arm coming from the central part of the galaxy, it is not surprising to find a shock front along its initial part until it emerges from the disk. In summary, we find two types of extra-planar motions in NGC 3145: (i) an anti-symmetric, global warping of the disk and (ii) extra-planar tidal arms. 
Feature $d$ is quite distinct from the warping motions on the northern side of the galaxy in terms of the size of the line-of-sight velocity difference between components of the line-profile (150 km s$^{-1}$\ versus 37 km s$^{-1}$) and the shape of the line-profile. In contrast to this, the line-of-sight velocity difference between Sandage's peculiar arm and the disk is similar in size to that of the warping motions. For these two extra-planar arms, the observed difference in line-of-sight velocity between the arm and the disk could be a combination of $z$-motions and streaming along the arm. We do not have sufficient information to determine the relative contributions of these two types of motions. \section{\ion{H}{1}\ Properties of NGC 3143 and PGC 029578} Basic \ion{H}{1}\ properties of the two companions, NGC 3143 and PGC 029578, are listed in Table\,\ref{HIproperties}. Both companions are on the receding side of NGC 3145 (see Figure\,\ref{N(HI).N3145}) and have values of $v_{\rm sys}$ less than that of NGC 3145. There are no other galaxies near NGC 3145 that have published optical redshifts close to that of NGC 3145 or have \ion{H}{1}\ detections. The SB(s)b galaxy NGC 3143, at $8.97\arcmin$ (= 141 kpc) south of NGC 3145, is the nearer of the two companions and has the smaller diameter. It is bluer in $B - V$ than NGC 3145 (see Table\,\ref{basic.optical}), and optically its spiral arms are brighter than the spiral arms of NGC 3145. In the top panel of Figure\,\ref{N3143.HI}, $N$(HI) contours of NGC 3143 are overlaid on the $B$ image from the Burrell-Schmidt telescope. The \ion{H}{1}\ column densities in NGC 3143 are significantly lower than in NGC 3145 or in PGC 029578 (see Table\,\ref{HIproperties} and Figure\,\ref{HI.triplet}). In $K_s$-band, the luminosity of NGC 3143 is 0.073 times the luminosity of NGC 3145. We adopt this as the stellar mass ratio of NGC 3143 to NGC 3145. 
If we assume that the total mass ratio of NGC 3143 to NGC 3145 is the same as the stellar mass ratio, then since the dynamical mass of NGC 3145 is $M_{\rm dyn}$ =$5.0 \times 10^{11}$ $M_{\sun}$\ out to $1.3 \times$ the optical radius, the estimated mass of NGC 3143 is $\sim 3.7 \times 10^{10}$ $M_{\sun}$. The following comparisons indicate that NGC 3143 is somewhat deficient in \ion{H}{1}. The \ion{H}{1}\ mass $M$(HI) of NGC 3143 is $6.5 \times 10^8$ $M_{\sun}$, which is only 4.5\% of the \ion{H}{1}\ mass of NGC 3145. Thus the ratio of $M$(HI) to stellar mass is a factor of 1.6 greater in NGC 3145 than in NGC 3143. In NGC 3143, the ratio $M$(HI)/$L_{\rm B}$ = 0.085 $M_{\sun}$ $L_{\sun}^{-1}$. This value lies in the bottom 25\% of Sab, Sb spirals in \citet{roberts94} and is half of its value in NGC 3145. This low value of $M$(HI)/$L_{\rm B}$ could be the result of more active star formation in NGC 3143 as indicated by its bluer $B - V$ color or of loss of some \ion{H}{1}\ by NGC 3143 in an encounter. The amount of molecular gas in these galaxies has not been measured. \begin{figure*} \epsscale{0.7} \plotone{fig24a.eps} \plotone{fig24b.eps} \plotone{fig24c.eps} \caption{Top: $N$(HI) contours of NGC 3143 overlaid on a $B$ image in greyscale. Contour levels are at 100, 200, and 300 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$, where 100 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$\ corresponds to $N$(HI) = $2.3 \times 10^{20}$ atoms\thinspace cm$^{-2}$. NGC 3143 is somewhat deficient in \ion{H}{1}, and its \ion{H}{1}\ emission is more elongated on the northeastern major axis than on the southwestern major axis. Middle: \ion{H}{1}\ channel maps of NGC 3143 with contours at (3, 4, 5, 6) $\times$ the rms noise of 0.74 mJy\thinspace beam$^{-1}$ (equivalent to 1.0 K). Location of the nucleus is marked by a cross. The western side of NGC 3143 is the receding side. Bottom: Contours from the first moment map overlaid on the $N$(HI) image in greyscale. 
\label{N3143.HI}} \end{figure*} \begin{figure*} \epsscale{0.75} \plotone{fig25a.eps} \plotone{fig25b.eps} \caption{Images of PGC 029578. Top: $N$(HI) contours overlaid on a DSS image in greyscale. The contour levels are 100, 200, 400, 500, 600, 700, 800 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$, where 100 Jy\thinspace beam$^{-1}$ m\thinspace s$^{-1}$\ corresponds to $N$(HI) = $2.3 \times 10^{20}$ atoms\thinspace cm$^{-2}$. Bottom: Contours from the first moment image overlaid on the $N$(HI) image in greyscale, with a cross symbol marking the location of the nucleus. \label{PGC.HI}} \end{figure*} We estimate the star formation rate (SFR) of NGC 3143 from the magnitudes listed by {\it WISE}. At 22 \micron\ NGC 3143 has a magnitude $m$(22 \micron) = $6.173 \pm 0.055$ mag. Since it is 4 mag brighter at 12 \micron\ than at 4.6 \micron, NGC 3143 is a red source in the mid-infrared. To convert $m(22\ \micron)$ to flux density $S_\nu$(22 \micron), we use the zero-magnitude flux-density, the correction for red sources, and other information from the online {\it WISE} All-Sky Release Explanatory Supplement. This gives $S_\nu(22\ \micron)$ = 26 mJy for NGC 3143. We extrapolate to 24 \micron\ by taking $S_\nu \propto \nu^{-2}$. Then with a distance $D$ of 54.8 Mpc and $L(24\ \micron)$ = $\nu L_\nu$, the luminosity $L(24\ \micron)$ of NGC 3143 is $1.4 \times 10^{42}$ erg s$^{-1}$. \citet{calzetti07} find the following relation between SFR in $M_{\sun}$\ yr$^{-1}$ and $L(24\ \micron)$ in erg s$^{-1}$, \begin{displaymath} SFR = 1.27 \times 10^{-38}[L(24\ \micron)]^{0.885}. \end{displaymath} Applying this to NGC 3143 gives SFR = 0.25 $M_{\sun}$\ yr$^{-1}$. If the molecular gas depletion time of 2.2 Gyr from \citet{leroy12} applies to NGC 3143 as a whole, then an SFR of 0.25 $M_{\sun}$\ yr$^{-1}$ would require $M$(H$_2$) = $5 \times 10^8$ $M_{\sun}$, about the same as its measured \ion{H}{1}\ mass of $6.5 \times 10^8$ $M_{\sun}$. 
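The chain of conversions above, from the quoted $S_\nu(22\ \micron)$ = 26 mJy to $L(24\ \micron)$ and the SFR, can be reproduced numerically. A sketch (the magnitude-to-flux step, which requires the {\it WISE} zero points and red-source correction, is omitted; note that the \citet{calzetti07} coefficient is $1.27 \times 10^{-38}$ for $L$ in erg s$^{-1}$):

```python
import math

MPC_CM = 3.086e24                 # cm per Mpc
C_CM_S = 2.998e10                 # speed of light, cm/s

s22 = 26.0e-3                     # Jy, WISE 22 um flux density from the text
s24 = s22 * (24.0 / 22.0) ** 2    # extrapolate with S_nu proportional to nu^-2

d_cm = 54.8 * MPC_CM              # adopted distance of NGC 3143
nu24 = C_CM_S / 24.0e-4           # frequency at 24 um (= 24e-4 cm)
l_nu = 4.0 * math.pi * d_cm ** 2 * s24 * 1.0e-23   # erg/s/Hz (1 Jy = 1e-23 cgs)
l24 = nu24 * l_nu                 # nu L_nu, ~1.4e42 erg/s

sfr = 1.27e-38 * l24 ** 0.885     # Calzetti et al. (2007) relation, ~0.25 Msun/yr
```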
For a small galaxy it is unusual to have such a large fraction of the interstellar gas in molecular form. The {\it 2MASS} $K_s$ isophotes show that the isophotal major axis of NGC 3143 is at a position angle of $225\degr$. Figure\,\ref{N3143.HI} reveals that \ion{H}{1}\ emission extends about 50\% farther on the northeastern major axis than on the southwestern major axis. At the lowest contour level [$N$(HI) = $2.3 \times 10^{20}$ atoms\thinspace cm$^{-2}$] in Figures\,\ref{HI.triplet} and \ref{N3143.HI}, the semimajor axis of NGC 3143 is $31.5''$ on the northeastern side and $20.5''$ on the southwestern side, as if an interaction either truncated the \ion{H}{1}\ disk on its southwestern side or elongated it on its northeastern side. At this contour level, the \ion{H}{1}\ diameter of the major axis is $52'' \pm 2''$ for NGC 3143, $262'' \pm 2''$ for NGC 3145, and $122'' \pm 2''$ for PGC 029578. The ratio of this diameter to $D_{25}$ is $1.00 \pm 0.17$ for NGC 3143, $1.35 \pm 0.09$ for NGC 3145, and $1.20 \pm 0.20$ for PGC 029578. The middle panel of Figure\,\ref{N3143.HI} displays all the channel maps of NGC 3143 with emission greater than 3 $\times$ the rms noise in Cube 3, the cube with 10.6 km s$^{-1}$\ velocity resolution. The location of the nucleus is marked by a cross. These channel maps indicate that the receding side is on the western side and the approaching side is on the eastern side. In the bottom panel of Figure\,\ref{N3143.HI}, contours from the first moment image of NGC 3143 from Cube 3 are overlaid on the $N$(HI) image in greyscale. Since the major axis diameter is only about 3 times the HPBW of the \ion{H}{1}\ synthesized beam, we do not obtain a well-resolved velocity field. Some estimates of the inclination $i$ from the axis ratio are $i$ = $35\degr$ from 2MASS $K_s$ isophotes and $i$ = $14\degr$ from the DSS image \citep{zaritsky97}. 
The value of the axis ratio from the $N$(HI) isophotes depends on whether we use the northeastern or the southwestern semimajor axis; with the northeastern semimajor axis we obtain $i = 35\degr$, whereas with the southwestern semimajor axis we get $i \leq 25\degr$. At the center of NGC 3143, the mean \ion{H}{1}\ velocity = $3530 \pm 5$ km s$^{-1}$, which is consistent with the optical value of $v_{\rm sys}$ = 3536 km s$^{-1}$\ from \citet{schweizer87} and marginally consistent with the optical value $3506 \pm 20$ km s$^{-1}$\ at the nucleus measured by \citet{zaritsky97}. Our \ion{H}{1}\ value of $v_{\rm sys}$ for NGC 3143 is $125 \pm 5$ km s$^{-1}$\ below the value of $v_{\rm sys}$ = $3655.9 \pm 0.2$ km s$^{-1}$\ for NGC 3145. Therefore an encounter between these two galaxies could have had a significant component perpendicular to the disk of NGC 3145. The value of $v_{\rm sys}$ = $3652 \pm 5$ km s$^{-1}$\ listed on NED for NGC 3143 from \citet{paturel03} suffers from confusion with NGC 3145. The other companion, the Sdm galaxy PGC 029578, is $12.97\arcmin$ (= 204 kpc) south of NGC 3145. The \ion{H}{1}\ mass of PGC 029578, $3.3 \times 10^9$ $M_{\sun}$, is 23\% of $M$(HI) of NGC 3145 and five times the \ion{H}{1}\ mass of NGC 3143. \begin{figure} \epsscale{0.75} \plotone{fig26.eps} \caption{Position-velocity diagram of PGC 029578 along its kinematic major axis as a display of its rotation curve. Contours at (2, 4, 6, 8, 10) $\times$ rms noise of 0.74 mJy\thinspace beam$^{-1}$, equivalent to 1.0 K. \label{PGC.rotcurve}} \end{figure} The top panel in Figure\,\ref{PGC.HI} displays $N$(HI) contours of this galaxy overlaid on the DSS image in greyscale, and the bottom panel displays the velocity field contours from Cube 1 overlaid on the $N$(HI) image in greyscale. 
At the center of PGC 029578 the mean \ion{H}{1}\ velocity is $3513 \pm 5$ km s$^{-1}$, which differs considerably from the long-slit optical value of $3586 \pm 30$ km s$^{-1}$\ for the nucleus obtained by \citet{zaritsky97}. We wonder if the latter is a misprint. Our \ion{H}{1}\ value of $v_{\rm sys}$ for PGC 029578 is $143 \pm 5$ km s$^{-1}$\ below that of NGC 3145. Thus an encounter between PGC 029578 and NGC 3145 could also have had a significant component perpendicular to the disk of NGC 3145. Figure\,\ref{PGC.rotcurve} displays the position-velocity (P-V) diagram of PGC 029578 along its kinematic major axis. Out to a radius $R$ of $50''$ (i.e., approximately the optical semi-major axis $D_{25}/2$), the position angle of the kinematic major axis = $270\degr$ on the receding side, and the rotation curve is symmetric. Beyond $R$ =$50''$, (i) the kinematic major axis on the receding side appears to align better with the isophotal major axis, which has a PA = $283\degr$, and (ii) the velocity field on the approaching side becomes irregular. Estimates of $i$ are $68\degr$ from the optical axis ratio (see Table\,\ref{basic.optical}) and $61\degr \pm 2\degr$ from the \ion{H}{1}\ axis ratio. At $R$ =$50''$ = 13 kpc, $(v - v_{\rm sys}) \sin i$ = 76 km s$^{-1}$. Taking $i$ = $64\degr \pm 4\degr$ gives $v_{\rm rot} = 85 \pm 3$ km s$^{-1}$. Then for a spherically symmetric mass distribution, the dynamical mass $M_{\rm dyn}$ = $2.2 \times 10^{10}$ $M_{\sun}$. Out to the same radius ($R =50''$), the \ion{H}{1}\ mass of PGC 029578 is $2.9 \times 10^9$ $M_{\sun}$, so $M$(HI)/$M_{\rm dyn}$ = 0.13. From the dynamical masses, the ratio of the mass of PGC 029578 to that of NGC 3145 is 0.044. From the $K_s$ luminosities, the ratio of the stellar mass of NGC 3143 to that of NGC 3145 is 0.073. Thus the Sdm galaxy PGC 029578 has a lower mass than NGC 3143. 
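As a quick check of the PGC 029578 numbers, the standard deprojection and spherical mass estimate can be reproduced from the values quoted above:

```python
import math

v_los = 76.0                              # (v - v_sys) sin i at R = 50'' (km/s)
incl = math.radians(64.0)                 # adopted i = 64 +/- 4 deg
v_rot = v_los / math.sin(incl)            # deprojected rotation speed, ~85 km/s

m_dyn_pgc = 2.33e5 * 13.0 * v_rot ** 2    # R = 50'' = 13 kpc -> ~2.2e10 Msun
gas_frac = 2.9e9 / m_dyn_pgc              # M(HI)/M_dyn within R = 50'', ~0.13
```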
PGC 029578 is also less luminous optically than NGC 3143 as it is 1.4 mag fainter in $B$ and 1.6 mag fainter in $R$ than NGC 3143 (see Table\,\ref{basic.optical}). However, as noted above, PGC 029578 has five times as much \ion{H}{1}\ as NGC 3143. This is an indication that PGC 029578 has not experienced an encounter recently since otherwise its \ion{H}{1}\ content would be lower as a result of gas loss and/or enhanced star formation. It is more likely that NGC 3143 was involved in a recent interaction because its SFR seems to be enhanced, its \ion{H}{1}\ emission is 50\% more extended on its northeastern side, it is somewhat deficient in \ion{H}{1}, and it appears to have a large molecular fraction. The simple analytic model in the Appendix demonstrates that an encounter between NGC 3143 and NGC 3145 could have triggered the observed warping mode in NGC 3145. It illustrates some of the considerations necessary for making a detailed numerical model. The latter is beyond the scope of this paper. The observed anti-symmetric warping motions in NGC 3145 (see Section 4.2.1) have line-of-sight magnitude $\sim 40$ km s$^{-1}$ (or $v_z \sim 60$ km s$^{-1}$\ if the observed motions are completely perpendicular to the disk). In the model, the magnitude $\Delta v_z$ of the warping motions and the maximum displacement $\Delta z$ of the material above the plane of the disk depend on how shallow the attack angle of the companion's trajectory is relative to the disk of NGC 3145. For a particular choice of orbital tilt angle, this analytic model creates warping motions in NGC 3145 of magnitude $\sim 200$ km s$^{-1}$, with NGC 3143 pulling up on one side of the disk of NGC 3145 and down on the opposite side of the disk. These would warp the disk by $\sim 5$ kpc. Given the assumptions in the model, these values of $\Delta v_z$ and $\Delta z$ are uncertain by a factor of a few. A slightly steeper angle of attack with the same flyby velocity would give a more modest warp. 
The comparison in the Appendix between the time elapsed since closest approach and rough estimates of the period of vertical oscillation of the warp suggests that the warp material may have experienced about one vertical oscillation. Thus the extra-planar features produced by this encounter should still be present. \section{Discussion and Conclusions} NGC 3145 is a spiral galaxy with several optical peculiarities. For example, on the southern side of NGC 3145, stellar arms cross, forming ``X''-features, and outline an apparent triangular region. Sandage's peculiar arm forms the eastern edge of the {\it triangle}. As this arm heads southwest from the inner disk of NGC 3145, it crosses first a string of blue clumps, then an inner spiral arm at the eastern apex of the {\it triangle}, and then another stellar arm at the southern apex of the {\it triangle}. Together with its two smaller companions, NGC 3145 forms the NGC 3145 triplet. To study the above features and other optical anomalies, we analyzed VLA \ion{H}{1}\ observations of this group and VLA $\lambda 6$ cm\ radio continuum observations of NGC 3145. Our $\lambda 6$ cm\ radio continuum observations yield the following information about NGC 3145: (1) Lack of prominent radio continuum emission at the eastern and southern apices of the {\it triangle} rules out shock fronts at the arm-crossing ``X''s there. This means that the arms appearing to cross at these locations must be in different planes and that the portion of Sandage's peculiar arm outlining the eastern edge of the {\it triangle} is not in the disk. (2) North of the {\it triangle}, there is extended radio continuum emission along Sandage's peculiar arm. This is indicative of shocked gas. Hence this portion of Sandage's peculiar arm, which includes where it crosses the string of blue clumps, appears to be in the disk. The suggestion is that Sandage's peculiar arm emerges from the disk before it reaches the {\it triangle}. 
Our \ion{H}{1}\ observations find the following kinematic anomalies in NGC 3145. (1) In large areas of NGC 3145, the line-profiles are skewed. In the middle-to-outer part of the gas disk, line-profiles near the major axis consist of a large amplitude peak plus a broader, skewed wing. Relative to the disk, the gas in the skewed wing has $z$-motions away from us on the approaching side of the galaxy and $z$-motions towards us on the receding side. The difference $v_2 - v_1$ between the central velocity of the skewed wing and that of the large amplitude peak has a mean value $37 \pm 3$ km s$^{-1}$\ on the approaching side and $-47 \pm 7$ km s$^{-1}$\ on the receding side. This kinematic anti-symmetry implies that there has been a perturbation with a sizeable component perpendicular to the disk over large spatial scales, e.g., the disk is undergoing a warping as a result of a close passage by one of its companions. We detect these warping motions out to a radius of at least $105''$ = 28 kpc. (2) There are two features whose velocities imply that they are out-of-plane tidal arms. One is the apparent branch of the main spiral arm on the northeastern side of the galaxy. Where the {\it branch} and the main spiral arm appear to intersect in projection, the {\it branch} has a line-of-sight velocity $\sim 150$ km s$^{-1}$\ greater than that of the main spiral arm. We conclude that the {\it branch} is an out-of-plane tidal arm and may have streaming motions along it. The other is Sandage's peculiar arm as line-profiles of the western-antenna extension of Sandage's peculiar arm exhibit two clearly distinct peaks of comparable amplitude separated by 56 km s$^{-1}$\ in line-of-sight velocity. We attribute one of these to \ion{H}{1}\ in the disk and the other to \ion{H}{1}\ in Sandage's peculiar arm, which thus appears to be another tidal arm. 
These arms appear relatively short for tidal arms; more sensitive \ion{H}{1}\ observations are necessary to see if these tidal arms extend farther. (3) The distribution of \ion{H}{1}\ emission from NGC 3145 is not axisymmetric: within the main optical disk there are massive \ion{H}{1}\ concentrations NW, NE, and SW of the nucleus and a trough on the SE side. This peculiarity is further evidence of an encounter. Our observations solve the puzzle of Sandage's peculiar arm, i.e., it is a tidal arm tilted with respect to the plane of the disk of NGC 3145, and this tidal arm emerges from the disk before it reaches the apparent arm crossings at the {\it triangle}. Furthermore, our \ion{H}{1}\ observations revealed anti-symmetric warping motions in NGC 3145 on large spatial scales out to 28 kpc. The two relatively-short, extra-planar tidal arms are evidence that NGC 3145 has recently experienced a combination of azimuthal and perpendicular perturbations. The warping motions are evidence that it has recently experienced a perpendicular perturbation affecting the middle-to-outer part of the gas disk. The Sdm galaxy PGC 029578 shows no evidence of having undergone an encounter recently, whereas NGC 3143 has an enhanced SFR, somewhat of an \ion{H}{1}\ deficiency compared to its SFR, \ion{H}{1}\ emission 50\% more extended on its northeastern side than on the opposite side, and an apparently large molecular fraction for a small galaxy. Thus NGC 3143 is the more likely of the two companions to have interacted with NGC 3145. Our simple analytic model demonstrates that an encounter between NGC 3143 and NGC 3145 is a plausible explanation for the observed warping motions in NGC 3145. \acknowledgments We thank the referee for making detailed comments and suggestions that significantly improved this paper. The \ion{H}{1}\ and radio continuum images used here are from our observations in Programs AK 368 and AK 327 at the Very Large Array (VLA). 
The Hubble Space Telescope image used in this research is based on observations made with the NASA/ESA Hubble Space Telescope and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA), and the Canadian Astronomy Data Centre (CADC/NRC/CSA). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. We also utilized data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the National Science Foundation. We obtained helpful information from the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We thank Paul Eskridge for making observations for us at the Michigan-Dartmouth-M.I.T. Observatory. We thank Philip Appleton for making observations for us at the Fick Observatory of Iowa State University.
\section{Introduction} \noindent Upon decreasing temperature supercooled liquids display a dramatic increase in their shear viscosity. This effect can be directly related to an increase of the relaxation time of shear stress fluctuations. In his seminal work Goldstein \cite{Goldstein} pointed out the importance of the potential energy landscape over the $N$-particle configuration space for the slow relaxation dynamics of supercooled liquids. At a sufficiently low temperature the system is assumed to be close to a local minimum of this landscape (called an inherent structure).\\ This picture naturally implies the presence of two time scales in supercooled liquids: a fast relaxation process associated with vibrational motion of the system around an inherent structure and a slow relaxation process corresponding to thermally activated hopping to another minimum in the potential energy landscape accompanied by a rearrangement of a relatively small number of particles in the system. In a non-equilibrium situation applied strain might induce the disappearance of a local minimum facilitating such a rearrangement and leading to stress relaxation. While this fact has motivated investigations on the influence of strain on the local potential minima \cite{Lacks,AQS,majid,karmakar}, several studies have implicitly made use of residual stresses in inherent structures present even at equilibrium, investigating the magnitude \cite{magn} and the relaxation dynamics \cite{ViscNetwork,stressrelax} of the inherent stresses as the system samples different minima. As pointed out by Abraham and Harrowell, the notion of a shear stress of an inherent structure is a priori far from obvious \cite{Harro}: if a system can be brought arbitrarily close to a local energy minimum (say by an appropriate numerical energy minimization in a simulation), one would naively expect that all global shear stresses in the system are zeroed by this minimization procedure. 
However, it has been argued that boundary conditions impose a constraint on the energy minimization, i.e.\ the shape and size of the containment of a supercooled liquid prevent all stresses from being zeroed. In a computer simulation this constraint amounts to a particular choice for the simulation box. It is not clear in which sense this IS shear stress can be considered as a predecessor of the stresses supported by a deformed supercooled liquid/glass or to which extent it determines the stress relaxation process in a non-equilibrium situation at all. Therefore, a better understanding of the IS stress is highly desirable. A first discussion of it has also been given in [\onlinecite{Harro}]. Among other things, the authors found that the magnitude of the IS stress is, surprisingly, essentially independent of temperature, scales with a certain power of the system density and with the inverse system size. \\ The aim of this note is to put the inherent structure stress on a more formal footing, which will help us to further clarify its origin and to discuss in what sense it influences computations of viscoelastic properties. To this end, we will present a consistent picture for the proper calculation of low and high frequency shear moduli of glass-forming liquids. This work is organized as follows: In section II we begin our discussion by applying the Irving-Kirkwood formula to IS configurations and tracing back the remaining stresses to the choice of boundary conditions in a formal sense. In section III, we proceed by identifying external mechanisms which bias the computation of shear moduli upon decreasing temperature in a supercooled liquid and provide calculations of these moduli for glass forming liquids. In section IV we summarize our results and conclude with a discussion of the physical meaning of the IS stresses. 
\section{The Irving-Kirkwood formula for Inherent Structure configurations} \noindent According to Irving and Kirkwood, the instantaneous stress tensor in a configuration of particles with masses $m_{i}$ and positions $\bm{r}_{i}$ restricted to a volume $V$ is given by \cite{irving} \begin{equation} \label{IK1} \sigma_{\alpha \beta}=\frac{1}{V} \sum_{i=1}^{N} m_{i} v_{i,\alpha} v_{i,\beta} - \frac{1}{2V} \sum_{i=1}^{N} \sum_{j=1}^{N} r_{i j,\alpha} F_{i j,\beta} \ , \end{equation} where $\bm{r}_{ij}=\bm{r}_{i}-\bm{r}_{j}$, $\bm{v}_{i}=\dot{\bm{r}}_{i}$ and $\bm{F}_{ij}$ is the pair force exerted on particle $i$ by particle $j$. The Greek indices refer to the Cartesian components of the corresponding vector/tensor. In an IS configuration (i.e.\ in a mechanically stable packing) the particles are at rest and the inherent structure stress $\sigma^{IS}$ is solely determined by the second part of ($\ref{IK1}$). We assume periodic boundary conditions in a cubic box with side length $L$, i.e.\ we redefine $r_{ij,\alpha}=r_{i,\alpha}-r_{j,\alpha}+n_{ij,\alpha} L$. Here, $\bm{n}_{ij}$ denotes the vector which minimizes the distance between particles $i$ and $j$, whose components are $\pm 1$ or $0$. Inserting the periodic boundary conditions in the configurational part of ($\ref{IK1}$) leads to \begin{multline} \label{IK2} \sigma_{\alpha \beta}^{IS}= - \frac{1}{2V} \left( \sum_{i=1}^{N} r^{IS}_{i,\alpha} \sum_{j=1}^{N} F_{i j,\beta} \right. \\ \left. - \sum_{j=1}^{N} r^{IS}_{j,\alpha} \sum_{i=1}^{N} F_{i j,\beta} + L \sum_{i=1}^{N} \sum_{j=1}^{N} n_{ij,\alpha} F_{i j,\beta} \right) \ . \end{multline} By using Newton's third law $\bm{F}_{ij}=-\bm{F}_{ji}$ and renaming of indices, this can be rewritten as follows. 
\begin{equation} \label{IK3} \sigma_{\alpha \beta}^{IS}= - \frac{1}{V} \sum_{i=1}^{N} r^{IS}_{i,\alpha} F_{i,\beta} - \frac{L}{2V} \sum_{i=1}^{N} \sum_{j=1}^{N} n_{ij,\alpha} F_{i j,\beta} \ , \end{equation} where $\bm{F}_{i}=\sum_{j=1}^{N} \bm{F}_{i j}$ is the net force acting on particle $i$. This quantity vanishes by definition for an IS. In a simulation it is arbitrarily small in the sense that the magnitude of the net force on a particle is bounded by the smallest force tolerance at which the minimization algorithm used (e.g.\ a conjugate-gradient solver) still converges. Therefore, the IS stress is approximately given by \begin{equation} \label{IK4} \sigma_{\alpha \beta}^{IS} \approx - \frac{L}{2V} \sum_{i=1}^{N} \sum_{j=1}^{N} n_{ij,\alpha} F_{i j,\beta} \ . \end{equation} \begin{figure} \includegraphics[scale=0.6]{Fig1Stresscompare_b.pdf}% \caption{\label{fig1} Variance of inherent structure stress in 2D (top, soft sphere system) and 3D (bottom, binary LJ system), both at temperature $T=0.5$, as a function of system size. Black circles indicate the full Irving-Kirkwood expression and red triangles the approximate boundary formula ($\ref{IK4}$). The insets show the same data on a $\log$-$\log$ scale with a linear fit which has a slope of $-1$. } \end{figure} As we will discuss subsequently, expression ($\ref{IK4}$) has a straightforward physical interpretation. Note that the $\alpha$-component of the vector $\bm{n}_{ij}$ is only nonzero if a particle close to the $\alpha=L/2$ boundary of the simulation box interacts with the periodic image of a particle residing at the opposite ($\alpha=-L/2$) boundary or vice versa. For instance, if we choose $\alpha=x$, $\beta=y$ in a Cartesian coordinate system, expression ($\ref{IK4}$) is nothing else than the total force component in the $y$-direction exerted by the right boundary layer of the simulation box on its left counterpart. 
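The rewriting leading from ($\ref{IK1}$) to ($\ref{IK3}$) is exact and can be checked numerically for any pairwise-antisymmetric force law. The sketch below (an illustration with an arbitrary repulsive force, not the simulation code of this work) builds a random two-dimensional periodic configuration and verifies that the configurational Irving-Kirkwood sum equals the net-force term plus the boundary term:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 32, 10.0
V = L**2
r = rng.uniform(0.0, L, size=(N, 2))       # particle positions

# minimum-image pair separations r_ij = r_i - r_j + n_ij * L
d = r[:, None, :] - r[None, :, :]
n_img = -np.round(d / L)                   # image vectors n_ij (components 0, +-1)
d_mi = d + n_img * L

dist = np.linalg.norm(d_mi, axis=-1)
np.fill_diagonal(dist, np.inf)             # exclude self-interaction

# arbitrary repulsive pair force F_ij = r_ij / |r_ij|^4, antisymmetric in i <-> j
F = d_mi / dist[..., None]**4

# full configurational Irving-Kirkwood stress, second term of Eq. (IK1)
sigma_full = -np.einsum('ija,ijb->ab', d_mi, F) / (2 * V)

# decomposition of Eq. (IK3): net-force term + boundary term
F_net = F.sum(axis=1)                      # F_i = sum_j F_ij
sigma_net = -np.einsum('ia,ib->ab', r, F_net) / V
sigma_bnd = -L * np.einsum('ija,ijb->ab', n_img, F) / (2 * V)

assert np.allclose(sigma_full, sigma_net + sigma_bnd)
```

The net-force term is retained here because the random configuration is not force-free; for an actual IS it is negligible and only the boundary term ($\ref{IK4}$) survives.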
This shows analytically in which sense the choice of boundary conditions (i.e.\ the shape of the simulation box) determines the IS stress. We test approximation ($\ref{IK4}$) numerically by comparing the mean squared IS stress calculated from the full Irving-Kirkwood expression to the one calculated from approximation ($\ref{IK4}$), performing molecular dynamics simulations of glass forming systems for different system sizes, temperatures and dimensions (see appendix A for details on the simulations). The total inherent structure stress is very well approximated by equation ($\ref{IK4}$) (see Fig.~1). As can be read off from Fig.~1, the fluctuations of $\sigma_{\alpha \beta}^{IS}$ show a $1/V$ system size dependence, which has already been described in [\onlinecite{Harro}]. A heuristic explanation is given in appendix C based on equation ($\ref{IK4}$). The investigation of the IS in amorphous materials dates back to ideas of Stillinger and Weber \cite{stillinger} and has proven to have various applications, including the investigation of rate processes in low-temperature amorphous substances, the formulation of equations of state for supercooled liquids, macroscopic transport properties, etc.\ (see e.g.\ [\onlinecite{sciortinoI}],[\onlinecite{sciortinoII}] and [\onlinecite{heuerrev}]). While equation ($\ref{IK4}$) seems to identify the IS stress as a mere (negligible) boundary effect, we will show that it is felt throughout the system and has a major influence on macroscopic quantities. Our discussion will focus on the influence of IS stresses on the numerical computation of elastic constants, more specifically on the shear moduli of a glass forming material. \section{Elastic constants} \noindent The shear modulus, $G$, describes the response of a material to shear stress and is defined as the ratio of shear stress to shear strain. 
If a material is subjected to oscillatory shear deformation, the shear modulus is a function of the excitation frequency $\omega$, i.e.\ $G=G(\omega)$. Its high- and low-frequency limits are characteristic for a material's mechanical behavior: the infinite frequency shear modulus, $G_{\infty}$, describes the response to an instantaneous, affine deformation. It is not directly measurable experimentally, since a deformation at truly infinite frequency cannot be applied in practice. $G_{\infty}$ should not be confused with the experimentally reported high frequency modulus, which always refers to the shear modulus at the highest obtainable frequencies. \cite{dyre} The temperature dependence of $G_{\infty}$ is relatively weak and its value mainly depends on the microscopic details (atomistic potential) of the system. The zero frequency limit, $G_{0}$, describes the ability to relax stresses on long time scales. Since a liquid in equilibrium does not support any stresses, $G_{0}$ is zero for a liquid in equilibrium but finite for a solid. Therefore, the zero frequency modulus can be regarded as an indicator for solidity. It depends strongly on the thermodynamic state and is therefore sensitive to temperature changes as a material approaches its melting point. The situation is more complicated for glass-forming materials as they do not show a sharp solidification transition. In the following, we will summarize and extend previous results for $G_{\infty}$ and $G_{0}$ of a supercooled liquid and discuss their behavior over the full temperature range. Furthermore, we will investigate the contribution stemming from the IS and discuss in what sense it contributes to properties of the low temperature glass. Throughout this section the simulation results are reported for the two-dimensional soft sphere system with $N=512$ particles (see appendix). 
\subsection{General remarks} \noindent The infinite frequency shear modulus is analytically given by the so-called Born-Green expression \cite{born} which is well defined and yields non-vanishing results in both the solid and the fluid phase. This expression is given by \begin{multline} \label{BG1} G_{\infty}=\rho k_{B} T+ \\ \left \langle \frac{1}{2V}\sum_{i,j}r_{ij,x}^{2} r_{ij,y}^{2} \left( \frac{\phi''(r_{ij})}{r_{ij}^{2}}-\frac{\phi'(r_{ij})}{r_{ij}^{3}}\right) \right\rangle -P \ , \end{multline} for an \textit{isotropic} system with the hydrostatic pressure $P$ and a pair potential $\phi(r)$. Further utilizing isotropy and performing an orientational average this can be simplified to \cite{zwanzig, hess} \begin{equation} \label{BG2} G_{\infty}=\rho k_{B} T + \frac{1}{8V}\left\langle\frac{1}{2}\sum_{i,j} \frac{1}{r_{ij}} \frac{\partial}{\partial r_{ij}} \left[r_{ij}^{3}\phi'(r_{ij}) \right] \right\rangle \ , \end{equation} for a two dimensional system. For a soft sphere system with a purely repulsive pair potential of the type $r^{-n}$ equation ($\ref{BG2}$) leads to \cite{ilyin} \begin{equation} \label{BG3} G_{\infty}=\rho k_{B} T + \frac{n-2}{4}(P-\rho k_{B} T) \ . \end{equation} The low frequency shear modulus is given by equation ($\ref{BG1}$) corrected by the so-called fluctuation term \cite{Squire,MM,hess}, i.e. \begin{equation} \label{BG4} G_{0}=G_{\infty} - \frac{V}{k_{B} T} \left( \left\langle \sigma_{xy}^2 \right\rangle - \left\langle \sigma_{xy} \right\rangle^{2} \right) \ . \end{equation} In the following we will provide an extensive discussion of these quantities for a glass forming liquid in different temperature regimes. \subsection{High temperature liquid} \noindent For temperatures well above the melting point of the system, the low frequency shear modulus vanishes, i.e.\ the system does not sustain non-zero stresses on a long time scale. 
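As a consistency check on these expressions, the reduction of ($\ref{BG2}$) to ($\ref{BG3}$) for a soft-sphere potential can be verified configuration by configuration, since both sides are linear in the same pair terms. The following sketch (our own illustration with arbitrary random positions and $n=12$, not the production code of this work) evaluates the configurational part of ($\ref{BG2}$) by numerical differentiation and compares it with $(n-2)/4$ times the excess virial pressure:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, n = 64, 8.0, 12                 # particles, 2D box edge, softness exponent
V = L**2

r = rng.uniform(0.0, L, size=(N, 2))
# all pair distances (open boundaries suffice for this algebraic check)
d = r[:, None, :] - r[None, :, :]
dist = np.linalg.norm(d, axis=-1)
rij = dist[np.triu_indices(N, k=1)]   # one entry per pair

phi_prime = -n * rij**-(n + 1)        # phi'(r) for phi(r) = r^-n

# 2D excess virial pressure: P - rho*kT = -(1/2V) sum_pairs r * phi'(r)
P_ex = -(rij * phi_prime).sum() / (2 * V)

# configurational part of Eq. (BG2): (1/8V) sum_pairs (1/r) d/dr [ r^3 phi'(r) ]
g = lambda x: x**3 * (-n * x**-(n + 1))
h = 1e-6
deriv = (g(rij + h) - g(rij - h)) / (2 * h)   # numerical derivative of r^3 phi'
bg2 = (deriv / rij).sum() / (8 * V)

# Eq. (BG3): G_inf - rho*kT = (n-2)/4 * (P - rho*kT)
bg3 = (n - 2) / 4 * P_ex
assert np.allclose(bg2, bg3, rtol=1e-4)
```

Both sides reduce to $n(n-2)\sum_{\rm pairs}\phi(r_{ij})/(8V)$, so the identity holds for every single configuration, not only on average.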
At these high temperatures the fluctuation term cancels the Born-Green expression and the high frequency shear modulus can be calculated from the shear stress fluctuations: \begin{equation} \label{BG5} G^{L}_{\infty}= \frac{V}{k_{B} T} \left( \left\langle \sigma_{xy}^2 \right\rangle - \left\langle \sigma_{xy} \right\rangle^{2} \right) \ . \end{equation} Note that the ensemble average of the shear stress $\left\langle \sigma_{xy} \right\rangle$ in equilibrium always vanishes. \subsection{Supercooled phase} \noindent Upon further cooling the system below its melting point, the supercooled regime is entered. The fluid is no longer in its true thermodynamic equilibrium (the crystalline phase) but is said to be in a metastable equilibrium in the sense that time translational invariance holds and two-time correlation functions (such as the stress-stress autocorrelation function, $C(t)=\left \langle \sigma_{xy}(t)\sigma_{xy}(0)\right\rangle$) decay to zero within experimentally available time windows (see fig.\ 2). As the particle motion becomes increasingly sluggish, relaxation becomes slower and happens on two time scales: vibrational degrees of freedom lead to a fast redistribution of stresses (i.e.\ an initial decay of $C(t)$ to the lowest value possible for the particular configuration in place). This process is followed by a slow relaxation associated with particle rearrangements on a mesoscopic scale. As the system is still (quasi-)ergodic in the sense that it finds a way to redistribute stresses such that $C(t)$ fully decays to zero, the formulas for the elastic moduli (equations ($\ref{BG3}$) and ($\ref{BG4}$)) are still valid. However, since the relaxation of stress correlations takes an increasingly long time, the liquid assumes a viscoelastic behavior and the low frequency shear modulus departs from its zero value. According to equation ($\ref{BG4}$), this is associated with a decrease in the shear stress fluctuations. 
It also means that equation ($\ref{BG5}$) is no longer appropriate to compute the high frequency modulus; equation ($\ref{BG1}$) has to be used instead. The fact that the common expression ($\ref{BG5}$) loses its validity in the solid phase has been pointed out by several authors. \cite{Harro, WiliamsI, ilyin} \subsection{Glassy phase} \noindent If the system is further cooled below the glass transition temperature $T_{g}$, a full relaxation of two-time correlation functions cannot be observed anymore within the experimentally available time window. The calculation of the shear modulus of a material in this glassy state was extensively discussed by Williams. \cite{WiliamsI, WilliamsII} We briefly recap the physical picture considered there: The phase space of the system is divided into $N_{D}$ subsystems. Every subsystem is in equilibrium, but between the domains the system is out of equilibrium. The probability density of the domain $a$ is given by $f_{a}(\Gamma)=s_{a}(\Gamma)\frac{\exp(-\beta H(\Gamma))}{Z_{a}}$ with $Z_{a}=\int d\Gamma s_{a}(\Gamma) \exp(-\beta H(\Gamma))$, where $\Gamma$ is the phase space coordinate, $H$ the Hamiltonian of the system and $s_{a}$ a switching function which is equal to unity if $\Gamma$ lies in the domain $a$ and zero otherwise. The probability distribution of the entire system is given by a superposition of the single-domain distributions weighted with nonequilibrium weights, i.e.\ $f(\Gamma)=\sum_{a=1}^{N_{D}} w_{a}f_{a}(\Gamma)$ with $\sum_{a=1}^{N_{D}} w_{a}=1$. Equation ($\ref{BG4}$) holds for the (equilibrium) subdomains only. 
The authors further derived that the zero frequency shear modulus of the system is given by \begin{equation} \label{BG6} G_{0}=G_{\infty,f} - \frac{V}{k_{B} T} \sum_{a=1}^{N_{D}} w_{a} \left( \left\langle \sigma_{xy}^2 \right\rangle_{a} - \left\langle \sigma_{xy} \right\rangle_{a}^{2} \right) \ , \end{equation} where the subscript $f$ denotes the rule to average the Born-Green expression over the distribution $f$ and the subscript $a$ means an equilibrium average over the domain $a$. This makes an accurate and meaningful calculation of the low frequency modulus for a glass sample a very subtle task as it would require considering the individual phase space domains. Under the assumptions that every simulation is sampling its own, single domain and that the set of prepared samples representatively reflects the distribution of the weights $w_{a}$, one could estimate ($\ref{BG6}$) by simple time averages. This approach was seemingly taken in [\onlinecite{Harro}] and in [\onlinecite{ilyin}]. However, as already pointed out in [\onlinecite{WiliamsI}], this method is very sensitive to the time window over which averages are taken. Therefore, we mention an alternative approach, also presented in [\onlinecite{WiliamsI}]: the low frequency modulus is given by the Born-Green expression minus the stress-stress autocorrelation function $C(t)$ at $t=0$ (equation ($\ref{BG4}$)), where $C(t)$ drops to a non-zero plateau value for a broad class of glass forming liquids. This means that the stress fluctuations relative to the frozen-in stresses in this domain are considered. The autocorrelation is now calculated up to a cutoff time $t_{c}$ which is much larger than the relaxation time of the fast processes in the sample. Finally, the fluctuation term in ($\ref{BG4}$) is corrected by $C(t_c)$: \begin{equation} \label{BG7} G_{0}=G_{\infty} - \frac{V}{k_{B} T} \left( \left\langle \sigma_{xy}^2 \right\rangle - \left\langle \sigma_{xy} \right\rangle^{2} - C(t_c) \right) \ . 
\end{equation} It should be noted that this procedure has the advantage that it is not sensitive to the chosen cutoff, since $C(t)$ is almost constant over a broad time interval (meaning that ageing of the glass is negligible on the time scale of interest for the investigated model system). The physical picture behind this correction is the following: each individual glass sample contains frozen-in stresses, which are induced by the initial conditions and the preparation procedure of the sample. For instance, a fast cooling protocol pushes a liquid out of equilibrium very rapidly. This does not leave enough time for stress relaxation processes to occur, leading to significant stresses in the glass sample which might not be present had a slower cooling rate been used. While these residual stresses are sometimes deliberately introduced during the manufacturing process \cite{residual} and influence mechanical/rheological measurements on individual glassy materials, they are not a characteristic property of the material but a remainder of its production process. Correcting for the frozen-in stresses by subtracting the plateau value of the stress-stress correlation function removes this contribution from the low frequency modulus, such that $G_{0}$ remains a quantity which is characteristic of the material irrespective of its history. As we will see in the next section, the IS stress may also bias the computation of the elastic moduli. Since we want to investigate this effect in more detail, we focus on systems where corrections for the frozen-in stresses in the form of ($\ref{BG7}$) play a minor role only. This is the case for the considered system. 
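The effect of the plateau correction in ($\ref{BG7}$) can be illustrated with synthetic data. In the toy model below (our own construction: each ``sample'' carries a frozen-in stress offset plus uncorrelated fast noise, a crude stand-in for a genuine two-step relaxation), the uncorrected fluctuation term absorbs the frozen-in variance, while subtracting the plateau value $C(t_c)$ recovers the fast-fluctuation variance alone:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_steps = 200, 4000
t_c = n_steps // 2                     # cutoff lag, well past the fast decay

s_frozen = rng.normal(0.0, 0.5, n_samples)          # frozen-in stress per sample
noise = rng.normal(0.0, 0.2, (n_samples, n_steps))  # fast "vibrational" part
sigma = s_frozen[:, None] + noise                   # sigma_xy(t), one row per sample

# uncorrected fluctuation term <sigma^2> - <sigma>^2 over the whole ensemble
fluct = sigma.var()                    # ~ var(frozen) + var(noise) ~ 0.29

# plateau of C(t): only the frozen-in part stays correlated at lag t_c
C_tc = (sigma[:, :-t_c] * sigma[:, t_c:]).mean()    # ~ <s_frozen^2> ~ 0.25

corrected = fluct - C_tc               # ~ var(noise) = 0.04
print(fluct, C_tc, corrected)
```

In a real trajectory the cutoff $t_c$ must lie beyond the fast relaxation yet remain short enough that ageing is negligible, as discussed above.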
While the self-intermediate scattering function no longer decays to zero when the liquid passes the glass transition temperature \cite{model2d}, the stress-stress autocorrelation function $C(t)$ decays to values which are negligible for a potential correction according to equation ($\ref{BG7}$) (see fig.\ 2). The physical reason for this behavior might be that stress relaxation happens only in rare, local events, but the global stress-stress correlation decays since very few rearrangements at the boundaries of the simulation box lead to a large change in the IS stress contribution, as discussed in section I. \begin{figure} \includegraphics[scale=0.55]{coft.pdf}% \caption{\label{fig2} Stress-stress autocorrelation function $C(t)=\left \langle \sigma_{xy}(t)\sigma_{xy}(0)\right\rangle$ for high ($T=0.6$, black) and low temperature ($T=0.3$, red). While at $T=0.6$ the system starts to develop a two-step relaxation process, a pronounced plateau can be observed at $T=0.3$. Note that for the system under consideration $C(t)$ drops to values close to zero even at temperatures below the nominal glass transition temperature $T \approx 0.35$. See text for an explanation. Numerical data have been fitted by a stretched exponential $a \exp(-bt^{c})+d$ for long time scales.} \end{figure} \subsection{Low temperature limit} \noindent The low temperature limit of the amorphous solid is of wide interest in the research community and subject to extensive scientific effort (for an overview see the corresponding sections in [\onlinecite{dynamic}]). In the following, we will extend our discussion of the calculations of the elastic properties to the low temperature regime. We will use the present considerations about the shear moduli as a tool to identify different mechanisms contributing to the solidification process of a glassy material and to clarify the role played by the IS stress in it. 
For a subdomain $a$ which is assumed to be in equilibrium, we follow the calculation of Lutsko \cite{Lutsko} and compute the shear stress fluctuation in the canonical ensemble in the low temperature limit. \begin{equation} \label{BG8} \left\langle \sigma_{xy}^2 \right\rangle_{a}=\frac{1}{Z_{a}} \int d\bm{r} s_{a} \sigma_{xy}^2 \exp \left( -\frac{\Phi}{k_{B}T}\right) \ , \end{equation} where $Z_{a}= \int d\bm{r} s_{a} \exp \left( -\frac{\Phi}{k_{B}T}\right)$ and $\Phi$ is the potential energy of the system. We expand both potential energy and stresses around its inherent structure, i.e., \begin{multline} \label{BG9} \Phi=\left. \Phi \right|_{\bm{r}=\bm{r_{i}}^{IS}}+\left. \frac{\partial\Phi}{\partial r_{i,\alpha}}\right|_{\bm{r}=\bm{r_{i}}^{IS}}(r_{i,\alpha}-r_{i,\alpha}^{IS})+ \\ \frac{1}{2} \left.\frac{\partial^{2}\Phi}{\partial r_{i,\alpha}\partial r_{j,\beta}}\right|_{\bm{r}=\bm{r_{i}}^{IS}}(r_{i,\alpha}-r_{i,\alpha}^{IS})(r_{i,\beta}-r_{i,\beta}^{IS})+... \ , \end{multline} and \begin{multline} \label{BG10} \sigma_{xy}=\sigma_{xy}^{IS}+\left. \frac{\partial\sigma_{xy}}{\partial r_{i,\alpha}}\right|_{\bm{r}=\bm{r_{i}}^{IS}}(r_{i,\alpha}-r_{i,\alpha}^{IS})+ \\ \frac{1}{2} \left.\frac{\partial^{2}\sigma_{xy}}{\partial r_{i,\alpha}\partial r_{j,\beta}}\right|_{\bm{r}=\bm{r_{i}}^{IS}}(r_{i,\alpha}-r_{i,\alpha}^{IS})(r_{i,\beta}-r_{i,\beta}^{IS})+... \ , \end{multline} where we have used the summation convention for indices. In the following, we will use $\Phi_{n}^{IS}$ and $\sigma_{xy,n}^{IS}$ as a short notation, where the superscript means that the expression has to be evaluated at the inherent structure configuration and the number in the subscript refers to the $n$-th derivative. Transforming to re-scaled coordinates $r_{i,\alpha}-r_{i,\alpha}^{IS}=(k_{B}T)^{1/2}r'_{i,\alpha}$ and inserting ($\ref{BG9}$) and ($\ref{BG10}$) into equation ($\ref{BG8}$) yields \begin{multline} \label{BG11} \int s_{a} \left( \sigma_{xy}^{IS}+ \sqrt{k_{B}T}\sigma_{xy,1}^{IS} r'+ ... 
\right)^{2} \\ \exp \left( -\frac{1}{2}\Phi_{2}^{IS}r'r' \right) \exp \left(-\frac{\sqrt{k_{B}T}}{6}\Phi_{3}^{IS}r'r'r' -... \right) d\bm{r'} / \\ \int s_{a} \exp \left( -\frac{1}{2}\Phi_{2}^{IS}r'r' \right) \exp \left(-\frac{\sqrt{k_{B}T}}{6}\Phi_{3}^{IS}r'r'r' -... \right) d\bm{r'} \ . \end{multline} Expanding the second exponential factor in powers of $k_{B}T$ (in both numerator and denominator), all integrals are Gaussian integrals, which are solvable analytically. Note that odd moments of these Gaussian integrals vanish. Finally, expanding the quotient in equation ($\ref{BG11}$) in powers of $k_{B}T$ leads directly to the following result for the stress-stress fluctuation in the low temperature limit: \begin{multline} \label{BG12} \frac{V}{k_{B}T} \left\langle \sigma_{xy}^2 \right\rangle_{f} \approx \\ V \sum_{a} w_{a} \left(\frac{(\sigma_{xy}^{IS,a})^2}{k_{B}T} + A + B k_{B}T + \mathcal O((k_{B}T)^{2}) \right) \ , \end{multline} where $\sigma_{xy}^{IS,a}$ is the inherent structure contribution from particles in subdomain $a$ and the subscript $f$ has the same meaning as in equation ($\ref{BG6}$). At this point we have to make further assumptions in order to estimate the low temperature limit of the fluctuation term. Again we are left with the problem of identifying the different subdomains in order to estimate the weights $w_{a}$ and to sum over $\left\langle \sigma_{xy}^2 \right\rangle_{a}$ accordingly. We will make the previously mentioned assumption that the prepared samples reflect the distribution of these weights and that each simulation predominantly samples its own single domain, such that the total inherent structure stress of one simulation, $\sigma_{xy}^{IS}$, approximates $\sigma_{xy}^{IS,a}$. Therefore, the low temperature limit of the fluctuation term is estimated by ensemble averaging over statistically independent starting configurations. 
Hence, the quantity $A$, the temperature-independent term of the perturbation expansion in equation ($\ref{BG12}$), is given by \begin{multline} \label{BG13} A = \left( \frac{\partial \sigma^{IS}_{xy}}{\partial r_{i\alpha}} \frac{\partial \sigma_{xy}^{IS}}{\partial r_{j\beta}}\right) \langle\langle r'_{i\alpha}r'_{j\beta} \rangle\rangle \\ -\frac{1}{3}\sigma^{IS}_{xy}\left( \frac{\partial^{3}\Phi}{\partial r_{i\alpha}\partial r_{j\beta}\partial r_{k\gamma}} \frac{\partial \sigma_{xy}^{IS}}{\partial r_{l\delta}}\right) \langle\langle r'_{i\alpha}r'_{j\beta}r'_{k\gamma}r'_{l\delta} \rangle\rangle \\ + \sigma^{IS}_{xy}\left( \frac{\partial^{2}\sigma^{IS}_{xy}}{\partial r_{i\alpha}\partial r_{j\beta}} \right) \langle\langle r'_{i\alpha}r'_{j\beta} \rangle\rangle \ . \end{multline} The contributions to the linear term $B$ are discussed later. We have introduced the Gauss bracket of a function $g$, defined as $\langle \langle g(\bm{r}') \rangle \rangle= \frac{1}{c} \int d\bm{r}' g(\bm{r}') \exp \left( -\frac{1}{2}\Phi_{2}^{IS}\bm{r}'\bm{r}' \right)$, where $c$ is a normalization constant such that $\langle \langle 1 \rangle \rangle=1$ holds. Note that \begin{equation} \label{BG14} \langle \langle r_{i\alpha}r_{j\beta} \rangle \rangle= \left( (\Phi_{2}^{IS})_{i\alpha,j\beta} \right)^{-1} \, \end{equation} where the right hand side is nothing but the inverse of the Hessian matrix. The Hessian matrix has zero eigenvalues, because a uniform translation of all particles leaves the potential energy unchanged, and is therefore not invertible. A commonly used solution to this issue is to hold one particle fixed (i.e.\ to exclude it from the sums in all calculations), which has no effect on the free energy.
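Both the fixed-particle inversion behind equation ($\ref{BG14}$) and the Wick reduction of higher moments used below can be illustrated with a small self-contained example; the periodic harmonic chain and all numbers here are purely illustrative:

```python
import numpy as np

# Toy example (not this note's system): the Hessian of a periodic harmonic
# chain has a zero eigenvalue from uniform translation.  Fixing particle 0
# (deleting its row and column) makes it invertible; Eq. (BG14) then gives
# the Gaussian covariance, and Wick's theorem yields the fourth moments,
# which we verify by Monte Carlo sampling.
N = 6
H = 2.0 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)
assert abs(np.linalg.eigvalsh(H)[0]) < 1e-12      # zero mode is present

Hr = H[1:, 1:]                                    # particle 0 held fixed
C = np.linalg.inv(Hr)                             # <<r_i r_j>> = (Phi_2)^(-1)

rng = np.random.default_rng(1)
x = rng.multivariate_normal(np.zeros(N - 1), C, size=400_000)

i, j, k, l = 0, 1, 2, 3
wick = C[i, j] * C[k, l] + C[i, k] * C[j, l] + C[i, l] * C[j, k]
mc = np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l])
print(mc, wick)   # agree to Monte Carlo accuracy
```

The three-pairing sum `wick` is exactly the right-hand side of the fourth-moment reduction stated below.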
\cite{hoover} All higher moments, such as $\langle \langle r_{i\alpha}r_{j\beta}r_{k\gamma}r_{l\delta} \rangle \rangle$, can be traced back to ($\ref{BG14}$) using Wick's theorem, e.g.: \begin{multline} \label{BG114b} \langle\langle r_{i\alpha}r_{j\beta}r_{k\gamma}r_{l\delta} \rangle\rangle= \langle \langle r_{i\alpha}r_{j\beta} \rangle \rangle \langle \langle r_{k\gamma}r_{l\delta} \rangle \rangle + \\ \langle \langle r_{i\alpha}r_{k\gamma} \rangle \rangle \langle \langle r_{j\beta}r_{l\delta} \rangle \rangle + \langle \langle r_{i\alpha}r_{l\delta} \rangle \rangle \langle \langle r_{j\beta}r_{k\gamma} \rangle \rangle \ . \end{multline} We find that the first term in equation ($\ref{BG12}$) is exactly the inherent structure contribution to the stress fluctuations. Since these fluctuations are essentially temperature independent, this term would lead to a divergence of the low frequency shear modulus. This means that, under the constraint of the given containment (or the boundary conditions of the simulation box), the particles possibly cannot be packed in a way which zeroes the total stress, as discussed in the previous section. We note that this constraint is imposed on the entire box, so it does not affect the results for local elastic quantities where the subvolume is embedded in a larger system (e.g.\ see [\onlinecite{JLB}] or [\onlinecite{pab}]). At high temperatures this term also does not play an important role: for formal reasons due to the $1/T$ prefactor, and for physical reasons due to permanent stress redistributions activated by thermal motion at high temperatures. These stress fluctuations are usually much larger than their IS contribution (see figure $\ref{fig3}$). \begin{figure}[h!] \includegraphics[scale=0.65]{rel.pdf}% \caption{\label{fig3} Fraction of the stress variance stemming from the IS contribution, which increases beyond $50\%$ when the system enters the supercooled regime.
At this point stress redistributions through thermal activation become so slow that the external constraint set by the boundary conditions becomes non-negligible and requires a correction to the shear moduli.} \end{figure} We identify the IS contribution to the shear stress fluctuation as a \textit{geometric} frustration of the system, accompanying the \textit{kinematic} frustration which has been discussed in the previous section on the nonequilibrium glassy state. Again, this contribution is not an inherent material property but reflects external constraints on the relaxation dynamics that become significant at low temperatures. The second term in ($\ref{BG12}$) is temperature independent, and the benefit of equation ($\ref{BG13}$) is that we can read off how the stress fluctuations obtained from a simulation should be corrected to extract the true zero temperature properties of the material when frozen-in stresses can be neglected. Alternatively, the material properties can be calculated from the first term in ($\ref{BG13}$), which does not contain the IS shear stress itself but only its derivative. The latter is nonzero irrespective of any constraint on the system. This term has already been deduced in [\onlinecite{Lutsko}]; however, to the best of our knowledge it has so far not been numerically tested for amorphous systems. Even though equation ($\ref{BG12}$) is, strictly speaking, only valid in equilibrium at low temperatures, we propose to correct the zero frequency shear modulus in the glassy regime as well, since the inherent structure stress already plays a predominant role in this temperature range, as can be seen in figure $\ref{fig3}$. The term $B$ in equation ($\ref{BG12}$) describes the slope at which the fluctuation term departs from its zero temperature limit.
Among all terms contributing at this order in $k_{B}T$ we neglect those which contain IS contributions (see Appendix B) and obtain the temperature dependence of the fluctuation term close to $T=0$. \\ In conclusion, we have obtained a detailed picture of the computation of the high and low frequency shear moduli of a glass forming system: while in the high temperature regime the fluctuation term cancels the infinite frequency shear modulus, it decreases in the supercooled regime. This means that the glass forming material develops an elastic behavior. Due to the non-thermal IS contribution, the fluctuation term would increase again upon further cooling in the glassy phase. As we correct for this contribution, we observe a small decrease of the fluctuation term towards the low temperature limit (see figure $\ref{fig4}$). The difference between $G_{\infty}$ and the fluctuation term is depicted in figure $\ref{fig5}$. At high temperatures $G_{0}$ is essentially zero, meaning that the fluid does not support stresses on a long time scale. Entering the supercooled regime, the material behaves elastically, which is mirrored by an increase of the zero frequency modulus. As the temperature is reduced further, this elastic behavior becomes more pronounced (see figure $\ref{fig5}$). \begin{figure} \includegraphics[scale=0.65]{gzero4.pdf}% \caption{\label{fig4} High frequency shear modulus (blue, open circles) according to equation ($\ref{BG3}$). Filled, black circles show simulation results for the fluctuation term from ensemble averaging with the proper IS corrections applied (see text). The red square is the zero temperature limit according to the first term in equation ($\ref{BG13}$). The red line corresponds to perturbation theory results for the fluctuation term of the Born-Green theory. The low frequency shear modulus is given by the difference between the black and blue data points.
We distinguish four temperature regimes: regime $I$ is the high temperature regime, in which equations ($\ref{BG1}$) and ($\ref{BG5}$) hold equally well for calculating the high frequency modulus. In regime $II$ the system is supercooled. The fluctuation term no longer fully cancels $G_{\infty}$, meaning that the material develops an elastic behavior. Regime $III$ is the glassy state, in which both frozen-in and IS stresses bias the calculation of the fluctuation term and are corrected for to extract inherent material properties irrespective of external constraints or history dependence. Regime $IV$ is the low temperature glass phase, in which the fluctuation term can be computed with equation ($\ref{BG12}$).} \end{figure} \begin{figure} \includegraphics[scale=0.65]{gre0.pdf}% \caption{\label{fig5} Zero frequency shear modulus obtained by subtracting the fluctuation term from the high frequency shear modulus according to equation ($\ref{BG4}$). Black circles show simulation results with the proper IS corrections applied, and the red square is the zero temperature limit according to the first term in equation ($\ref{BG12}$). The red line corresponds to perturbation theory results for the fluctuation term of the Born-Green theory.} \end{figure} \label{IK10} \section{Conclusion} \noindent In this note we discussed the origin of the inherent structure stress and traced it back to the particular choice of boundary conditions. In view of equation ($\ref{IK4}$), the IS stress can be understood as the net forces penetrating the boundary of the simulation box. Notably, formula ($\ref{IK4}$) exactly coincides with the so-called ``Method of Planes'' definition of the stress evaluated at the boundaries of the simulation box. \cite{mop1,mop2} This analytic expression allowed us to explain the scaling of the IS stress with the system size (see appendix).
Moreover, it might serve as a starting point for understanding other properties of the IS stress, e.g.\ its temperature independence or its absolute magnitude. Finally, we would like to address the question of the physical meaning of the IS stress. It is obvious that in a sufficiently low temperature regime the configurational part of the stress tensor becomes dominant over the kinetic contribution. Starting from the Green-Kubo formula for the viscosity\cite{zwanzig}, $\eta=\frac{V}{k_{B}T}\int_{0}^{\infty}\langle \sigma_{xy}(t)\sigma_{xy}(0) \rangle dt$, it has been noted that the IS stress autocorrelation describes the onset of highly viscous behavior as a liquid enters its supercooled regime.\cite{Harro} Hence, as the system explores the potential energy landscape, hopping from one minimum to another, the underlying IS stress fluctuations determine its viscosity in the linear response regime. In view of equation ($\ref{IK4}$), this means that a rearrangement of forces at the boundary of the system contains enough information to characterize the inherent structure in which the system currently resides. However, two caveats come along with this notion. First, the IS stress itself does not provide any obvious information on the time scale on which its autocorrelation decays or the frequency at which the hopping between the energy minima occurs. It is merely associated with a geometric frustration which is imposed on the system by choosing particular boundary conditions. Second, one should not conclude from ($\ref{IK4}$) that the cooperative rearrangements taking place at the transition from one minimum to another happen primarily at the boundary of the system. They occur anywhere in the system, but the energy minimization zeroes all net forces on the particles locally, leading to a redistribution of the boundary forces. In this sense the boundary conditions introduce a frustration which is present everywhere in the system.
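The Green-Kubo route just mentioned can be illustrated with synthetic data. The sketch below assumes an Ornstein-Uhlenbeck stress signal with variance $c_{0}$ and correlation time $\tau$ (illustrative parameters, unrelated to the simulations in this note), for which $\eta=(V/k_{B}T)\,c_{0}\tau$ holds exactly:

```python
import numpy as np

# Green-Kubo sketch with synthetic data: the shear stress is modeled as an
# Ornstein-Uhlenbeck process with exponential autocorrelation c0*exp(-t/tau),
# so eta = (V/k_B T) * c0 * tau.  All numbers are illustrative.
rng = np.random.default_rng(2)
V_over_kT = 1.0
tau, c0, dt, nsteps = 2.0, 0.5, 0.01, 500_000

sig = np.empty(nsteps)
sig[0] = 0.0
f = np.exp(-dt / tau)
noise = rng.normal(0.0, np.sqrt(c0 * (1.0 - f**2)), nsteps)
for n in range(1, nsteps):          # exact discrete OU update
    sig[n] = f * sig[n - 1] + noise[n]

# stress autocorrelation up to a cutoff of 8*tau, then the Green-Kubo integral
tmax = int(8 * tau / dt)
acf = np.array([np.mean(sig[:nsteps - s] * sig[s:]) for s in range(tmax)])
eta = V_over_kT * np.sum(acf) * dt
print(eta, V_over_kT * c0 * tau)    # eta approaches V*c0*tau/(k_B T)
```

The same estimator applied to a simulated $\sigma_{xy}(t)$ would require care with the upper integration cutoff, which is set here to $8\tau$.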
In view of the previous discussion, the following overall picture for the role of the IS stress emerges: the IS stresses and the redistributions of forces may amount to a macroscopic scalar observable which characterizes the inherent structure, such that its autocorrelation mirrors the path of the system through configuration space; it does not, however, contribute to the viscosity through any associated timescale, as it carries no dynamical information itself. It does affect the elastic properties of the system, as it leads to a diverging contribution to the fluctuation term of the Born-Green theory. The reason for this behavior is that the IS contribution to the fluctuation term is of non-thermal origin. When this term is divided by $k_{B}T$, it leads to a divergence of the fluctuation term and therefore to results for the elastic moduli which are biased by the external constraint imposed on the system. Since every rheological (or mechanical) measurement goes together with a change of the boundary of the system, and as there is no experimental indication for a low-temperature divergence of the infinite frequency shear moduli in glassy systems \cite{Zhang,jiang}, it seems appropriate to remove the IS contribution to the shear modulus in order to correct for external constraints in the preparation procedure of the IS. \begin{acknowledgments} We thank Hans-Christian {\"O}ttinger and Jean-Louis Barrat for insightful discussions and gratefully acknowledge the Swiss National Science Foundation for providing funding under Grant No.\ 200021\_134626. \end{acknowledgments}
\section{Introduction} During the past 65 years, the search for extraterrestrial intelligence (SETI) has proceeded with innovative attempts to detect light generated by civilizations residing elsewhere in the universe, e.g., \citep{Cocconi_Morrison1959, Drake1961, Tarter2001, Siemion2013}. Searches have been conducted at a variety of wavelengths, most prominently with radio telescopes: at centimeter wavelengths with Arecibo (L band), at decimeter wavelengths with the Green Bank radio telescope, and at centimeter wavelengths with the Allen Telescope Array \citep{Drake1961, Werthimer2001, Korpela2011, Tarter2011}, some targeting arbitrary stars and galaxies and recently aimed toward known exoplanets \citep{Siemion2013}. Searches at optical wavelengths (``OSETI'') have also been conducted, both for continuously transmitting lasers and for sub-$\mu$s-duration light pulses \citep{Reines2002, Wright2001, Howard2004, Stone2005, Howard2007}. To date, no convincing evidence of other technological civilizations has been found. New SETI efforts have begun that employ new wavelengths and achieve greater sensitivity. One search was conducted at mid-infrared wavelengths to detect the waste thermal emission from the vast machinery of advanced civilizations, often called ``Dyson spheres'' \citep{Wright2014_J}. Some SETI efforts involve detecting the dimming of starlight as planet-sized technological constructs pass in front of stars \citep{Walkowicz2014, Forgan2013}. Future SETI efforts will take advantage of the next generation of large radio telescopes, including the Square Kilometre Array \citep{Siemion2014}. There are also proposals to broadcast (rather than just receive) bright beacons as Galactic marquees to advertise our human presence, but international agreements are needed before we compose and transmit brilliant messages moving irreversibly at the speed of light, with unknown consequences \citep{Shostak2013, Brin2014}.
Recently, there has been increased interest in searches for extraterrestrial optical and near-infrared lasers, including those emitting short-duration pulses or periodic signals \citep{Howard2004, Howard2007, Korpela2011, Drake2010, Mead2013, Covault2013, Leeb2013, Gillon2014, Wright2014_S}. The advantages of interstellar communication by optical and IR lasers include the ease of producing high intensity, diffraction-limited beams that can be transmitted over Galactic distances, and the relative privacy afforded at high data rates exceeding $10^{12}$ bits per second \citep{Townes_Schwartz1961, Wright2001}. Military and commercial lasers have demonstrated continuous-wave power near 30 kW, and 100 kW power is planned \citep{lockheed2014}. Astronomers inadvertently shoot laser beams toward interesting astronomical objects, including exoplanets and the Galactic center, by employing laser guide star adaptive optics on large telescopes, with typical power of $\sim$7 W. NASA has demonstrated the use of pulsed infrared lasers for Earth-to-Moon communication at a data rate of 622 Megabits per second \citep{Buck2013}. Optical and IR lasers would also serve for satellite-to-satellite data transfer. For communication between more distant celestial bodies, optical and IR lasers may be especially useful due to the tight beam, offering enhanced energy efficiency and reduced eavesdropping. In this paper, we search for laser emission coming from spatial regions separated from stars by tens to hundreds of AU in projection, where the laser light does not have to compete with the starlight. Section \ref{sec:sens} contains a further discussion of the relative strengths of lasers needed for detection by our method. An advanced civilization might emit many lasers of varying beam sizes, constrained by available optical technology, resulting in unavoidable spill-over at the receiver. The laser light would continue propagating in the original direction, past the intended receiver.
The filling factor of such beams in the Milky Way Galaxy is completely unknown. More purposefully, an advanced civilization seeking to indicate its presence to nearby habitable planets might use lasers to offer a bright and unambiguous beacon of its presence. Natural astrophysical conditions exist that allow gas to lase in the near-infrared, but the only observed astrophysically pumped lasers at optical wavelengths occur in non-LTE conditions that produce population inversions in neutral oxygen, causing extraordinary emission at [O I] 8446 \AA\ \citep{Johansson2004, Johansson2005}. The presence of a bright, unresolved emission line at any other optical wavelength would be worthy of further attention, including the possibility of a non-astrophysical source. The past 20 years of discoveries of exoplanets motivate a new perspective about SETI. To date, over 1500 exoplanets have been confirmed with accurate orbits, and another 3300 planet candidates have been identified by {\it Kepler} as likely real, at the 99\% confidence level, but still in need of confirmation \citep{Rowe2014a, Rowe2014b, Mullally2014, Wright2011_J}. Stars with known exoplanets make excellent targets for SETI searches, and such planetary systems have already been surveyed for technological transmissions at radio wavelengths \citep{Siemion2013, Gautam2014}. Planetary systems found by {\it Kepler} are particularly valuable for SETI searches because their edge-on orbital planes enhance the probability of our detecting spillover transmissions between planets and satellites within those planetary systems\footnote{The powers needed to transmit within a planetary system are clearly dwarfed by those needed over interstellar distances.}. However, {\it Kepler} showed that over 50\% of both solar-type stars and M-dwarf stars have planets within 1 AU of the host star \citep{Petigura2013, Dressing2013}.
Planets smaller than 1.5 Earth-radii are common and often harbor largely rocky interiors \citep{Weiss2014, Rogers2014, Wolfgang2014, Marcy2014}. Thus, all FGKM-type stars have comparable value as targets for SETI searches, whether or not they have known exoplanets, as the majority of stars harbor planets of 1-2 Earth-radii within a few AU \citep{Howard2012, Petigura2013}. Only close binary stars, separated by less than a few AU, are probably poor sites for SETI searches, due to a lack of stable planetary orbits. Otherwise, planets with liquid water may permit progressively complex organic chemistry leading toward nucleotides and the duplication of RNA within fatty acid proto-membranes \citep{Adamala2013, Szostak2012}. In Section~\ref{sec:hires} we describe the selection of target stars and the spectroscopic instrumentation used to detect optical laser emission lines. In Section~\ref{sec:algorithm} we describe our laser line detection algorithm. In Section~\ref{sec:results} we present the results of our search for laser lines, and in Section~\ref{sec:sens} we specify the detection thresholds of laser emission and the strength of transmitters to which we are sensitive. Finally, in Section~\ref{sec:Disc} we offer a discussion of the results in the context of the extraterrestrial transmission of laser beacons. \section{Target Stars and Spectroscopic Search for Laser Emission} \label{sec:hires} \subsection{Target Stars} \label{sec:targets} All 2796 target stars in this search for laser emission were stars for which high resolution spectra had already been obtained at the Keck Observatory as part of a study of their exoplanets. There were two populations of target stars, drawn from our two broad exoplanet programs.
The first population of targets stems from the California Planet Search (CPS), which continues to make repeated Doppler measurements of over 3000 FGKM main sequence stars brighter than Vmag = 8.5 and northward of declination -25 deg, ongoing for the past 10-20 years, using both the Lick and Keck Observatories \citep{Fischer2014, Howard2014, Marcy2008, Wright2011_J}. Numerous papers have been written about the detections and properties of the exoplanets found among these 3000 stars, e.g., \citep{Marcy2008, Johnson2011, Howard2014}. Roughly 10\% of these stars have known planets detected by the RV method \citep{Cumming2008, Howard2010}. All M dwarfs brighter than Vmag=11 are also being followed, typically with several Doppler measurements per year. The CPS exoplanet survey continues primarily with the Keck 1 telescope now that the iodine cell at the Lick Observatory 3-m Shane telescope has been retired. For the second population of targets, we had taken spectra with the Keck 1 HIRES spectrometer of all 1100 {\em Kepler} Objects of Interest (KOIs) that are brighter than Kepmag=14.2. We also have spectra of another 200 fainter KOIs that harbor 4, 5, or 6 transiting planets. These 1300 KOIs are predominantly FGKM main sequence stars identified by {\em Kepler} as likely harboring one or more planets, and over 90\% of them have real planets, as opposed to false positives \citep{Morton2011}. Nearly all of the multi-transiting planet systems are already confirmed as real planets \citep{Lissauer2014, Rowe2014a}. A list of these KOIs and measurements of each star's effective temperature (Teff), surface gravity (log g), iron abundance ([Fe/H]), and projected rotational velocity (Vsini) are provided by \citep{Petigura2015, Howard2015}. Most spectra were taken with an exposure meter that stopped the exposure when a pre-set number of photons was received per pixel, with the typical goal of achieving a signal-to-noise ratio in the reduced spectrum of 100-200.
The resulting exposure times varied from 1 to 45 minutes to accommodate the factor of $\sim$100 range in brightness of the target stars, which in turn depends on their intrinsic luminosity in visible light and their distance. The entrance slit of the HIRES spectrometer was oriented perpendicular to the horizon using an optical image rotator. Thus, the orientation of the slit was neither N-S nor E-W in equatorial coordinates, but depended on the Hour Angle at the time of observation and the Declination. These spectra of 3000 main sequence and subgiant stars offer a fresh opportunity to search for pulsed and continuous optical laser emission. We carried out a similar search for unresolved laser lines in HIRES spectra of a small subset of the stars surveyed here \citep{Reines2002}. Our present sample of stars is much larger, including KOIs, and the laser-line search algorithm is greatly improved, operating on raw CCD images rather than reduced spectra. The first target population of nearby FGKM stars is brighter than Vmag=11, with most brighter than Vmag=8.5. The typical exposure times to obtain the spectra were 1 minute at Vmag=7 and 8 minutes at Vmag=10, varying by factors of up to three depending on clouds and seeing. The {\em Kepler} target stars typically have $Vmag = 11-13$, forcing exposure times of 20-45 minutes, depending on brightness, clouds, and seeing. The longer exposure times for the fainter population of stars permit detection of laser emission arriving with lower flux at the Keck telescope, accumulating the requisite integrated photons for detection (see Section~\ref{sec:sens}). We also obtained spectra of a small number of nearby galaxies, supernovae, and planetary nebulae, motivated by isolated projects of special interest. In this work, we only examined those spectra (of the two populations of stars) obtained with the 14x0.87 arcsec, ``C2'', entrance slit at the HIRES spectrometer.
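The interplay between magnitude, photon count, and exposure time follows from photon-limited statistics: SNR $=\sqrt{N}$ for $N$ photons, and the photon rate scales with stellar flux as $10^{-0.4V}$. A minimal sketch (illustrative numbers only; real exposure times also depend on seeing and clouds, as noted above):

```python
import math

# Photon-limited scaling sketch (illustrative, not instrument specifications):
# SNR = sqrt(N) for N photons, and the photon arrival rate scales with stellar
# flux as 10**(-0.4 V), so equal photon counts at magnitudes v1 and v2 require
# exposure times in the ratio 10**(0.4 * (v1 - v2)).
def snr(n_photons):
    return math.sqrt(n_photons)

def relative_exposure(v1, v2):
    """Exposure-time ratio for equal photon counts at magnitudes v1 and v2."""
    return 10.0 ** (0.4 * (v1 - v2))

print(snr(1e4))                      # 100.0: SNR ~ 100 needs ~1e4 photons
print(relative_exposure(12.0, 7.0))  # 100.0: 5 magnitudes = factor 100 in flux
```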
The long 14 arcsec slit enables examination of the region angularly near the star (``sky'') to detect laser emission with no competition from the star's light. No filters were used during the exposures; notably, the KV370 filter, which removes light shortward of 370 nm, was not used. A small amount of UV light from second-order diffraction off the cross disperser leaks onto the CCD longward of a nominal 600 nm (300 nm in second order). Many observations used here had the iodine cell in place, but this does not affect the search for laser lines displaced spatially from the star. We have a total of 14,380 spectra of 2796 stars obtained with that C2 decker, with many stars having multiple spectra taken between 2004 and 2014. The 2796 target stars comprise 1,368 KOIs, with the remaining targets being nearby FGKM stars in the CPS exoplanet program, plus some additional planet-search targets. Figure \ref{fig:targetRAandDEC} shows the locations in RA and DEC of all 2796 target stars for this SETI search. The figure shows that the target stars are located at all RA and are mostly north of DEC = -30 deg. There is a concentration of 1368 targets in the {\em Kepler} field of view, between RA = 19-20 hr and DEC = 35-50 deg. \\ \subsection{HIRES Spectrometer setup} \label{sec:HIRES_setup} All 14,380 spectra were obtained with the HIRES spectrometer on Keck 1 \citep{Vogt1994}. The spectra span wavelengths between 3640 and 7890 \AA, split among 49 spectral orders that fall on three 2048 by 4096 pixel CCDs. The separation of the spectral orders varies from 6 arcsec in the near UV to 43 arcsec in the far red \citep{Griest2010, Vogt1994}. In order to diminish the CCD readout time, three CCD pixels are binned in the spatial dimension on chip. The spectral resolution is $\lambda/\Delta \lambda$=60,000 (where $\Delta \lambda$ is the FWHM of the instrumental profile), i.e., 5 km s$^{-1}$.
Each pixel spans 1.3 km s$^{-1}$ in the wavelength direction and 0.38 arcsec (after binning) in the spatial direction. The search for laser lines was performed on the raw CCD images, not the reduced spectra. We specifically searched for laser emission in the spatial region between 2-7 arcsec from the star, in both directions along the length of the slit. All raw CCD images of the spectra used here are available to the public at the Keck Observatory Archive. We show one example of a raw CCD image in Figure \ref{fig:keckspec}. The use of the long 14x0.87 arcsec C2 entrance slit placed a spectrum of the ``sky'' on either side of the star, making it possible to probe that sky for laser emission coming from the region near the star with little contamination from the starlight itself. In this paper, ``rows'' denote pixels along the direction of the dispersion of the spectrum and ``columns'' denote pixels along the spatial direction perpendicular to the dispersion, as shown in Figure \ref{fig:keckspec}. \subsection{Laser Emission Properties} \label{sec:laser} We define ``laser emission'' for this search as coming from sources located so far away that they would be spatially unresolved as viewed by the Keck telescope on Mauna Kea, corresponding to an angular size smaller than $\sim$0.8 arcsec, which is set by the atmospheric seeing. Any laser-emitting machine having a projected size $L$ would be unresolved if located at a distance $d > 2\times 10^5 L$. Machines with sizes measured in tens of meters would obviously be unresolved at the distances of the nearest stars. We restrict our search to such spatially unresolved sources. We also define ``laser emission'' to be emission with a linewidth negligible compared to the resolution of the spectra we obtained with Keck HIRES, $R = \lambda/\Delta\lambda = 6\times10^4$. Here, $\Delta\lambda$ is the FWHM of the typical instrumental profile of HIRES as used here.
For comparison, typical modern continuous wave (CW) lasers have linewidths limited by the coherence length, the Doppler effect of the atoms, and the mechanical stability of the laser cavity, yielding linewidths smaller than 1 GHz, i.e., monochromatic with $\lambda/\Delta\lambda > 1\times10^6$, rarely reaching the fundamental quantum Schawlow-Townes laser linewidth of under 1 kHz. (Pulsed dye lasers have much broader linewidths.) For reference, HeNe lasers with wavelength 632.8 nm and a linewidth of 1 MHz ($\Delta\lambda = 1\times10^{-6}$ nm) are commercially available and unresolvable by the HIRES spectrometer. As another reference, a typical CW laser guide star for adaptive optics systems has an output power of 20 W and operates at the blueward sodium D line, at a wavelength $\lambda$=589.1 nm and a linewidth of 5 MHz, many orders of magnitude narrower than resolvable by HIRES. Thus, a necessary condition to be deemed ``laser emission'' here is a line width narrower than HIRES can resolve. Any actual extraterrestrial lasers with linewidths greater than the resolution of HIRES will not be identified in this search. Such broadband signals could arrive from technological civilizations, but this work will not detect them. Any such technological broadband emission is also more difficult to distinguish from the kinematically or thermally broadened lines of naturally occurring astrophysical sources. Any laser source separated from the star by more than the seeing disk radius in the spatial direction will appear in the telescope focal plane as a separate, unresolved image that will not overlap with the star light. We thus focus our attention here on portions of the spatial profile separated from the star by the seeing profile. As the seeing is typically 0.8 arcsec at optical wavelengths on Mauna Kea, we examine here the 4.5 arcsec of slit real estate commonly called ``sky'', located more than 2 arcsec away from, and on either side of, the stellar spectrum.
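Both ``unresolved'' criteria reduce to simple arithmetic, sketched below with the seeing and HeNe linewidth values quoted above (constants rounded):

```python
import math

# Spatial criterion: a source of size L at distance d subtends L/d, so it is
# unresolved when L/d < theta (0.8 arcsec seeing), i.e. d > L/theta.
ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond
theta = 0.8 * ARCSEC
print(1.0 / theta)                    # ~2.6e5, matching d > 2e5 L in the text

# Spectral criterion: a 1 MHz HeNe linewidth converted to a wavelength width,
# compared with the HIRES resolution R = 6e4.
c = 2.998e8                           # speed of light, m/s
lam = 632.8e-9                        # HeNe wavelength, m
dnu = 1.0e6                           # linewidth, Hz
dlam = lam**2 * dnu / c               # ~1.3e-15 m, i.e. ~1e-6 nm as quoted
R_line = lam / dlam                   # ~4.7e8
print(R_line / 6e4)                   # ~8000x narrower than HIRES can resolve
```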
A potential laser would be effectively a point source in space and wavelength. We expect it to appear on the raw CCD image as a ``dot'', with a two-dimensional point spread function (2-D PSF) shape corresponding to the seeing disk in the spatial direction and to the spectral instrumental profile of HIRES in the wavelength direction. Such dots would appear in the ``sky'' region of the slit image, where typically only a few photons land, allowing the laser to stand out against that faint background. A pulsed laser would also be detectable as a dot, provided the pulse duty cycle and pulse energies produced a time-averaged power sufficient to yield the threshold photons during an exposure, as described in Section~\ref{sec:sens}. \subsection{Specifying the 2-D PSF of Candidate Laser Lines} \label{sec:method} We search for laser emission by examining the three ``raw'' CCD images written by HIRES after each exposure. We do not use reduced spectra. This allows us to search for laser emission ``dots'' consistent with the shape of the PSF both spatially and spectrally, as defined above in Section~\ref{sec:laser}. This two-dimensional search is one key difference between this work and the earlier effort \citep{Reines2002}, which made use of reduced spectra and therefore did not benefit from spatial PSF information. In addition, by retaining the spatial information, we more easily distinguish unresolved laser sources from spatially resolved sources such as nebulae, extended galaxies, and night sky emission lines. In broad outline, our algorithm treats each observation separately by measuring its PSF in both the spatial and spectral directions, as described in Section~\ref{sec:algorithm}. The code then steps pixel-by-pixel along the ``sky'' pixels located spatially adjacent to the stellar spectral order and performs diagnostics on subimages to identify laser line candidates.
The point spread function (PSF) is modeled as a 2-dimensional Gaussian in the spatial and wavelength directions, as shown in Figure \ref{fig:interior}. The spatial and wavelength full-width-half-maxima (FWHM) are set by measuring the average FWHM of the spectral orders containing the stellar spectrum and the average FWHM of the telluric night sky emission lines at various wavelengths, respectively. The FWHM in both directions varies by 5-10\% over the echelle spectrum format due to the performance of HIRES optics over the entire echelle field of view. This variation plays only a minor role in our criteria for goodness-of-fit of any detected laser lines due to our relaxed standards for accepting candidate laser lines, as described below. The FWHM of the spatial profile varies by 6$\%$, RMS, over the echelle format in a given exposure due mostly to camera optics. Of course the spatial profile changes greatly from hour to hour and night to night as atmospheric seeing conditions vary. We account for this variability by measuring the width of the stellar spectrum over the echelle format in each exposure. We measure the spatial FWHM in 10 evenly spaced locations within each spectral order. At each location we bin 11 columns \footnote{Columns containing outlier count levels, often due to cosmic rays, are identified and ignored in the binning}. We then fit a Gaussian to the resulting binned spectrum, and obtain a value for the FWHM at the location. The result is a set of 10 FWHMs for each spectral order. To get the local FWHM within an order, we linearly interpolate between the ten points. The error imparted by the linear interpolation is less than a percent, and is accounted for later in Section \ref{sec:testing}. The FWHM of the instrumental profile in the wavelength direction is measured from night sky emission lines that are intrinsically unresolved at our resolution of $R=6\times10^4$. 
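Both FWHM measurements described here reduce to the same operation: bin a 1-D profile (11 CCD columns for the spatial direction; the rows of a night sky line for the wavelength direction), fit a Gaussian, and convert the fitted $\sigma$ to a FWHM. A minimal sketch, with illustrative array shapes and names (not the actual pipeline code):

```python
# Fit a Gaussian to a binned 1-D profile and return its FWHM in pixels;
# then interpolate a local FWHM between the 10 per-order sample points.
import numpy as np
from scipy.optimize import curve_fit

SIGMA_TO_FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

def fit_fwhm(profile):
    """Fit a Gaussian to a binned 1-D profile; return the FWHM in pixels."""
    x = np.arange(profile.size, dtype=float)
    p0 = (profile.max() - profile.min(), float(np.argmax(profile)),
          2.0, float(profile.min()))
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return abs(popt[2]) * SIGMA_TO_FWHM

# Local spatial FWHM within an order: linear interpolation between the
# 10 sampled locations (placeholder positions and values shown here).
sample_cols = np.linspace(50.0, 1950.0, 10)
sample_fwhms = np.full(10, 4.7)
local_fwhm = float(np.interp(1000.0, sample_cols, sample_fwhms))
```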
We use the atmospheric [OI] lines at 6300\AA \ and 5577\AA, which are present in all observations, are quite bright, and vary in FWHM by less than a few percent from night to night due to our vigilance focusing the HIRES spectrometer. These night sky lines provide an acceptable proxy to a delta function in wavelength, as their line widths are well below the wavelength resolution achieved by HIRES \citep{Osterbrock1996}. To characterize the FWHM in wavelength, the rows containing the night sky lines are binned together (less those with significant contributions from either the stellar spectrum or cosmic ray hits). A Gaussian is then fit to the resulting binned vector. The average of these two values for the FWHM - at 6300\AA \ and 5577\AA \ - provides an approximation to the FWHM of the instrumental profile on a given night. Some other atmospheric lines on the redder end of the spectrum were considered for use, but most of them have greater variability in their brightness, which changes with time of day and season. Nonetheless, the FWHM of the instrumental profile varies by $\sim$10\% over the echelle format due to the normal optical performance over the full field of view, and we account for this variation in Section \ref{sec:testing}. Rather than construct a 2-D PSF anew for each wavelength region, we generate a small set of PSFs that vary in the spatial direction, as observed. As we scan the CCD for laser emission, we draw from a library of PSFs rather than generating them anew at each location. While there is some loss of precision in drawing PSFs this way, we tested the loss of detectability of laser emission incurred by these simplified PSFs and found the resulting loss in sensitivity to be only marginally significant. \subsection{Location of the Stellar Orders} We aim to search for laser emission that is spatially separated from the stellar image. 
Each spectral order on the CCD has a width set by the size of the stellar image, set by the atmospheric seeing during the exposure. Thus, for each image we had to determine, to within one pixel, the locations of the ridge and the width of each spectral order containing the starlight. We aim to analyze only the ``sky-illuminated'' portions of the CCD that have no stellar light but may contain laser light. We ignore, of course, those CCD pixels between the orders on which no ``sky'' light falls, as no laser light could hit them either. That is, the slit image does not extend over the entire CCD region between each order for all wavelengths longward of 5500 \AA. To locate the ridge and width of the orders containing stellar light we first perform a 3x3 median filter on all three CCD images. This removes cosmic rays \footnote{Cosmic rays usually hit in one or a few pixels, and only sometimes do they raise the counts in more than 4 pixels within a 3x3 region. Note that the 3x3 median filter was used exclusively for the location of the stellar ridge, and was not retained for the subsequent pixel-by-pixel laser search.}, ensuring that the ridge of the stellar spectrum has the most photon counts along the order. To find the ridge of each order we bin each set of 10 columns and identify the row having the maximum value. The locations of the orders vary by less than a pixel from observation to observation due to our consistent optical set-up of the gratings in the HIRES spectrometer and our accurate guiding of the star image on the entrance slit. We perform a constrained linear interpolation between these ridge locations to establish the approximate location of each spectral order. We avoid searching for laser lines within $\pm 1.5 \times {\rm FWHM}_{spatial}$ from the ridge of the stellar order, to avoid contamination from stellar continuum light. Figure \ref{fig:overlap} shows the geometry of the orders and the region between them. 
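The order-tracing step just described (median filter, then bin-and-argmax) can be sketched as follows; names and the bin width are illustrative, not the actual pipeline code:

```python
# Trace the ridge of a spectral order: a 3x3 median filter suppresses
# cosmic rays, then each set of 10 columns is binned and the row of
# maximum counts taken as the ridge location for that bin.
import numpy as np
from scipy.ndimage import median_filter

def trace_ridge(image, bin_width=10):
    """Return (bin_centers, ridge_rows) for a CCD region containing one order."""
    clean = median_filter(image, size=3)   # removes most cosmic-ray hits
    n_bins = clean.shape[1] // bin_width
    centers = np.arange(n_bins) * bin_width + bin_width / 2.0
    ridge = np.array([
        int(np.argmax(clean[:, b * bin_width:(b + 1) * bin_width].sum(axis=1)))
        for b in range(n_bins)
    ])
    return centers, ridge
```

An interpolation over `(centers, ridge)`, e.g. with `np.interp`, then gives the approximate ridge row at every column.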
Shortward of approximately 5500 \AA, the slit images of the starlight overlap between the orders. We use our determination of the spatial profile to anticipate such regions that must be avoided in the search for laser lines. Where the stellar slit images overlap, there simply are no ``sky'' pixels between the orders, so these regions of the CCD were ignored in the search for laser lines. For spectral orders close enough spatially to each other that they have overlapping ``sky'', we do search for laser lines in that sky region, as shown in Figure \ref{fig:overlap}. In such cases, we cannot determine which order, and which of the two corresponding wavelengths, is associated with any laser line we may detect. \subsection{Telluric Lines and Artifacts} Terrestrial nightglow emission lines from the night sky (such as [OI] 5577.3 and 6300.3 \AA) are too bright to permit detection of laser lines and they can fool our algorithms designed to search for laser lines, especially on their edges. We also ignore a large artifact in the raw HIRES images, a diagonal stripe that runs through the center of the middle CCD known colloquially as the ``Meteor'' \footnote{the Meteor is actually scattered light dispersed twice by the spectrograph \citep{HIRESDATA}}. On the other hand, faint night sky emission lines are often sufficiently bright to fool our laser-line searching algorithms, producing an apparently significant signal-to-noise ratio and PSF-like partial shapes at their ends ($\pm 7$\rq\rq{} from the center of the stellar spectrum). Such faint night sky lines often meet the PSF criterion in the wavelength dimension, and can marginally match the criterion in one spatial direction, at the spatial edge of the night sky line. This, coupled with Poisson fluctuations in the brightness and in the background count levels, can cause the ends of night sky lines to be identified erroneously as candidate laser lines. We ameliorate this problem in two ways. 
First, we compare pixel locations to a catalog of OH and O$_2$ night sky lines at Mauna Kea \citep{Osterbrock1996}. Second, to avoid missing any, we convolve the CCD images with a 2-D Gabor filter oriented in the direction of the night sky lines, and omit results from wavelength values with a large spatially oriented signal. The latter technique is necessary because the laser search pipeline is sensitive enough to pick out faint night sky lines, and not all lines are present in all observations. \section{Identifying Laser Lines by S/N and $\chi^2$} \label{sec:algorithm} To detect candidate laser emission lines in the ``sky'' region of the CCD adjacent to the stellar spectrum, we constructed a routine that steps pixel by pixel across all three CCDs searching for emission with the requisite PSF properties described in Section~\ref{sec:method}. We construct ``postage stamps" with typical sizes of 11 x 19 pixels (spatially and in the wavelength direction, respectively) centered on each pixel, and we search for emission that meets specified criteria within each stamp. A representative postage stamp on the CCD is shown in Figure \ref{fig:interior}. We describe below the two null hypothesis ``gates'' by which we rule out prospective laser emission lines, involving a requisite S/N ratio for the emission and a goodness-of-fit criterion for its PSF shape. The first gate to be passed by the pixel and its postage-stamp neighboring pixels tests the signal-to-noise ratio (S/N) of the prospective laser line. We establish a S/N threshold high enough that fluctuations in the arrival of background photons are unlikely to exceed it. The major sources of ``counts'' in a pixel are bias, dark counts, readout noise, scattered light in the spectrometer, background sky light, and laser emission, if any. The bias, dark counts, and readout noise are easily determined, in total, for each postage stamp by measuring the observed fluctuations. 
This ``noise'' is measured by the RMS of the counts in the pixels around the perimeter of each postage stamp, computed as follows: \begin{equation} RMS = \sqrt{\frac{\sum\limits_{i=1}^{N_{per}} (p_i-\mu)^2}{N_{per}}} , \label{eq:rms} \end{equation} where $p_i$ is the number of photons in the $i$th pixel of the perimeter region, $N_{per}$ is the number of pixels in the perimeter region (typically 100-200 pixels, see Figure \ref{fig:interior}), and $\mu$ is the mean value of the number of photons in these pixels. We define $M$ to be the median value of the number of photons in the perimeter region. We adopt the RMS in Equation \ref{eq:rms} as the noise in each pixel contributed by the background fluctuations, bias, dark, readout noise, scattered light in the spectrometer, and sky brightness. The typical RMS is 2-3 photons per pixel. This background noise is adopted for each pixel within the signal region, the interior of the postage stamp, defined as the pixels in the model PSF containing 95\% of the counts. The number of pixels in the signal region ranges from 11 to 25 pixels depending on seeing. The total counts above the background in the signal region are given by \begin{equation} I=\sum\limits_{i=1}^{N_{\rm pix}} (S_i-M) , \end{equation} where $S_i$ is the number of photons in the $i$th pixel of the signal region and $N_{\rm pix}$ is the number of pixels in the signal region (typically 11-25 pixels). We define a metric of the signal-to-noise ratio, ``S/N'', of the laser emission signal as \begin{equation} S/N = \frac{I}{\sqrt{N_{pix}\times RMS^2+I}} , \label{eq:s/n} \end{equation} with the $RMS$ and $N_{\rm pix}$ defined as above. Hence our S/N is the ratio of the total number of photons above background within the signal region to the quadrature-summed noise expected in the signal region, which has contributions from background fluctuations in each pixel and from Poisson fluctuations in the signal. 
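Equations \ref{eq:rms}-\ref{eq:s/n} transcribe directly into code; a minimal sketch operating on the perimeter and signal-region pixels of one postage stamp (array names illustrative):

```python
# S/N metric for one postage stamp: background RMS from the perimeter
# pixels, median-subtracted signal sum I, and S/N = I / sqrt(Npix*RMS^2 + I).
import numpy as np

def snr_metric(perimeter, signal):
    """`perimeter` and `signal` are arrays of photon counts in the perimeter
    and signal-region pixels of one postage stamp."""
    rms = np.sqrt(np.mean((perimeter - perimeter.mean()) ** 2))
    M = np.median(perimeter)              # background level estimate
    I = np.sum(signal - M)                # counts above background
    return I / np.sqrt(signal.size * rms ** 2 + I)
```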
This is not a precise signal-to-noise ratio, as we assume Gaussian distributed background fluctuations. As the total number of counts from the laser grows, the loss of precision from the Gaussian background assumption is quickly dwarfed by Poisson fluctuations in the signal. The goal is not to compute an accurate signal-to-noise ratio of the prospective laser emission signal. Rather it is to compute a quantity that is sufficiently close to the signal-to-noise ratio to identify and rank-order prospective laser emission signals based on their standing above the noise, motivating follow-up analysis and observations to assess the reality of the signal. We establish a second metric for the laser emission based on a goodness-of-fit to the known 2-D PSF. We compute a reduced $\chi^2_{r}$ by fitting the photon counts within the postage stamp with the previously determined 2-D PSF, serving as the model, with the position of the PSF being the only free parameters. We use the same pixels in the postage stamp that were used to compute the S/N. We scale our model PSF to have the same total photon counts as observed in the postage stamp image. When computing the reduced $\chi^{2}_{r}$, we allow only the position of the PSF model to adjust to the observed photon counts in their respective pixels. We compute the reduced $\chi^{2}_{r}$ as \begin{equation} \chi^{2}_{r}=\frac{1}{\nu}\sum\limits_{i=1}^{N_{pix}} \frac{(S_i - E_i)^2}{\sigma_{i}^{2}} \label{eq:chisquare} \end{equation} where $S_i$ and $E_i$ are the photon counts in the $i$th pixel of the postage-stamp region and of the PSF model, respectively, and $\sigma_{i}$ is the associated uncertainty for each pixel. Here, $\nu$ denotes the number of degrees of freedom, in this case the number of pixels used in the sum, minus three fitted parameters: the x-y position of the 2-D PSF model and the total number of photon counts. Subpixel sampling would have allowed for greater precision, but was not employed in this algorithm. 
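For a single trial position of the model, Equation \ref{eq:chisquare} is a short transcription; a sketch with a simple Poisson-plus-background variance per pixel (the noise floor described below is omitted here), with illustrative names:

```python
# Reduced chi^2 of a postage stamp against the model 2-D PSF, scaled to the
# observed total counts, with nu = N_pix - 3 (x-y position + normalization).
import numpy as np

def reduced_chi2(stamp, model_psf, rms):
    S = np.asarray(stamp, dtype=float).ravel()
    E = np.asarray(model_psf, dtype=float).ravel()
    E = E * (S.sum() / E.sum())             # scale model to observed counts
    sigma2 = np.maximum(S, 0.0) + rms ** 2  # Poisson + background variance
    nu = S.size - 3
    return float(np.sum((S - E) ** 2 / sigma2) / nu)
```

In the pipeline the model's position within the stamp is the free parameter; here a single alignment is evaluated.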
The reduced $\chi^2_r$ statistic is a sufficient goodness-of-fit metric for this application, as the PSF model is locally linear in the pixel location \citep{Andrae2010}. \subsection{Setting Thresholds in S/N and $\chi^{2}$} \label{sec:testing} We adopt detection criteria for laser emission demanding that the S/N be greater than some threshold and that $\chi^{2}_{r}$ for the fit to the 2-D PSF be near unity, below some threshold. That is, we test the null hypothesis that there is no convincing laser emission within each postage stamp. The null hypothesis can be rejected if the S/N resides above some threshold and the $\chi^2_r$ statistic is less than some threshold, both to be determined as follows. To establish thresholds for S/N and $\chi^{2}_{r}$, we tested the ability of our code to pick out simulated laser lines over the entire format under many conditions. We adopted a strategy of injection and recovery of laser emission. We randomly selected actual observations and inserted synthetic laser lines composed of the 2-D PSF with superimposed Poisson fluctuations. We injected fake laser emission ``dots'' having varying S/N, different locations within the echelle spectrum format on the CCD, and modest (10\%) variations in the 2-D PSF to simulate the actual variability of the PSF from that adopted. These injected fake laser emission 2-D PSFs revealed the distribution of values of S/N and $\chi^2_r$ that the code detected. We also learned the number of false positives per image that results from Poisson fluctuations. We found that our algorithm incurred a steep rise in false positives for S/N less than 7, which corresponds to $\sim$100 photons in the laser signal, after including noise from the background. Nearly one false positive per image occurred even with the S/N threshold at 7, clearly indicating that we could not drop the S/N threshold to lower values without incurring a rapid increase in false positives. 
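The injection step of this strategy can be sketched as follows; only the injection, not the full recovery bookkeeping, is shown, and the names are illustrative:

```python
# Add a Poisson realization of the normalized 2-D PSF, scaled to a chosen
# expected total photon count, at a chosen sky location in the image.
import numpy as np

rng = np.random.default_rng(42)

def inject_laser(image, psf, row, col, total_photons):
    """Add a Poisson-fluctuated fake laser 'dot' centered at (row, col)."""
    h, w = psf.shape
    expected = psf / psf.sum() * total_photons   # expected counts per pixel
    r0, c0 = row - h // 2, col - w // 2
    image[r0:r0 + h, c0:c0 + w] += rng.poisson(expected)
    return image
```

Running the S/N and $\chi^2_r$ gates on many such injections then yields the recovered distributions and false-positive rates described here.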
Injecting fake laser emission right at this threshold of S/N=7, our tests showed 99\% of simulated lasers had $\chi^{2}_{r} \in [0.7,2.0]$, and we had a false positive rate of approximately 0.1 per observation, with each observation containing $\sim 3\times 10^6$ pixels considered as the center of the laser line \footnote{To those familiar with signal-to-noise thresholding, a false positive rate of 0.1 per $\sim$3 million trial pixels seems very high for a signal-to-noise of 7. This reflects the actual noise variance of all sources of noise within a postage stamp region, captured by the noise measure around the perimeter given in Equation \ref{eq:rms}.}. We tested the effect of the code miscalibrating the 2-D PSF, and found only a small rise in $\chi^{2}_{r}$ (see Table \ref{chired}). Changing the simulated 2-D laser profile by as much as 30\% in both dimensions still maintained, on average, $\chi^{2}_{r}$ below 2 for S/N below 10. Similar tolerance was seen for simulated lasers centered with sub-pixel precision. The worst case was a simulated laser situated at the intersection of four pixels, in which case the mean $\chi^2_r$ rose no higher than 1.2 for S/N below 10. The discrepancies in $\chi^2_r$ caused by a miscalibrated 2-D PSF and by untreated sub-pixel displacements in the laser line grow with increasing S/N. A laser line containing hundreds or thousands of photons will have small fractional Poisson errors compared to weak laser lines. Any mismatch between the actual 2-D PSF and the adopted model 2-D PSF can then cause the value of the reduced $\chi^2_{r}$ to be much greater than unity simply due to the adoption of a model PSF different from the actual PSF. We accounted for such errors in the model PSF by artificially increasing the adopted noise (above Poisson) associated with the number of photons per pixel. We set a lower bound on the uncertainty in the noise at 20\% of the number of detected photons per pixel. 
Thus, in Equation \ref{eq:chisquare}, $\sigma_i$ is the greater of $\sqrt{(0.2 \cdot S_i)^2 +RMS^2}$ and $\sqrt{S_i +RMS^2}$. While approximate, this floor in the uncertainty ensured that even an observed laser line having twice the FWHM in both the spatial and wavelength directions as the adopted PSF model, along with $\rm S/N = 100$, would be detected. For laser signals exceeding $\rm S/N = 10$, we set a new $\chi^{2}_{r}$ threshold of $\chi^{2}_{r} < 10$. While this threshold is high in light of the lower bound on $\sigma_i$, the relative paucity of postage stamps with $\rm S/N > 10$ resulted in a manageable number of candidates. The advantage is that prospective laser emission lines containing hundreds or thousands of photons will be identified by our search algorithm, even if the FWHM of the PSF model is wrong by a factor of two. That is, strong laser lines will be detected independent of the integrity of the 2-D PSF. \section{Results} \label{sec:results} Using the algorithm described in Section~\ref{sec:algorithm}, we searched 2796 target stars for laser emission in the wavelength range, 3640-7890 \AA \, located in the ``sky'' region between 2-7 arcsec from the star image along the entrance slit. Two criteria defined a detection of a candidate laser line, as described in Section~\ref{sec:algorithm}. The signal-to-noise ratio of the photons (above background noise sources) had to meet the threshold, S/N$>$7.0, corresponding to $\sim$100 photons collected within the 2-D PSF of the Keck-HIRES spectrometer. Also, the goodness-of-fit to the 2-D PSF in the wavelength and spatial directions had to meet $\chi^{2}_{r} <$ 2.5. Using these thresholds, we found 10,155 candidate laser emission lines with S/N between 7 and 10, and another 5449 candidates with S/N above 10. Of those with S/N $<$ 10, 3570 candidates had $\chi^{2}_{r} < 2.0$, indicating an acceptable match with the 2-D PSF. 
The ensemble of 15,604 laser-line candidates was subsequently analyzed by eye to rule out those clearly inconsistent with a 2-D PSF shape. We rejected candidate laser lines that were clearly caused by instrumental artifacts, such as the location of the laser line at the edge of a spectral order or the CCD detector, flaws in the optics of the spectrometer (i.e. internal reflections, 2nd-order light from the cross disperser), flaws in the CCD detector (``hot'' pixels), or bleeding of charge on the CCD. We also rejected candidate laser lines as false positives due to atmospheric effects such as night sky emission or to astrophysical effects (spatially extended nebulae). This rejection process left a mere eight surviving candidate lasers, all consistent with the 2-D PSF and thus potentially true laser emission from unresolved sources. Each of these eight surviving laser candidates was analyzed again, even more carefully, by visually inspecting the raw CCD image. We looked for patterns of similar signals scattered spatially in some linear or periodic fashion, either among neighboring spectral orders or along the wavelength direction (within a few hundred pixels). Such regular patterns are caused by internal reflections or ``ghosts'' within the spectrometer (see Figure 4). These instrumental effects can change from night to night due to slight repositioning of the optics. Only by viewing the candidate laser line at several levels of magnification (number of pixels in the field of view) was it possible to see the instrumental pattern. In the case of HIP94931, of which we have more than 160 spectra, nearly PSF-shaped candidates appeared and disappeared from observation to observation, and only by noting a set of collinear faint dots from order to order was it possible to rule them out as candidate lasers (see Figure \ref{fig:hip}). This is an example of one of the ``ghosts'' described above. 
In addition, though the area between the orders was being ignored, in a few cases the second order UV light from the cross disperser fell right between the orders in the red CCD (at exactly 2x the wavelength). Our code was fooled by such 2nd-order light in a few cases, as shown in Figure \ref{fig:hip}. We ruled out seven of the eight laser candidates as artifacts, described above. Also, for seven of the eight stars, we had obtained multiple spectra. This allowed us to examine the same pixel location (i.e. wavelength) to see if the laser emission occurred in earlier or later exposures. Such examinations allowed us to identify artifacts that were instrumental, even if recurring. Finally, we also ruled out one laser candidate that was simply a spectrum of a known planetary nebula angularly nearby the star being observed. Thus, none of the eight candidates survived this more careful examination. \it Therefore, our search of 14,380 spectra of 2796 different stars did not reveal any convincing evidence of laser emission between 2 and 7 arcsec from the target star.\rm \section{Detection Thresholds of Laser Emission} \label{sec:sens} We now translate our detection thresholds of laser lines to the photon fluxes at Earth, and to the luminosities of the lasers themselves, that would have been detected. Our imposed threshold of S/N$>$7 sets the limit on the strengths of the signals we deem candidate laser emission. We use Equation \ref{eq:s/n} to compute the S/N ratio of the prospective laser signal, accounting for both the Poisson fluctuations in the candidate laser photons and the fluctuations in the background noise. This S/N threshold of 7 requires that $\sim$100 photons from the laser ($I$ in Equation \ref{eq:s/n}) be acquired during the exposure to be deemed a candidate laser signal. The value of 100 photons varies by $\approx10\%$ in different spectral regions and different observations, depending on the specific background noise level from scattered light and the sky. 
We compute the corresponding laser flux at the Earth by considering the effective collecting area of the Keck 1 telescope, 76 m$^2$, and the efficiency of the telescope and HIRES spectrometer, 5\%, which depends on the seeing and includes photon losses in the telescope optics, the spectrometer, the entrance slit to the spectrometer, and the quantum efficiency of the CCD detector \citep{Griest2010, Vogt1994}. Thus, to detect the threshold 100 photons at the CCD detector requires that 26.3 photons per square meter fall on the primary mirror during the exposure. We now consider exposure times that represent those used to obtain the spectra in this study, ranging from 1 min (for Vmag = 7) to 45 min (for Vmag = 13). (Note that the S/N per pixel in the stellar spectrum achieved at Vmag = 13 is less than that achieved at Vmag = 7.) For exposures of 1 min, the threshold photon flux from the laser required to detect 100 photons is 0.44 photons m$^{-2}$ s$^{-1}$. For exposures of 45 min, the threshold photon flux to achieve 100 detected photons is 0.010 photons m$^{-2}$ s$^{-1}$, a remarkably low flux. One may easily write these photon flux thresholds, $F$, in terms of the power of the laser, $P$, the laser light frequency, $\nu$, and the distance, $d$, from the laser to Earth: \begin{equation} F = \frac{P}{\pi h \nu \big[\theta d/2 \big]^2} \end{equation} Here, $\theta$ is the full opening angle of the diverging laser beam (in radians). As a benchmark example, we consider ``Keck-to-Keck'' laser transmission and reception. We consider a 10m diameter, diffraction-limited laser transmitter (in vacuum, outside any atmosphere) detected by the Keck 10m telescope and HIRES spectrometer. We adopt the usual Rayleigh criterion for the beam divergence angle, $\theta = 1.22 \lambda/D$. Here, $\lambda$ is the laser wavelength and $D$ is the diameter of the diffraction-limited laser emitter. 
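The threshold fluxes quoted above follow from the stated collecting area and efficiency by simple arithmetic (all values taken from the text):

```python
# Arithmetic behind the quoted flux thresholds.
AREA = 76.0          # m^2, effective Keck 1 collecting area
EFFICIENCY = 0.05    # telescope + spectrometer + CCD
THRESHOLD_PHOTONS = 100.0

flux_on_mirror = THRESHOLD_PHOTONS / (AREA * EFFICIENCY)  # ~26.3 photons/m^2
flux_1min = flux_on_mirror / 60.0           # ~0.44 photons m^-2 s^-1
flux_45min = flux_on_mirror / (45 * 60.0)   # ~0.010 photons m^-2 s^-1
```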
A diffraction-limited laser itself has a narrower beam ``waist'', leading to an intensity pattern having a characteristic angular beam size given by $\theta = (2/\pi) \lambda/D$. For $\lambda$ = 550 nm, near the middle of our wavelength sensitivity, and a 10m emitting aperture, the beam divergence angle is $0\farcs014$. It is worth noting that such a laser concentrates its power into a beam so narrow that its flux at Earth is 10$^{15}\times$ brighter than that of an isotropic source of the same power at the same distance. Moreover, its monochromaticity delivers that energy within a narrow wavelength range, $\sim$10$^{-4}$ of the wavelength width of the typical stellar spectral energy distribution. Thus, laser transmission from a 10-meter aperture achieves a detectability boost over a stellar spectrum of a factor of 10$^{19}$. For example, as Sun-like stars have luminosities of 3.8 $\times 10^{26}$ W, a diffraction-limited laser with a power of only 4$\times$10$^7$ W will deliver a light flux (at its wavelength) outshining the host star. But, the laser must be aimed at the Earth. To continue the benchmark example, we adopt a laser power similar to that of existing laser guide star systems, which is 7 W at the Keck Observatory (albeit at a wavelength of 588.995 nm). We consider a benchmark distance of 10 ly, representative of the nearest dozen stars. A 60s integration time (typical of our exposures of nearby stars) yields $\sim$160 photons detected by our CCD, implying a S/N ratio of 9.0-11.5 depending on seeing and local background noise RMS. Thus, we would detect a 7 W laser located at a distance of 10 light years, as it would sit well above our detection threshold of S/N=7. As mentioned in Section \ref{sec:HIRES_setup}, the exposure times of the two major populations of target stars depended on visual magnitude. 
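The Keck-to-Keck benchmark can be sketched from the photon-flux equation above; the exact photon count depends on the divergence convention adopted (Rayleigh $1.22\lambda/D$ versus the beam-waist $2\lambda/\pi D$), so this reproduces the quoted $\sim$160 photons only to within a factor of order unity:

```python
# Photons detected from a distant CW laser, using F = P/(pi h nu (theta d/2)^2)
# from the text, times collecting area, efficiency, and exposure time.
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
LY = 9.461e15   # meters per light year

def photons_detected(power_w, wavelength_m, emitter_diam_m, distance_m,
                     exposure_s, area_m2=76.0, efficiency=0.05):
    nu = C / wavelength_m
    theta = 1.22 * wavelength_m / emitter_diam_m   # Rayleigh divergence
    flux = power_w / (math.pi * H * nu * (theta * distance_m / 2.0) ** 2)
    return flux * area_m2 * efficiency * exposure_s

# 7 W laser at 589 nm from a 10 m emitter, 10 ly away, 60 s exposure:
n_photons = photons_detected(7.0, 589e-9, 10.0, 10 * LY, 60.0)
```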
For stars brighter than Vmag = 10, representative of the nearby star target population, we stop exposures when the exposure meter attains 250,000 ``counts'' (on an arbitrary scale). We stop exposures for fainter stars, such as the {\it Kepler} stars, when the exposure meter attains 60,000 counts. As the majority of targets are main sequence FGKM stars, the laser power from {\em Kepler} stars would need to be about four times as high as from the nearby stars to be detected, i.e., the greater (45 min) exposure times for the {\it Kepler} stars are still $\sim$4x too small to make up for their greater distance. We consider here the laser power required to reach a threshold S/N ratio of 7, requiring 100 photons as previously discussed, for the two representative populations of stars. The average nearby star in our survey resides at a distance of typically 30 pc = 100 ly and the typical exposure time is 5 min. For these nearby stars, the required laser power for a threshold detection is $\sim$90 W. The {\em Kepler} target stars reside at a distance of $\sim$300 pc and the typical exposure time is 45 min. Thus, for the {\it Kepler} stars, the required laser power for a threshold detection is $\sim$1 kW. Our initial pass at detecting laser lines suffered from an upper bound in detectability at a S/N ratio of 400, due to saturation of the CCD in some pixels at this exposure level. We overcame this by carrying out a second pass permitting any photon level, including saturation. No such saturated laser line candidates were found. \section{Discussion} \label{sec:Disc} The power, beam dispersion, and distance to a continuous-wave laser determine its detectability with the HIRES spectrometer on the Keck 1 telescope for the 2796 stars observed here. A transmitting extraterrestrial civilization could produce a detectable laser signal by any combination of the size of their diffraction-limited optics and the power of their laser. 
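The $\sim$90 W and $\sim$1 kW figures quoted above can be checked by inverting the photon-flux equation at the 100-photon threshold; the 10 m diffraction-limited emitter and 550 nm wavelength are illustrative assumptions carried over from the Keck-to-Keck benchmark:

```python
# Required laser power for a threshold detection (100 photons on the CCD)
# at a given distance and exposure time. Constants in SI units.
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
PC = 3.086e16   # meters per parsec

def required_power_w(distance_m, exposure_s, wavelength_m=550e-9,
                     emitter_diam_m=10.0, area_m2=76.0, efficiency=0.05,
                     n_photons=100.0):
    nu = C / wavelength_m
    theta = 1.22 * wavelength_m / emitter_diam_m      # Rayleigh divergence
    flux_needed = n_photons / (area_m2 * efficiency * exposure_s)
    return flux_needed * math.pi * H * nu * (theta * distance_m / 2.0) ** 2

p_nearby = required_power_w(30.0 * PC, 5 * 60.0)    # ~100 W (text: ~90 W)
p_kepler = required_power_w(300.0 * PC, 45 * 60.0)  # ~1 kW
```

The small offset from the quoted $\sim$90 W reflects the divergence convention and rounding, not a different method.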
Narrower laser beams concentrate their intensity but require correspondingly more precise pointing and tracking for the beam to intercept our telescopes. The Keck-to-Keck case considered above demands laser pointing accuracy of $\sim$10 milliarcseconds, set by the diffraction-limited beam size, a pointing accuracy achieved with current spaceborne telescopes and avionics. Advanced civilizations could presumably have similar, if not better, pointing ability and laser technology. Our Keck-HIRES experiment achieves sensitivity to remarkably low photon fluxes of 0.4 m$^{-2}$ s$^{-1}$ in a 1 minute exposure and proportionally lower thresholds for longer exposures. There are many configurations of the laser source parameters that would allow for detectability at Keck with required laser power at kilowatt levels from distances over 1000 light years. At such distances, extinction of optical light due to interstellar dust is only a few tenths of a magnitude in V band, decreasing the laser fluxes at Earth by no more than 50\%. Thus, extraterrestrial lasers of kilowatt power easily permit detection from distances of 1000 light years. Such kW laser power is routinely achieved with present technology. The Keck 1 Laser Guide Star AO system has a power of 7 W, only an order of magnitude too weak to be detectable at the typical distances of the nearest 1000 stars, even if the beam width were diffraction limited. But many current lasers are far more powerful, easily detectable if emitted from the distances of our 2796 target stars. Commercially available YAG (yttrium aluminium garnet) lasers have a power of 125 W operating at a wavelength of 532 nm, easily detected by Keck-to-Keck transmission and reception \citep{Laserfabrik}. 
Similarly, the US Navy has deployed 100 kW solid-state chemical iodine lasers working in the near-IR for use on combat vessels, which, while not in the wavelength band explored here, would be detectable at Earth with even modest beam width and pointing accuracy \citep{lockheed2014}. Considering the technical advancement another civilization might achieve relative to our own over just 100-1000 years, the required kW levels of laser power seem well within expectation. We may compare our current technique for laser detection with past searches for optical pulsed lasers (OSETI). Such searches are sensitive to very short pulses consisting of several photons during a nanosecond pulse time scale. Their domain of strength over our method is in the detection of isolated optical laser pulses that are briefly (during a few ns or $\mu$s) brighter than the host star. As a modern example, the proposed NIROSETI experiment \citep{Maire2014, Wright2014_S} at the Lick Observatory 1-meter Nickel telescope will be sensitive to 40 photons in the near-IR per square meter arriving in a pulse of width 0.5 ns. At Keck, a single such pulse falling on the Keck telescope during an exposure yields $\sim$150 photons incident on the HIRES CCD detector, a greater number of photons than with NIROSETI due to the large aperture of the Keck telescope. The $\sim$150 photons from just one ns pulse are easily detectable with our method, provided the signal does not have to compete with the star's flux. If the telescope picks up many such pulses, contained perhaps in a train, the laser signal would be even more detectable with our method. In a five minute exposure, pulse cadences of Hz, kHz, and MHz would strengthen the signal as detected by our method, due to the multitude of photons. One caveat in our work bears emphasis. Our examination of 2796 stars for unresolved laser emission was done by avoiding examination of the sky region within 2 arcsec of the star itself. 
We explicitly searched for laser signals that were resolved spatially from the target star, given the typical seeing profile having FWHM of $\sim$1 arcsec. The actual separation necessary to resolve the laser line ranged from approximately 5 pixels (2 arcsec) from the center of the stellar spectrum, to as many as 10 or 12 pixels (3-4 arcsec) on nights of exceptionally poor seeing. For each spectrum of each star, the spatial width of the spectrum determines the closest angular separation from the star, typically 2-3 arcsec. Clearly this ``inner working angle'' of 2-3 arcsec constrains the physical distance of any detectable lasers from the target star, typically 10-100 AU for the nearest stars. Similarly we can detect lasers located 2000-7000 AU from the {\it Kepler} stars. This restriction on the distance of the laser from the star bears directly on the 10 light-year Keck-to-Keck thought experiment described above. An Earth-analog planet hosting a powerful laser located 1 AU from any target star would reside angularly within the inner working angle of all 2796 of our target stars, leaving such lasers undetected in this current search. In effect, this current search is sensitive to lasers that are associated with technological constructs located many AUs from a star. In a following paper, we will report on a search for laser emission coming from {\em within} the current 2 arcsec inner working angle, i.e. spatially coincident with the stars themselves. The corresponding flux thresholds for detection of lasers will be higher (worse) by roughly two orders of magnitude due to the Poisson fluctuations of the $\sim$10$^4$ photons per pixel. Similar searches of stellar spectra for laser emission can also be carried out on other existing spectroscopic surveys, such as the Sloan Digital Sky Survey. The technique will work equally well in the infrared, with the added bonus of lessened extinction from dust in the Galaxy. 
Such extant spectroscopy surveys may lack the resolution to demonstrate the monochromatic nature of the laser emission. However, follow-up observations of compelling candidates could be performed with high resolution spectroscopy to support or reject the interpretation of laser emission. A robust toolset able to search for laser lines in a variety of different telescope spectra would be a useful addition to the SETI community at large. \section{Acknowledgments} We thank the John Templeton Foundation for funding this research. We thank John Johnson and Andrew Howard for the Keck Telescope time awarded to Caltech and to the University of Hawaii and the Institute for Astronomy (IfA), allowing many of these spectra to be obtained. We thank Howard Isaacson for enormous help with the observations, reduction, and organizing target lists. G. Marcy, the Alberts Chair at UC Berkeley, would like to thank Marilyn and Watson Alberts for funding and support that made this research possible. The spectra presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The University of California Observatories (UCO) provided key support of HIRES and other instrumentation at the Keck Observatory. We thank the state of California and NASA for funding much of the operating costs of the Keck Observatory. The construction of the Keck Observatory was made possible by the generous financial support of the W. M. Keck Foundation. We thank the many observers who contributed to the measurements reported here. We gratefully acknowledge the efforts and dedication of the Keck Observatory staff, especially Scott Dahm, Hien Tran, Grant Hill and Greg Doppmann for support of HIRES, and Greg Wirth, Bob Goodrich, and Bob Kibrick for support of remote observing.
This work made use of the SIMBAD database (operated at CDS, Strasbourg, France) and NASA's Astrophysics Data System Bibliographic Services. This research has made use of the Kepler Community Follow-up Observing Program (CFOP) database and the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. The authors wish to extend special thanks to those of Hawai`ian ancestry on whose sacred mountain of Mauna Kea we are privileged to be guests. Without their generous hospitality, the Keck observations presented herein would not have been possible. We thank Dan Werthimer, Andrew Siemion, Jill Tarter, Frank Drake, Jason Wright, Lucianne Walkowicz, Shelley Wright, John Gertz, Andy Fraknoi, David Brin, Charlie Townes, Mike Garrett, Amy Reines, and Phil Lubin for valuable conversations about optical SETI. We also thank the ``Berkeley SETI Research Center" (BSRC) and the ``Foundation for Investing in Research on SETI Science and Technology'' (FIRSST) for ideas and support toward the future of SETI research. \bibliographystyle{apj}
\section{Introduction} As the title suggests, the goal of this paper is to report on some aspects of certain problems in the sub-Riemannian CR and quaternionic contact (QC) geometries. It seems appropriate in lieu of an extensive Introduction to begin with a section about the corresponding problems in the Riemannian case. Besides an introduction to the discussed problems we give key steps of the proofs of some well known results, highlighting ideas which can be used, although with a considerable amount of extra analysis, in the sub-Riemannian setting. In the later sections we show the difficulties and current state of the art in the corresponding results on CR and quaternionic contact manifolds. However, this article is not designed to be a complete survey of the subjects, especially in the case of the Yamabe problem, but rather a collection of particular results with which we have been involved directly, while giving references to important works in the area, some of which are covered in this volume. \begin{conv}\label{convention} A convention due to tradition: when considering eigenvalue problems, it is more convenient to use the non-negative (sub-)Laplacian. Correspondingly, $\triangle u=-tr^g(\nabla^2 u)$ for a function $u$ and metric $g$. On the other hand, the (sub-)Laplacian appearing in the Yamabe problem is the ``usual'' negative (sub-)Laplacian $\triangle u=tr^g(\nabla^2 u)$. \end{conv} \section{Background - The Riemannian problems} The only new result here is Proposition \ref{p:Obata Einstein}. This fact is exploited later in a new simplified proof of the Obata type theorem in the qc-setting. Interestingly, the CR case presents another type of behavior. \subsection{The Lichnerowicz and Obata first eigenvalue theorems}\label{ss:Riem Lich&Obata} The relation between the spectrum of the Laplacian and geometric quantities has been a topic of continued interest.
One such relation was given by Lichnerowicz \cite{Li}, who showed that on a compact Riemannian manifold $(M,g)$ of dimension $n$ for which the Ricci curvature satisfies $Ric(X,X)\geq (n-1)g(X,X)$, the first positive eigenvalue $\lambda_1$ of the (positive) Laplace operator $\triangle f=-tr^g\nabla^2f$ satisfies the inequality $\lambda_1\geq n$. Here $\nabla$ is the Levi-Civita connection of $g$. In particular, $n$ is the smallest eigenvalue of the Laplacian on compact Einstein spaces of scalar curvature equal to $n(n-1)$, the scalar curvature of the round unit sphere. Subsequently, Obata \cite{O3} proved that equality is achieved iff the Riemannian manifold is isometric to the round unit sphere. It should be noted that the smallest possible value $n$ is achieved on the round unit sphere by the restrictions of the linear functions to the unit sphere (spherical harmonics of degree one), which give the associated eigenspace. Later, Gallot \cite{Gallot79} generalized these results to statements involving the higher eigenvalues and corresponding eigenfunctions of the Laplace operator. The results of Lichnerowicz and Obata described above, which we want to discuss in detail, are summarized in the next theorem. \begin{thrm}\label{t:Riem LichObata} Suppose $(M,g)$ is a compact Riemannian manifold of dimension $n$ which satisfies the positive lower Ricci bound \begin{equation}\label{e:Ricci lower bound} Ric(X,X)\geq (n-1)g(X,X). \end{equation} \begin{enumerate}[a)] \item If $\lambda$ is a non-zero eigenvalue of the (positive) Laplacian, $\triangle f=\lambda f$, then $\lambda \geq n$, see \cite{Li}. \item If $\lambda =n$ is an eigenvalue, then $(M,g)$ is isometric to the round sphere $S^n(1)$, see \cite{Ob}. \end{enumerate} \end{thrm} Let us briefly sketch the proof of Theorem \ref{t:Riem LichObata}, including a new observation, Proposition \ref{p:Obata Einstein}, which will be exploited in the sub-Riemannian setting.
The key to Lichnerowicz' inequality is Bochner's identity ($\triangle\geq 0)$, \begin{equation}\label{e:RBochner} -\frac12\triangle |\nabla f|^2=|\nabla df|^2-g(\nabla(\triangle f),\nabla f)+Ric(\nabla f,\nabla f). \end{equation} After an integration over the compact manifold we find \[ 0=\int_M |(\nabla df)_0|^2+\frac {1}{n}(\triangle f)^2-g(\nabla(\triangle f),\nabla f)+Ric(\nabla f,\nabla f)\, {dvol_g}. \] Let us assume at this point the inequality $Ric(\nabla f,\nabla f)\geq (n-1)|\nabla f|^2$ for any eigenfunction $f$, $\triangle f=\lambda f$. We obtain then the inequality \begin{multline*}{0=\int_M |(\nabla df)_0|^2+\frac {1}{n}\lambda|\nabla f|^2-\lambda|\nabla f|^2+Ric(\nabla f,\nabla f)\, {dvol_g}}\\ { = \int_M |(\nabla df)_0|^2\, {dvol_g} +\int_M Ric(\nabla f,\nabla f) -\frac {n-1}{n}\lambda|\nabla f|^2\, {dvol_g}}\\ { \geq \int_M |(\nabla df)_0|^2 \, {dvol_g} +\frac {n-1}{n}\int_M (n-\lambda)|\nabla f|^2\, {dvol_g}.} \end{multline*} { Hence $(\nabla df)_0=0$ and $0\geq n-\lambda$}, which proves Lichnerowicz' estimate. Furthermore, if the lowest possible eigenvalue is achieved then the trace-free part of the Riemannian Hessian of an eigenfunction $f$ with eigenvalue $\lambda=n$ vanishes, i.e., it satisfies the system \begin{equation} \label{e:Riem Hess eqn} \nabla^2 f = -fg. \end{equation} Obata's result which describes the case of equality was preceded by several results where the case of equality was characterized under the additional assumption that $g$ is Einstein \cite{YaNa59} or has constant scalar curvature \cite{IshTa59}. It turns out that besides Obata's proof these assumptions can also be removed as we found in Proposition \ref{p:Obata Einstein}. Nevertheless, even under the assumption that $g$ is Einstein the proof that $(M,g)=S^n$ requires further delicate analysis involving geodesics and the distance function from a point. 
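As a concrete check of the equality case, one can verify symbolically that on the round sphere $S^2$ a degree-one spherical harmonic attains the eigenvalue $n=2$. A minimal sympy sketch (the spherical-coordinate form of the Laplace-Beltrami operator is written out by hand):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
f = sp.cos(theta)   # a linear coordinate function restricted to S^2

# positive Laplace-Beltrami operator on the round S^2 in spherical coordinates
lap_f = -(sp.diff(sp.sin(theta) * sp.diff(f, theta), theta) / sp.sin(theta)
          + sp.diff(f, phi, 2) / sp.sin(theta) ** 2)

assert sp.simplify(lap_f - 2 * f) == 0   # eigenvalue lambda = n = 2
```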
Furthermore, Obata showed in fact a more general result, namely, on a \emph{complete} Riemannian manifold $(M,g)$ equation \eqref{e:Riem Hess eqn} above admits a non-constant solution iff the manifold is isometric to the round unit sphere $S^n$. \begin{rmrk}\label{r:sphere charcat} A good reference for Hessian equations characterizing the spaces of constant curvature is \cite{Kuh88}. For example, if $(M,g)$ is a compact Riemannian manifold admitting a non-constant solution to $\nabla^2 f = \frac {\triangle f}{n} g$ then $(M,g)$ is conformally diffeomorphic to the unit round sphere. Furthermore, if the scalar curvature of $(M,g)$ is constant then $(M,g)$ is isometric to a Euclidean sphere of a certain radius. \end{rmrk} Thanks to the Bonnet-Myers and S.-Y. Cheng's improved Toponogov theorems we can sketch the proof of this fact as described in detail in \cite[Chapter III.4]{Chav84}. First, we note that assuming $(M,g)$ is complete and satisfies \eqref{e:Ricci lower bound} we have \begin{enumerate}[(i)] \item (Bonnet-Myers) $M$ is compact, the diameter $d(M)\leq \pi$ and $\pi_1(M)$ is finite; \item (improved Toponogov theorem) $d(M)= \pi$ iff $M$ is isometric to $S^n(1)$, \cite{Cheng75}. \end{enumerate} The Hessian equation \eqref{e:Riem Hess eqn} implies that if $\gamma(t)$ is a unit speed geodesic we have $(f\circ\gamma)''+f\circ\gamma=0$, hence $f(\gamma(t))=A\cos t +B\sin t$ for some constants $A$ and $B$. Let $p\in M$ be such that $f(p)=\max_M f$, which exists since $M$ is compact. For any unit tangent vector $\xi\in T_p(M)$ the unit speed geodesic $\gamma_\xi(t)$ from $p$ in the direction of $\xi$ satisfies $f(\gamma_\xi(t))=f(p)\cos t$ since the derivative at $t=0$ is zero. Therefore, $f(\gamma_\xi(t))$ is injective for $0\leq t\leq \pi$, which implies $d(M)\geq \pi$. This shows that $d(M)=\pi$ and by Cheng's theorem we conclude $M=S^n$.
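The ODE step in the sketch can also be checked mechanically: with the stated initial conditions at the maximum point, the solution of $(f\circ\gamma)''+f\circ\gamma=0$ is forced to be $f(p)\cos t$. A sympy confirmation:

```python
import sympy as sp

t, A = sp.symbols('t A', real=True)
f = sp.Function('f')

# (f o gamma)'' + (f o gamma) = 0 with f(0) = max f = A and f'(0) = 0
sol = sp.dsolve(sp.Eq(f(t).diff(t, 2) + f(t), 0), f(t),
                ics={f(0): A, f(t).diff(t).subs(t, 0): 0})

assert sp.simplify(sol.rhs - A * sp.cos(t)) == 0   # f(gamma(t)) = f(p) cos t
```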
\begin{rmrk} We remark explicitly that the above approach to Obata's theorem cannot be used in sub-Riemannian settings, where both (i) and (ii) are very challenging open problems, with the exception of some results generalizing (i) in some special cases, see Section \ref{ss:sub-Riemannian compariosn note}. \end{rmrk} We now turn to our result mentioned above. \begin{prop}\label{p:Obata Einstein} Suppose $(M,g)$ is a compact Riemannian manifold of dimension $n$ which satisfies \eqref{e:Ricci lower bound}. If the lowest possible eigenvalue is achieved, $\triangle f =nf$ for some function $f$, then $(M,g)$ is an Einstein space. \end{prop} \begin{proof} The proof follows from several calculations and a use of the divergence formula. By the proof of Lichnerowicz' estimate the eigenfunction $f$ satisfies \eqref{e:Riem Hess eqn}. Differentiating \eqref{e:Riem Hess eqn} and using Ricci's identity $\nabla^3f(X,Y,Z)-\nabla^3f(Y,X,Z)=-R(X,Y,Z,\nabla f)$ we find the next formula for the curvature tensor \begin{equation}\label{e:Obata Einstein 1} R(X,Y,Z,\nabla f)=df(X)g(Y,Z)-df(Y)g(X,Z). \end{equation} Taking a trace in the above formula we see \begin{equation}\label{e:Obata Einstein 2} Ric(X,\nabla f)=(n-1)df(X). \end{equation} A differentiation of \eqref{e:Obata Einstein 2} and another use of \eqref{e:Riem Hess eqn} gives \begin{equation}\label{e:Obata Einstein 4} (\nabla_X Ric)(Y,\nabla f)=fRic(X,Y)-(n-1)fg(X,Y). \end{equation} On the other hand, taking the covariant derivative of \eqref{e:Obata Einstein 1} and then using \eqref{e:Riem Hess eqn} for $\nabla_V(\nabla f)$, we obtain \[ (\nabla_V R)(Z,X,Y,\nabla f)=fR(Z,X,Y,V)-fg(V,Z)g(X,Y)+fg(V,X)g(Z,Y). \] Therefore, taking a trace, it follows that \begin{equation}\label{e:Obata Einstein 5} (\nabla^*R)(X,Y,\nabla f)=fRic(X,Y)-(n-1)fg(X,Y).
\end{equation} A substitution of \eqref{e:Obata Einstein 5} in the formula $(\nabla_Z Ric_0)(X,Y)=(\nabla_Z Ric)(X,Y)-\frac {1}{n}dS(Z)g(X,Y)$ with $Z=\nabla f$ gives the key identity \begin{multline}\label{e:Obata Einstein 6} (\nabla_{\nabla f} Ric_0) (X,Y)=2f Ric_0(X,Y) +\frac {2S}{n}fg(X,Y)-2(n-1)fg(X,Y)-\frac {1}{n}dS(\nabla f)g(X,Y). \end{multline} Hence, $L_{\nabla f}|Ric_0|^{2k}=4kf |Ric_0|^{2k}$. Integrating over the compact manifold $M$ with respect to the Riemannian volume we obtain \begin{multline*} \int_M |Ric_0|^{2k} f^2 \,{dvol_g}=\frac {1}{n}\int_M g(\nabla (|Ric_0|^{2k} f), \nabla f) \,{dvol_g}\\ =\frac {1}{n} \int_M |Ric_0|^{2k} |\nabla f|^2\,{dvol_g} + \frac {4k}{n}\int_M|Ric_0|^{2k} f^2 \,{dvol_g}. \end{multline*} Therefore, $$(n-4k)\int_M|Ric_0|^{2k} f^2 \,{dvol_g}=\int_M |Ric_0|^{2k} |\nabla f|^2\,{dvol_g},$$ hence choosing $k>n/4$ it follows that $Ric_0=0$, i.e., $g$ is an Einstein metric. \end{proof} It should be noted that \eqref{e:Riem Hess eqn} is obtained in connection with infinitesimal conformal transformations on Einstein spaces, see \eqref{e:inf conf v.f.}. Thus the unit sphere is characterized as the only complete Einstein Riemannian manifold of scalar curvature $n(n-1)$ admitting a non-homothetic conformal transformation, \cite{IshTa59}, \cite{O3}, \cite{Ta65}, \cite{YaNa59}; see also later in this Section for relations with the Yamabe problem on the Euclidean sphere. In addition, this result can be considered as a characterization of the unit sphere as a compact Einstein space which admits an eigenfunction belonging to the smallest possible eigenvalue of the Laplacian on a compact Einstein space. We remark that it is natural to consider characterizations of the unit sphere by the second eigenvalue $2(n+1)$ of the Laplacian.
In this case, the gradients of the corresponding eigenfunctions are infinitesimal projective transformations, which also gives a system of differential equations of order three satisfied by the divergence of an infinitesimal projective transformation on an Einstein space. Furthermore, it is shown that a complete Riemannian manifold admitting a non-trivial solution of this system is isometric to the unit sphere provided that the manifold is simply connected, \cite{EGKU03}, \cite{GKDU}, \cite{Bl75}, \cite{O65}, \cite{O3}. There are results in the K\"ahler case where an infinitesimal holomorphically projective transformation plays a role similar to that of a projective one on Riemannian manifolds, \cite{O65}. We shall seek a characterization of the model CR and qc unit spheres through the first eigenfunctions of the respective sub-Laplacians. \subsection{Conformal transformations} Let $(M,g)$ and $( M', g')$ be two Riemannian manifolds of dimension $n$. A smooth map $F:M\rightarrow M'$ is called a conformal map if $F^* g'=\phi^{-2}\, g$ for some smooth positive function $\phi$. For our goals we shall consider $(M,g)=(M',g')$, $F$ a diffeomorphism, and let $\bar g=F^* g'$. In this case, we say that $F$ is a conformal diffeomorphism, while the metrics $g$ and $\bar g$ are called (point-wise) conformal to each other. For $n\geq 3$, we shall need the following well known formulas relating the traceless Ricci and scalar curvatures of the metrics $g$ and $\bar g$, \begin{align}\label{e:Ric_o conf change} \overline {Ric}_0=Ric_0+(n-2)\phi^{-1}(\nabla^2\phi)_0\\ \label{e:s conf change} \bar S=\phi^2S+2(n-1)\phi\triangle \phi -n(n-1)|\nabla \phi|^2, \end{align} where $\nabla$ is the Levi-Civita connection of $g$, $\triangle \phi=tr^g\nabla^2\phi$, and $(\nabla^2\phi)_0$ is the traceless part of the Hessian of $\phi$. A conformal vector field on a compact Riemannian manifold $(M, g)$ is a vector field $X$ whose flow consists of conformal transformations (diffeomorphisms).
When the flow is a one-parameter group of isometries the vector field $X$ is called a Killing field. For $M$ compact, the algebra $\mathfrak{c}(M, g)$ of conformal vector fields is exactly the Lie algebra of the group of conformal diffeomorphisms $C(M, g)$ of $(M, g)$. It is worth recalling \cite{Ya57} and \cite{Li} that $X$ is a conformal vector field iff \begin{equation}\label{e:conf field} \mathcal{L}_X g = \frac {2}{n}(div_g X)g, \end{equation} where $\mathcal{L}_X$ is the Lie derivative operator and $div_g X$ is the divergence operator $(div_g X)\, vol_g = \mathcal{L}_X vol_g$ defined with the help of the Riemannian volume element $vol_g$ associated to $g$. In particular, a gradient vector field $X=\nabla \phi $ is an infinitesimal conformal vector field iff \begin{equation}\label{e:inf conf v.f.} (\nabla^2\phi)_0=0. \end{equation} A short calculation, see \cite{Ya57}, \cite{Li} or \cite[(1.11)]{YaOb70}, shows that if $X$ is a conformal vector field then \begin{equation}\label{e:lap of div conf v.f.} \triangle (div\, X)=-\frac {1}{n-1}(div\, X)S-\frac {n}{2(n-1)}X(S). \end{equation} \subsection{The Yamabe problem - Obata's uniqueness theorem} The Yamabe type equation has its origin in both geometry and analysis. Yamabe \cite {Y} considered the question of finding a conformal transformation of a given Riemannian metric on a compact manifold to one with constant scalar curvature, see also \cite{Tru} and \cite{A1,A2,A3,Au98}. When the ambient space is the Euclidean space $\mathbb{R}^n$, G. Talenti \cite{Ta} and T. Aubin \cite{A1, A2,A3} described all positive solutions of a more general equation, namely the Euler-Lagrange equation associated with the best constant, i.e., the norm, in the $L^p$ Sobolev embedding theorem. With the help of the stereographic projection, which is a conformal transformation, Yamabe's question for the standard round sphere turns into the $L^2$ case of Talenti's question.
The solution of these special cases is an important step in solving the general Yamabe problem, whose solution in the case of a compact Riemannian manifold was completed in the 80's after the work of T. Aubin and R. Schoen \cite{A1,A2,A3,Au98,Sch,Sch2,Sch89}, see also \cite{LP}. It should be noted that the ``solution'' used the positive mass theorem of R. Schoen and S.-T. Yau \cite{SchYa79}. An alternative approach was developed by A. Bahri \cite{Bahri}, where solutions of the Yamabe equation were obtained through ``higher'' energies of the Yamabe functional. As is well known, in general there is no uniqueness of the metric of constant scalar curvature within a fixed conformal class. However, with the exception of the round sphere, according to Obata's theorem uniqueness (up to isometry) holds in a conformal class containing an Einstein metric. \subsubsection{The Yamabe problem and functional}\label{ss:R Yamabe functional} Let $(M^n,\, g)$ be a compact Riemannian manifold of dimension $n$. The Yamabe problem is to find a metric $\bar g$ point-wise conformal to the Riemannian metric $g$ of constant scalar curvature $\bar S$. Clearly, this is a type of uniformization problem which is one generalization of the classical surface case. \subsubsection{Riemann surfaces - the 2-D case} In the 2-D case, where we are dealing with the uniformization of a closed orientable surface, if we set $\bar g = e^\phi g$, then the equation which needs to be solved is $$\triangle \phi -K=-\bar K e^\phi,$$ for some constant $\bar K$, where $K$ is the Gaussian curvature of $g$. By the Gauss-Bonnet formula $$2\pi\chi(M)=\int_M K \, dv_g,$$ which determines the sign of $\bar K$. By the \emph{uniformization theorem} applied to the universal cover $\hat M$ of $M$, $M$ is biholomorphic to $\hat M/G$ for some properly discontinuous subgroup $G$ of $Aut(\hat M)$.
Thus, depending on the sign, $(M,g)$ is a hyperbolic, parabolic, or elliptic Riemann surface, i.e., it is conformal to one of constant Gauss curvature $-1,\ 0,\ 1$. Explicitly, depending on its genus, $M$ is conformal (in fact biholomorphic) to a surface in one of the next three cases: \begin{enumerate} \item $\mathbb{H}/\Gamma$, for a properly discontinuous subgroup $\Gamma$ of $PSL(2,\mathbb{R})$, the automorphism group of the hyperbolic plane, when the genus of $M$ is at least two; \item $\mathbb{C}/\Lambda$, an elliptic curve, corresponding to a lattice $\Lambda=\{n_1\omega_1+n_2\omega_2\mid n_1, n_2\in \mathbb{Z}\}$, $\omega_1/\omega_2\notin \mathbb{R}$, when the genus of $M$ is one; \item $S^2$, of genus $0$. \end{enumerate} \subsubsection{The higher dimensional cases} For $n\geq 3$ such a complete picture is not possible. It is customary to take the conformal factor in a way which is best suited for the problem; accordingly, we begin with the form exhibiting the relation to the critical Sobolev exponent. As is well known, if we write the conformal factor in the form $\bar g=u^{4/(n-2)}\,g$ then the Yamabe problem becomes an existence problem for a positive solution to the Yamabe equation, see \eqref{e:s conf change}, \begin{equation}\label{e:Riem Yamabe} 4\frac {n-1}{n-2} \triangle u - {S}\cdot\,u \ =\ - \overline{{S}}\cdot\, u^{2^*-1}, \end{equation} where $\triangle u\ =\ \text{tr}^g (\nabla du)$, ${S}$ and $\overline{{S}}$ are the scalar curvatures of $g$ and $\bar g$, and $2^* = \frac {2n}{n-2}$ is the Sobolev conjugate exponent. The Yamabe problem \ref{ss:R Yamabe functional} is of variational nature, as we recall next. The critical points of the \emph{Einstein-Hilbert (total scalar curvature) functional} $$\Upsilon(\bar g)=\left ( \int_M \bar S\, dv_{\bar g}\right )/ \left (\int_M \, dv_{\bar g} \right )^{2/{2^*}}$$ are \emph{Einstein metrics}.
The \emph{Yamabe functional} is obtained by restricting $\Upsilon(\bar g)$ to the conformal class $[g]=\{\bar g=u^{4/(n-2)}g\mid\, 0<u\in \mathcal{C}^\infty(M)\}$ and defining (a conformally invariant functional) \begin{equation}\label{e:Yamabe Riem} \Upsilon_g (u) =\left ( \int_M 4\frac {n-1}{n-2}\ \abs{\nabla u}^2\ +\ {S}\, u^2\, dv_g\right )/ \left ( \int_M u^{2^*}\, dv_g\right)^{2/{2^*}}. \end{equation} The critical points, i.e., the solutions of $\frac {d}{dt} \Upsilon (u+t\phi)_{\vert_{t=0}}\ = \ 0, \quad \phi\in \mathcal{C}^\infty(M)$, are metrics of \emph{constant scalar curvature} (Yamabe metrics), since they are given by the solutions of \eqref{e:Riem Yamabe} with $\bar S$ the corresponding ``critical'' energy level. The Yamabe constant of $(M,g)$ is \begin{equation*} \Upsilon (M,[g])\equiv\Upsilon ([g]) =\ \inf \{ \Upsilon_g (u):\ u>0 \}. \end{equation*} The Yamabe invariant is the supremum $ \lambda(M)=\sup_{[g]} \Upsilon ([g])$. According to the results of Aubin and Talenti, for the round unit sphere $\Upsilon (S^n,[g_{st}])=n(n-1)\omega_n^{2/n}$. The existence of a Yamabe metric is the content of the next theorem, which collects a number of remarkable results, see for example \cite{LP} for a full account. \begin{thrm}[N. Trudinger, Th. Aubin, R. Schoen; A. Bahri] Let $(M^n, g)$, $n\geq 3$, be a compact Riemannian manifold. There is a $\bar g\in [g]$ such that $\bar S=const$. \end{thrm} The main steps in the proof of the above theorem are as follows, see \cite{LP,Au98} for a full account. \begin{itemize} \item We have $\Upsilon ([g]) \leq \Upsilon (S^n, [g_{st}])$. The Yamabe problem can be solved on any compact manifold $M$ with $\Upsilon ([g]) < \Upsilon (S^n, [g_{st}])$, see \cite{Y}, \cite{Tru}, and \cite{A2}. \item If $n \geq 6$ then $\Upsilon (S^n, [g_{st}])-\Upsilon ([g]) \geq c \norm {W^g}^2$, hence the Yamabe problem can be solved if $n\geq 6$ and $M$ is not locally conformally flat, see \cite{A2}.
\item If $3\leq n \leq 5$, or if $M$ is locally conformally flat, then the Yamabe problem has a solution since $\Upsilon (S^n, [g_{st}])- \Upsilon ([g]) \geq c m_o$, where $m_o$ is the mass of a one point blow-up (stereographic projection) of $M$, see \cite{Sch}. \item If $M$ is locally conformally flat, a critical point of the Yamabe functional exists (which may be of higher than $\Upsilon (M,[g])$ energy), see \cite{Bahri}. \end{itemize} Given the above existence of a Yamabe metric on every compact Riemannian manifold, it is natural to study the question of uniqueness. When $\Upsilon (M,[g])\leq 0$ the Yamabe metric is unique in its conformal class, as implied by the maximum principle. However, for $\Upsilon (M,[g])>0$ this is no longer true. An example of non-uniqueness is provided by the round unit sphere, as described next in \ref{ss:Obata uniqueness}. Another example was given in \cite{Sch89} on $S^1(R)\times S^{n-1}$. Remarkably, in the case of the sphere the set of solutions is non-compact (in the $\mathcal{C}^2$ topology), see further below, and it was conjectured in \cite{Sch89} that this is the only case; this became known as the compactness conjecture, see the review \cite{BrMa11} for further details and references. In short, the conjecture is true in the following cases: \begin{itemize} \item for a locally conformally flat manifold (different from the round sphere), \cite{Sch91a} and \cite{Sch91b}; \item for $n\leq 7$, see \cite{LiZh05} and \cite{Ma05}; \item for $8\leq n\leq 24$ provided that the positive mass theorem holds (this covers all cases of a spin manifold), see \cite{LiZh05}, \cite{LiZh07} for $8\leq n\leq 11$ and \cite{KhMaSch09} for $12\leq n\leq 24$. \end{itemize} Furthermore, the conjecture is not true for $n\geq 25$, see \cite{Br08} and \cite{BrMa09}. Putting the compactness conjecture aside, we turn to the uniqueness result of Obata.
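Before doing so, we note that the non-compactness on the round sphere can be made explicit: pulled back to $\mathbb{R}^n$ by stereographic projection, the Yamabe equation with flat background reduces (up to the normalization of $\bar S$) to $-\triangle u = n(n-2)u^{(n+2)/(n-2)}$, which is solved by the Aubin-Talenti family $u_\lambda=\bigl(\lambda/(\lambda^2+|x|^2)\bigr)^{(n-2)/2}$; as $\lambda\to 0$ these solutions concentrate at a point. A sympy check for $n=3$:

```python
import sympy as sp

n = 3
xs = sp.symbols('x0:3', real=True)
lam = sp.symbols('lam', positive=True)
r2 = sum(x ** 2 for x in xs)

# Aubin-Talenti bubble; concentrates at the origin as lam -> 0
u = (lam / (lam ** 2 + r2)) ** sp.Rational(n - 2, 2)
lap_u = sum(sp.diff(u, x, 2) for x in xs)
residual = sp.simplify(-lap_u - n * (n - 2) * u ** sp.Rational(n + 2, n - 2))

assert residual.equals(0)   # -Laplacian(u) = n(n-2) u^((n+2)/(n-2))
```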
\subsubsection{Obata's uniqueness theorem for the Yamabe problem}\label{ss:Obata uniqueness} The main result here is that the conformal class of an Einstein metric on a connected compact Riemannian manifold $(M,\bar g)$ contains a unique Yamabe metric unless $M$ is the round unit sphere $S^n$ in $\mathbb{R}^{n+1}$. It should be noted that if $M$ is not conformal to the round sphere the Yamabe metrics are nondegenerate global minima of the Einstein-Hilbert functional. The structure of the set of Yamabe metrics in conformal classes near a nondegenerate constant scalar curvature metric was considered in the smooth case in \cite{Koiso79}. An extension of Obata's result to a local uniqueness result for the Yamabe problem in conformal classes near that of a nondegenerate solution was established in \cite{dlPZ12}. After this short background we turn to Obata's theorem. \begin{thrm}[\cite{Ob} and \cite{Ob72}]\label{t:Obata Yamabe} \begin{enumerate}[a)] \item Let $(M,\bar g)$ be a connected compact Riemannian manifold which is Einstein and $\bar g=\phi^{-2}g$. If $\bar S=S=n(n-1)$, then $\phi=1$ unless $(M,\bar g)=(S^n, g_{st})$. \item If $g$ is a Riemannian metric conformal to $g_{st}$, $g_{st}=\phi^{-2} g$, with scalar curvature $S=n(n-1)$, then $g$ is obtained from $g_{st}$ by a conformal diffeomorphism of the sphere, i.e., there is $\Phi\in \text{Diff}\,(S^n)$ such that $g=\Phi^* g_{st}$ and, up to an additive constant, $\phi$ is an eigenfunction for the first eigenvalue of the Laplacian on the round sphere. In particular, $\nabla\phi$ is a gradient conformal field and for some $t$ we have $\Phi=\exp(t\nabla \phi)$, the one-parameter group of diffeomorphisms generated by $\nabla\phi$. \end{enumerate} \end{thrm} \begin{proof} In the proof of part a) we use the argument of \cite{BoEz87} and \cite{LP}, which is very close to Obata's argument but uses the ``new'' metric as a background metric rather than the given Einstein metric.
Suppose $\bar g$ is Einstein, hence by \eqref{e:Ric_o conf change} we have $$0=\overline {Ric}_0= {Ric_0} + \frac {n-2}{\phi}(\nabla^2\phi)_0.$$ Therefore, $(\nabla^2\phi)_0\ = \ -\frac {\phi}{n-2}{Ric_0}.$ From the contracted Bianchi identity and $S$=const we have $\nabla^*Ric=\frac {1}{2}\nabla S =0,$ hence $\nabla^* \left ( Ric(\nabla \phi, . )\right) =(\nabla^* Ric) (\nabla \phi)+g(Ric,\nabla^2\phi)=\frac 12g(\nabla S, \nabla \phi) -\frac {\phi}{n-2}|Ric_0|^2. $ Integration over $M$ and an application of the divergence theorem shows that $g$ is \emph{also an Einstein metric}, $Ric_0 =0$. This implies $(\nabla^2\phi)_0=0$, hence $\nabla \phi$ is a gradient conformal vector field, see \eqref{e:inf conf v.f.}. Now, from \eqref{e:lap of div conf v.f.}, taking into account $S=n(n-1)$, it follows that $\triangle (\triangle\phi+n\phi)=0$, hence by the maximum principle we have $\triangle u=-nu$, where $u=\phi+a$ for some constant $a$. Notice that we also have $(\nabla^2u)_0=0$. Hence by Obata's result in the eigenvalue Theorem \ref{t:Riem LichObata} either $u=$const or $g$ is isometric to $g_{st}$ and $u$ is a restriction of a linear function to $S^n$, $u=(a_0x_0+\dots+a_nx_n)\vert_{S^n}$, which implies the claimed form of $\phi$. \end{proof} We note that in the case of the round sphere, once we have proved that $g$ is also Einstein, we can conclude that $g$ is isometric to $g_{st}$ since it is Einstein and conformally flat, $W=0$, see \cite{Kuh88}, \cite{KuRa09}, \cite{MRTZ09}, \cite{KiMa09} for further details and references on conformal transformations between Einstein spaces in a variety of spaces. Thus, there is an isometry $\Phi:(S^n,g)\rightarrow (S^n,g_{st})$, $\Phi^*g_{st}=\phi^{-2}g_{st}$, hence $\Phi\in C(S^n,g_{st})$. Part b) of the above theorem shows that $\Phi$ belongs to the largest connected subgroup of $C(S^n,g_{st})$ and determines the exact form of $\phi$.
The same conclusion can be reached with the help of the stereographic projection, which relates the analysis to Liouville's theorem and the best constant in the $L^2$ Sobolev embedding theorem in Euclidean space. In fact, using the stereographic projection we can reduce to a conformal map of the Euclidean space, which sends the Euclidean metric to an Einstein metric conformal to it. By a purely local argument, see \cite{Br25}, the resulting system can be integrated, in effect also proving Liouville's theorem, which gives the form of $\phi$ after transferring the equations back to the unit sphere. Such an argument was used in the quaternionic contact setting \cite{IMV} to classify all qc-Einstein structures on the unit $4n+3$ dimensional sphere (quaternionic Heisenberg group) conformal to the standard qc-structure on the unit sphere. We will come back to the qc-Liouville theorem later in the paper, see Section \ref{t:qcLiouville}. In any case, the key point here which will be used in the sub-Riemannian CR or QC setting is that Obata's argument shows the validity of a system of partial differential equations, namely, $(\nabla^2\phi)_0=0$, assuming \eqref{e:s conf change} holds with $\bar g$ being Einstein and $g$ of constant scalar curvature. On the other hand, using the stereographic projection, Yamabe's equation on the round sphere turns into \eqref{e:Riem Yamabe} for the Euclidean Laplacian with $S=0$ and $\bar S$=const after interchanging the roles of $g$ and $\bar g$, i.e., assuming that $g$ is the ``background'' standard constant curvature metric and $\bar g$ is the ``new'' metric conformal to $g$ of constant scalar curvature. This is nothing but the equation characterizing the extremals of the variational problem associated to the $L^2$ Sobolev embedding theorem. An alternative to Obata's argument is then the symmetrization argument (described briefly in Section \ref{s:FS inequality}).
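The stereographic picture also gives a quick symbolic sanity check of the conformal change formula \eqref{e:s conf change}: in the stereographic chart $g_{st}=\phi^{-2}\delta$ with $\phi=(1+|x|^2)/2$, the formula with flat background $S=0$ must return $\bar S=n(n-1)$. A sympy sketch for $n=3$:

```python
import sympy as sp

n = 3
xs = sp.symbols('x0:3', real=True)
r2 = sum(x ** 2 for x in xs)
phi = (1 + r2) / 2   # g_st = phi^(-2) * (Euclidean metric) on the stereographic chart

S_flat = 0                                         # flat background scalar curvature
lap_phi = sum(sp.diff(phi, x, 2) for x in xs)      # trace of the flat Hessian of phi
grad_phi2 = sum(sp.diff(phi, x) ** 2 for x in xs)

# conformal change formula for the scalar curvature
S_bar = phi ** 2 * S_flat + 2 * (n - 1) * phi * lap_phi - n * (n - 1) * grad_phi2

assert sp.simplify(S_bar - n * (n - 1)) == 0   # round sphere: S = n(n-1)
```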
\subsection{Sub-Riemannian comparison results and Yamabe type problems - a summary}\label{ss:sub-Riemannian compariosn note} The interest in relations between the spectrum of the Laplacian and geometric quantities motivated Lichnerowicz-Obata type theorems in other geometric settings, such as Riemannian foliations (and the eigenvalues of the basic Laplacian) \cite{LR98,LR02}, \cite{JKR11} and \cite{PP11}, CR geometry (and the eigenvalues of the sub-Laplacian) \cite{Gr}, \cite{Bar}, \cite{CC07,CC09a,CC09b}, \cite{ChW}, \cite{Chi06}, \cite{LL}, and general sub-Riemannian geometries, see \cite{Bau2} and \cite{Hla}. Complete results have been achieved in the settings of (strictly pseudoconvex) CR, \cite{Gr},\cite{CC09a,CC09b}, \cite{CC07}, \cite{Chi06},\cite{LW,LW1},\cite{IVO,IV3}, and QC, \cite{IPV1,IPV2,IPV3}, geometries, which shall be covered in Sections \ref{s:CR Lichnerowicz-Obata} and \ref{s:QC Lichnerowicz-Obata}. As far as other comparison results are concerned we mention (i) \cite{Ru94} for a Bonnet-Myers type theorem on general 3-D CR manifolds; (ii) \cite{Hughen}, where a Bonnet-Myers type theorem on a three dimensional Sasakian manifold was proved. Both of the above papers use analysis of the second-variation formula for sub-Riemannian geodesics. (iii) \cite{ChY} for isoperimetric inequalities and volume comparison theorems on CR manifolds. (iv) \cite{Bau2}, \cite{BauBonGarMun14}, \cite{BauKim14}, \cite{BauKimWa14}, \cite{BauWa14}, \cite{GrTh14a,GrTh14b}, where an extension to the sub-Riemannian setting of the Bakry-Emery technique based on curvature-dimension inequalities is used to obtain Myers-type theorems, volume doubling, Li-Yau, Sobolev and Harnack inequalities, and Liouville theorems. Such inequalities are usually obtained under a transverse symmetry assumption. The latter means that we are actually dealing with a Riemannian manifold with a bundle-like metric, foliated by totally geodesic leaves.
This condition is equivalent to vanishing torsion in the QC setting (the qc-Einstein condition) and is not very far from the Sasakian case (vanishing torsion) in the CR setting. (v) \cite{AgLee14}, \cite{AgBaRi14}, \cite{AgL11}, where sub-Riemannian geodesics and measure-contraction properties are used to establish for Sasakian manifolds results such as a Bishop comparison theorem, Laplacian and Hessian comparison, volume doubling, Poincar\'e and Harnack inequalities, and a Liouville theorem. (vi) \cite{Hla} for Lichnerowicz type estimates and Bonnet-Myers theorems in some special sub-Riemannian geometries. A variant of the Yamabe problem in the setting of a compact strictly pseudoconvex pseudohermitian manifold (called here simply a CR manifold) is the CR Yamabe problem, where one seeks, in a fixed pseudoconformal class of pseudo-Hermitian structures on a compact CR manifold, one with constant scalar curvature (of the canonical Tanaka-Webster connection). After the works of D. Jerison \& J. Lee \cite{JL1} - \cite{JL4} and N. Gamara \& R. Yacoub \cite{Ga}, \cite{GaY}, the solution of the CR Yamabe problem on a compact manifold is complete. The case of the standard CR structure on the unit sphere in $\mathbb{C}^n$ is equivalent to the problem of determining the best constant in the $L^2$ Folland \& Stein \cite{FS} Sobolev type embedding inequality on the Heisenberg group. The best constant in the $L^2$ Folland \& Stein inequality, together with the minimizers, were determined recently, by a method different from that of \cite{JL3}, by Frank \& Lieb \cite{FrLi}, see also \cite{BFM}. Nevertheless, this simpler approach \emph{does not} yield the uniqueness result of D. Jerison \& J. Lee. A positive mass theorem in the three dimensional case was proven recently in \cite{ChMaYa13}. In the other case of interest, the qc-Yamabe problem was studied in \cite{IMV, IMV1, IMV2} and \cite{Wei}.
According to \cite{Wei} the Yamabe constant of a compact qc manifold is less than or equal to that of the standard qc sphere. Furthermore, if the constant is strictly less than the corresponding constant of the sphere, the qc-Yamabe problem has a solution, i.e., there is a conformal 3-contact form for which the qc-scalar curvature is constant. The Yamabe constant of the standard qc structure on the unit $(4n+3)$-dimensional sphere was determined in \cite{IMV2} with the help of a clever center of mass argument, following in the footsteps of the CR case \cite{FrLi} and \cite{BFM}. However, due to the limitations of the method, \cite{IMV2} does not exclude the possibility that in the qc-conformal class of the standard qc structure there are qc Yamabe metrics of higher energies. The seven dimensional case was settled completely earlier in \cite{IMV1}. A conformal curvature tensor was found in \cite{IV}, which should prove useful in establishing existence of a solution to the qc-Yamabe problem in the qc locally non-flat case. Finally, we mention \cite{ChLZh1,ChLZh2}, where the sharp Hardy-Littlewood-Sobolev inequalities were developed in the quaternionic and octonionic versions of the approach found by Frank and Lieb. In particular, at this point the sharp constants in the Hardy-Littlewood-Sobolev inequalities on all groups of Iwasawa type are known. \section{The Folland-Stein inequality on groups of Iwasawa type}\label{s:FS inequality} We start by recalling the following embedding theorem due to Folland and Stein \cite{FS}. \begin{thrm}\label{T:Folland and Stein} Let ${\boldsymbol{G}}$ be a Carnot group of homogeneous dimension $Q$, with a fixed metric $g$ on the "horizontal" bundle spanned by the first layer and Haar measure $dH$. For any $1<p<Q$ there exists $S_p=S_p({{\boldsymbol{G}}})>0$ such that for $u\in C^\infty_o({\boldsymbol{G}})$ we have \begin{equation} \label{FS} \left( \int_{\boldsymbol{G}} \, |u|^{p^*}\, dH\right)^{1/p^*} \leq\ S_p\ \left(\int_{\boldsymbol{G}} |Xu|^p\, dH\right)^{1/p}, \end{equation} where $|Xu|=\left(\sum_{j=1}^m |X_ju|^2\right)^{1/2}$ with $X_1,\dots, X_m$ denoting an orthonormal basis of the first layer of ${{\boldsymbol{G}}}$ and $p^*= \frac {pQ}{Q-p}$. \end{thrm} In the case ${\boldsymbol{G}}=\mathbb{R}^n$ this embedding is nothing but the Sobolev embedding theorem. We insist on $X_1,\dots, X_m$ denoting an orthonormal basis of the first layer in order to have a well defined constant, which obviously depends on the chosen (left invariant) metric. For the sake of brevity we do not give the definition of a Carnot group, since our focus is on the particular case of groups of Iwasawa type, in which case there is a natural metric. Also, the case $p=1$, which we did not include above, is the isoperimetric inequality, see \cite{CDG2} for the proof in a much wider setting, which, as is well known \cite{FeFl60,Maz61}, see also \cite{Ta} and \cite{S-C02}, implies the whole range of inequalities \eqref{FS}. The most basic fact about the above inequality is its invariance under translations and dilations. The latter determines the relation between the exponents $p$ and $p^*$ appearing in the two sides. For a function $u\in C^{\infty}_o ({\boldsymbol{G}}) $ we let \begin{equation}\label{translation} \tau_h u\ \overset{def}{=}\ u\circ \tau_h, \quad\quad \quad h\in {\boldsymbol{G}} , \end{equation} where $\tau_h:{\boldsymbol{G}}\to {\boldsymbol{G}}$ is the operator of left-translation $\tau_h(g) = hg$, and also \begin{equation}\label{scaling} u_\lambda\ \equiv\ \lambda^{Q/p^*}\ \delta_\lambda u\ \overset{def}{=}\ \lambda^{Q/p^*}\ u\circ \delta_\lambda, \quad\quad\quad \lambda >0.
\end{equation} It is easy to see that the norms on the two sides of the Folland-Stein inequality are invariant under the translations (\ref{translation}) and the rescaling \eqref{scaling}. Let $S_p$ be the best constant in the Folland-Stein inequality, i.e., the smallest constant for which \eqref{FS} holds. Equality is achieved on the space ${\overset{o}{\mathcal{D}}}\,^{1,p}({\boldsymbol{G}})$, where for a domain $\Omega\subset {\boldsymbol{G}}$ the space ${\overset{o}{\mathcal{D}}}\,^{1,p}(\Om)$ is defined as the closure of $C_o^{\infty}(\Omega)$ with respect to the norm \begin{equation} \norm{u}_{{\overset{o}{\mathcal{D}}}\,^{1,p}(\Om)} = \left(\int_{\Omega} |Xu|^p dH\right)^{1/p}. \end{equation} This fact was proved in \cite{Va1} with the help of P.L. Lions' method of concentration compactness. The question of determining the norm of the embedding, i.e., the value of the best constant, is open with the exception of the Euclidean case and the $p=2$ case on the (two step Carnot) groups of Iwasawa type. In the Euclidean case, a symmetrization argument involving symmetric decreasing rearrangement, see \cite{Ta62}, can be used to show that equality is achieved for radial functions, which can be determined explicitly. As of now there is no such argument in the non-Euclidean setting, which to a large degree accounts for the much more sophisticated analysis required in the sub-Riemannian setting. However, the recently found approach of \cite{FrLi} and \cite{BFM}, based on the center of mass argument, allows the determination of the sharp constant (in fact in the Hardy-Littlewood-Sobolev inequality) in the geometric setting of groups of Iwasawa type when $p=2$.
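To verify the dilation invariance just mentioned, which forces the exponent $p^*=\frac{pQ}{Q-p}$, recall that the Haar measure scales as $dH(\delta_\lambda g)=\lambda^Q\, dH(g)$ and that the first layer vector fields satisfy $X_j(u\circ\delta_\lambda)=\lambda\, (X_ju)\circ\delta_\lambda$. Hence, for $u_\lambda$ as in \eqref{scaling},
\begin{equation*}
\int_{\boldsymbol{G}} |u_\lambda|^{p^*}\, dH\ =\ \lambda^{Q}\,\lambda^{-Q}\int_{\boldsymbol{G}} |u|^{p^*}\, dH\ =\ \int_{\boldsymbol{G}} |u|^{p^*}\, dH, \qquad \int_{\boldsymbol{G}} |Xu_\lambda|^{p}\, dH\ =\ \lambda^{pQ/p^*+p-Q}\int_{\boldsymbol{G}} |Xu|^{p}\, dH,
\end{equation*}
so both sides of \eqref{FS} are unchanged exactly when $pQ/p^*+p-Q=0$, i.e., $p^*=\frac{pQ}{Q-p}$.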
The analysis of \cite{FrLi} and \cite{BFM} exploits the Cayley transform and the conformal invariance of the associated Euler-Lagrange equation, which is the Yamabe equation on the corresponding Iwasawa group, \begin{equation}\label{Yamabeomegasymm} \triangle u\ =- u^{\frac{Q+2}{Q-2}}, \qquad u\in {\overset{o}{\mathcal{D}}}\,^{1,2}({\boldsymbol{G}}), \quad u\geq 0. \end{equation} Of course, in order to give a geometric meaning to the equation one needs to use the relevant geometries and their "canonical" connections, which we do in Section \ref{s:Iwasawa sub-Riem geom}. In the Euclidean and CR cases these are just the well known Levi-Civita and Tanaka-Webster connections. In the quaternionic and octonion cases the geometric picture emerged only after the work of Biquard \cite{Biq1,Biq2}. The goal of this section is to give some ideas surrounding the analysis of the Yamabe equation as a partial differential equation and some of the known results on the optimal constants, which largely belong to the area of analysis. The key results on the optimal constants are summarized in the following two theorems, in which $m$ is the dimension of the first layer, while $k$ is the dimension of the center of the Iwasawa algebra. \begin{thrm}[\cite{JL3},\cite{FrLi},\cite{IMV1,IMV2},\cite{ChLZh1}]\label{T:Iwasawa groups Minimizers} Let ${\boldsymbol{G}}$ be a group of Iwasawa type. For every $u\in {\overset{o}{\mathcal{D}}}\,^{1,2}( {\boldsymbol{G}})$ one has the Folland-Stein inequality \eqref{FS} with \begin{equation}\label{best} S_2= \frac{1}{\sqrt{m(m + 2(k-1))}}\ 4^{\frac {k}{m+2k}} \pi^{-\frac {m+k}{2(m+2k)}}\ \left(\frac{\Gamma(m + k)} {\Gamma\left(\frac {m+k}{2}\right)}\right)^{\frac {1}{m+2k}} . \end{equation} An extremal is given by the function \begin{equation}\label{e:FS extremal fns} F(g)= \gamma(m,k)\ \left[(1 + |x(g)|^2)^2+ 16 |y(g)|^2\right]^{-(Q-2)/4}, \end{equation} where \[ \gamma(m,k)= \left[4^k\ \pi^{-(m+k)/2(m+2k)}\ \frac{\Gamma(m + k)}{\Gamma((m + k)/2)} \right]^{(m+2(k-1))/2(m+2k)}.
\] Any other non-negative extremal is obtained from $F$ by \eqref{translation} and \eqref{scaling}. \end{thrm} We remark that \eqref{e:FS extremal fns} is a solution to the Yamabe equation on any group of Heisenberg type \cite{GV2}, which was found earlier (and seems to have been forgotten) in the case of Iwasawa groups in \cite[Proposition 2]{KapPutz1}. It also should be noted that \cite{JL3} and \cite{IMV1} actually determine all critical points of the variational problem associated to \eqref{FS}, rather than only the functions with lowest energy. In fact, \cite{JL3} solves completely the Yamabe equation \eqref{Yamabeomegasymm} on the Heisenberg group, while \cite{IMV1} achieves this on the seven dimensional quaternionic Heisenberg group (the higher dimensional case is settled in the preprint \cite{IMV15a}). We report on these proofs in Sections \ref{ss:Jerison and Lee} and \ref{ss:7D QC Heisenebrg Yamabe}, which involve ideas inspired by Theorem \ref{t:Obata Yamabe}. In the remaining cases of Iwasawa type groups the partial result in the next Theorem \ref{T:CS} supports the general expectation that \eqref{e:FS extremal fns} gives all solutions. \begin{thrm}[\cite{GV}]\label{T:CS} All partially symmetric solutions of the Yamabe equation on a group of Iwasawa type are given by \eqref{e:FS extremal fns} up to dilation and translation. \end{thrm} For the definition of partially symmetric solution we refer to Section \ref{s:Iwasawa partial symmetry}. \subsection{Groups of H-type and the Iwasawa groups} Let $\mathfrak n$ be a 2-step nilpotent Lie algebra equipped with a scalar product $<.,.>$ for which $\mathfrak n = V_1 \oplus V_2$ is an orthogonal direct sum, where $V_2$ is the center of $\mathfrak n$. Consider the map $J:V_2\to End(V_1)$ defined by \begin{equation}\label{J1} <J(\xi_2)\xi_1',\xi_1''>\ =\ <\xi_2,[\xi_1',\xi_1'']>,\ \text{ for}\ \xi_2\in V_2\ \text{ and }\ \xi_1', \xi_1''\in V_1 .
\end{equation} By definition $J(\xi_2)$ is skew-symmetric. Adding the condition that it is actually an almost complex structure on $V_1$ when $\xi_2$ is of unit length \cite{K1} motivates the next definitions. A 2-step nilpotent Lie algebra $\mathfrak n$ is said to be of \emph{Heisenberg type} \index{Heisenberg type!algebra} if for every $\xi_2 \in V_2$, with $|\xi_2|=1$, the map $J(\xi_2):V_1\to V_1$ is orthogonal. A connected and simply connected Lie group ${\boldsymbol{G}}$ is called of Heisenberg type (or H-type) \index{H-type group} if its Lie algebra $\mathfrak n$ is of Heisenberg type. We shall use exponential coordinates and regard $N=\exp\ \mathfrak n$, so that the product of two elements of $N$ is \begin{equation}\label{e:H-type group product} (\xi_1,\xi_2)\cdot (\xi'_1,\xi'_2)=(\xi_1+\xi'_1, \xi_2+\xi'_2+\frac 12[\xi_1,\xi'_1]), \end{equation} taking into account the Baker-Campbell-Hausdorff formula. Correspondingly we shall use $V_i$, $i=1,2$, to also denote the sub-bundles of left invariant vector fields which coincide with the given $V_i$ at the identity element. In \cite{K1} Kaplan found the explicit form of the fundamental solution of the sub-Laplacian on every group of H-type, where the sub-Laplacian is the operator \begin{equation}\label{e:sub-laplacian H-type} \triangle\ =\ \underset {j=1}{\overset {m}{\sum}}\,X_j^2, \end{equation} for vector fields $X_j$, $j=1, \dots,m$, which are an orthonormal basis of $V_1$. On a group $N$ of Heisenberg type there is a very important homogeneous norm (gauge) \index{homogeneous!norm} given by \index{gauge} \begin{equation}\label{Hgauge} N(g)=\bigl( \abs{\xi_1(g)}^4+16\abs{\xi_2(g)}^2\bigr)^{1/4}, \end{equation} which induces a left-invariant distance.
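With respect to the parabolic dilations $\delta_\lambda(\xi_1,\xi_2)=(\lambda\xi_1,\lambda^2\xi_2)$, $\lambda>0$, the gauge \eqref{Hgauge} is homogeneous of degree one,
\begin{equation*}
N(\delta_\lambda g)=\bigl(\lambda^4\abs{\xi_1(g)}^4+16\,\lambda^4\abs{\xi_2(g)}^2\bigr)^{1/4}=\lambda\, N(g),
\end{equation*}
while the Haar measure scales as $dH(\delta_\lambda g)=\lambda^{Q}\,dH(g)$, $Q=m+2k$, so that the gauge balls satisfy $|B_r|=r^Q\,|B_1|$. This explains why the homogeneous dimension $Q$, rather than the topological dimension, plays the role of the dimension in the analysis on such groups.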
Kaplan proved in \cite{K1} that in a group of Heisenberg type, in particular in every Iwasawa group, the fundamental solution $\Gamma$ of the sub-Laplacian $\triangle$, see \eqref{e:sub-laplacian H-type}, is given by the formula \begin{equation}\label{GammaH} \Gamma(g,h)\ =\ C_Q\ N(h^{-1} g)^{-(Q-2)}, \hskip.7in g,h\in N, g\neq h, \end{equation} where $C_Q$ is a suitable constant. \begin{rmrk}\label{r:gromov limit} It is known that the distance induced by the gauge \eqref{Hgauge} is the Gromov limit of a one parameter family of Riemannian metrics on the group $N$ \cite{Ko2}, see also \cite{BRed} and \cite{CDPT07}. \end{rmrk} Kaplan and Putz \cite{KapPutz2}, see also \cite[Proposition 1.1]{Ko2}, observed that the nilpotent part $N$ in the Iwasawa \index{Iwasawa!decomposition} decomposition ${\boldsymbol{G}}=NAK$ of every semisimple Lie group ${\boldsymbol{G}}$ of real rank one is of Heisenberg type. We shall refer to such a group as an \emph{Iwasawa group} \index{Iwasawa!group} and call the corresponding Lie algebra an \emph{Iwasawa algebra}. The Heisenberg type groups allowed for the generalization of many important concepts in harmonic analysis and geometry, see \cite{KapPutz2}, \cite{KapR}, \cite{Ko2}, \cite{DamRic2} and the references therein, in addition to the above cited papers. Another milestone was achieved in \cite{CDKR}, which allowed one to circumvent the classification of the rank one symmetric spaces and the heavy machinery of semisimple Lie group theory when studying the non-compact symmetric spaces of real rank one. Specifically, in \cite{CDKR} the authors considered the H-type algebras satisfying the so called $J^2$ condition defined in \cite{CDKR}, see also \cite{CDKR2}.
\begin{dfn}\label{d:J2 cond} We say that the H-type algebra $\mathfrak n$ satisfies the $J^2$ condition \index{$J^2$ condition} if for every $\xi_2, \xi_2'\in V_2$ which are orthogonal to each other, $<\xi_2, \xi_2'>=0$, there exists $\xi_2''\in V_2$ such that \begin{equation}\label{e:J2 cond} J(\xi_2)J(\xi_2')=J(\xi_2''). \end{equation} \end{dfn} A noteworthy result here is the following theorem of \cite{CDKR}, see also \cite{Ciatti}, which can be used to show that if $N$ is an H-type group, then the Riemannian space $S=NA$ is symmetric iff the Lie algebra $\mathfrak n$ of $N$ satisfies the $J^2$ condition, see \cite[Theorem 6.1]{CDKR}. \begin{thrm}\label{t:Iwasawa is J2} If $\mathfrak n$ is an H-type algebra satisfying the $J^2$-condition, then $\mathfrak n$ is an Iwasawa type algebra. \end{thrm} This fundamental result has many consequences; among them, it allows a unified proof of some classical results on symmetric spaces, in addition to some beautiful properties of extensions of the classical Cayley transform, inversion and Kelvin transform, which are of particular importance for our goals. From a geometric point of view, the above Iwasawa groups can be seen as the nilpotent part in the Iwasawa decomposition of the isometry group of the non-compact symmetric spaces $M$ of real rank one. Such a space can be expressed as a homogeneous space $G/K$, where $G$ is the identity component of the isometry group of $M$, i.e., one of the simple Lorentz groups $SO_o(n,1)$, $SU(n,1)$, $Sp(n,1)$ or $F_{4(-20)}$, and $K$ is a maximal compact subgroup of $G$, see \cite{Helgason}, namely, $K = SO(n)$, $SU(n)$, $Sp(n)Sp(1)$, or $Spin(9)$, respectively, see for example \cite[Theorem 8.12.2]{Wolf} or \cite{Helgason}. Thus $M=H^n_\mathbb{K}$ is one of the hyperbolic spaces over the real, complex, quaternion or Cayley (octonion) numbers, respectively.
As is well known, these spaces carry canonical Riemannian metrics with sectional curvature $k= -1$ for $\mathbb{K}=\mathbb{R}$ and $-1 < k < -1/4$ in the remaining cases. Here, $\mathbb{K}$ denotes one of the real division algebras: the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, or the octonions $\mathbb{O}$. Writing $G=NAK$ and letting $S=NA$, where $A$ is a one-dimensional Abelian subgroup, we have that $S$ is a closed subgroup of $G$ which is isometric to the hyperbolic space \index{hyperbolic space} $M$, thus giving the corresponding hyperbolic space \index{hyperbolic space!Lie group structure} a Lie group structure. The nilpotent part $N$ is isometrically isomorphic to $\mathbb{R}^n$ in the degenerate case when the Iwasawa group is Abelian, or to one of the Heisenberg groups $\boldsymbol {G\,(\mathbb{K})} \ =\ \mathbb{K}^n\times\text {Im}\, \mathbb{K}$ \index{Heisenberg group $\boldsymbol {G\,(\mathbb{K})}$} with the group law given by \begin{equation}\label{e:H-type Iwasawa groups} (q_o, \omega_o)\circ(q, \omega)\ =\ (q_o\ +\ q, \omega\ +\ \omega_o\ + \ 2\ \text {Im}\ q_o\, \bar q), \end{equation} where $q,\ q_o\in\mathbb{K}^n$ and $\omega, \omega_o\in \text {Im}\, \mathbb{K}$. In particular, in the non-Euclidean case the Lie algebra $\mathfrak n$ of $N$ has center of dimension $\dim V_2=1$, $3$, or $7$. Iwasawa groups are distinguished also by the properties of the sphere product $S_1(R_1)\times S_2(R_2)$, where, for $j=1,2$, $S_j(R_j)$ is the sphere of radius $R_j$ in $V_j$, the two layers of the 2-step nilpotent Lie algebra. In fact, for a group of Iwasawa type the Kostant double-transitivity theorem \index{Kostant double-transitivity theorem} shows that the action of $A(N)$ on each such product of spheres is transitive, where as before $A(N)$ stands for the orthogonal automorphisms of $N$, see \cite[Proposition 6.1]{CDKR2}. This fact points to the importance of bi-radial, or cylindrically symmetric, functions.
Notice that both the fundamental solution of the sub-Laplacian and the known solutions of the Yamabe equation have such symmetry, see \eqref{GammaH} and \eqref{e:FS extremal fns}. Motivated by the way the Iwasawa type groups appear as "boundaries" of the hyperbolic spaces, Damek \cite{Dam2} introduced a generalization of the hyperbolic spaces as follows. For a group $N$ of H-type consider a semidirect product with a one dimensional Abelian group, i.e., take the multiplicative group $A=\mathbb{R}^+$ acting on an H-type group by dilations given in exponential coordinates by the formula $\delta_{a}(\xi_1,\xi_2)=(a^{1/2}\xi_1, a \xi_2)$ and define $S=NA$ as the corresponding semidirect product. Thus, the Lie algebra of $S$ is $\mathfrak s = V_1\oplus V_2\oplus \mathfrak a$, $\mathfrak n=V_1\oplus V_2$, with the bracket extending the one on $\mathfrak n$ by adding the rules \begin{equation}\label{e:solvable extension bracket} [\zeta,\xi_1 ] = \frac 12\xi_1, \quad [\zeta,\xi_2]=\xi_2, \quad \xi_i\in V_i, \end{equation} where $\zeta$ is a unit vector in $\mathfrak a$, so that $S$ is the connected simply connected Lie group with Lie algebra $\mathfrak s$. In the coordinates $(\xi_1, \xi_2, a)=\exp (\xi_1+\xi_2) \exp (\log a\,\zeta)$, $a>0$, which parameterize $S=\exp\, \mathfrak s$, the product rule of $S$ is given by the formula \begin{equation}\label{e:product on S} (\xi_1,\xi_2,a)\cdot (\xi'_1,\xi'_2,a')=(\xi_1+a^{1/2}\xi'_1,\, \xi_2+a\xi'_2+\frac 12a^{1/2}[\xi_1,\xi'_1],\, aa'), \end{equation} for all $(\xi_1,\xi_2,a),\, (\xi'_1,\xi'_2,a')\in \mathfrak n\times \mathbb{R}^+$. Notice that $S$ is a solvable group. We equip the Lie algebra $\mathfrak s$ with the inner product \begin{equation}\label{e:metric on S} <(\xi_1, \xi_2, a), (\tilde\xi_1, \tilde\xi_2, \tilde a)>=<(\xi_1, \xi_2), (\tilde\xi_1, \tilde\xi_2)> + a\tilde a \end{equation} using the fixed inner product on $\mathfrak n$, and then define a corresponding translation invariant Riemannian metric on $S$.
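Note that the dilations $\delta_a$ used above are automorphisms of the H-type group: by the bilinearity of the bracket and \eqref{e:H-type group product},
\begin{equation*}
\delta_a(\xi_1,\xi_2)\cdot\delta_a(\xi'_1,\xi'_2)=\Bigl(a^{1/2}(\xi_1+\xi'_1),\ a\bigl(\xi_2+\xi'_2+\tfrac 12[\xi_1,\xi'_1]\bigr)\Bigr)=\delta_a\bigl((\xi_1,\xi_2)\cdot(\xi'_1,\xi'_2)\bigr),
\end{equation*}
since $[a^{1/2}\xi_1,a^{1/2}\xi'_1]=a\,[\xi_1,\xi'_1]$. This is exactly what is needed for the semidirect product structure of $S=NA$ and explains the product rule \eqref{e:product on S}.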
The main result of \cite{Dam2} is that the group of isometries $Isom(S)$ of $S$ is as small as it can be and equals $A(S)\ltimes S$ with $S$ acting by left translations, unless $N$ is one of the Heisenberg groups \eqref{e:H-type Iwasawa groups}, i.e., $S$ is one of the classical hyperbolic spaces. Here, $A(S)$ denotes the group of automorphisms of $S$ (or $\mathfrak s$) that preserve the left-invariant metric on $S$. The spaces constructed in this manner became known as Damek-Ricci \index{Damek-Ricci space} spaces, see \cite{BeTrVa} for more details. It was shown in \cite{DamRic} that the just described solvable extensions \index{solvable extension} of H-type groups which are not of Iwasawa type provide noncompact counterexamples to a conjecture of Lichnerowicz, which asserted that harmonic Riemannian spaces \index{harmonic Riemannian spaces} must be rank one symmetric spaces. \subsection{The Cayley transform}\label{ss:cayley tranform H-type} In this section we focus on the Cayley transform, of which we shall make extensive use later. Here, we give the well known abstract definition valid in the setting of groups of H-type. Other explicit formulas will be given in the CR and QC cases in Sections \ref{ss:CR geometry} and \ref{ss:qc sphere}. Starting from an H-type group, its solvable extension $S$ defined above has the following realizations, \cite{DamRic2}, \cite{CDKR} and \cite{CDKR2}. First, consider the "Siegel domain" or upper half-plane model of the hyperbolic space \begin{equation}\label{e:siegel model} D=\{ p=(\xi_1,\xi_2,a)\in \mathfrak s=V_1\oplus V_2\oplus \mathfrak a: \ a>\frac 14|\xi_1|^2\}. \end{equation} Consider the map $\Theta:S\rightarrow S$, \begin{equation}\label{e:Theta} \Theta(\xi_1,\xi_2, a) = (\xi_1,\xi_2, a+\frac 14|\xi_1|^2), \end{equation} which is an injective map of $S$ into itself.
Here we use $a$ to denote the element $a\zeta\in A$, $\zeta$ defined after \eqref{e:solvable extension bracket}, and we regard $D$ as a subset of $S$ using the exponential coordinates. Thus, the group $S$ acts simply transitively on $D$ by conjugating left multiplication in the group $S$ by $\Theta$, $s\cdot p=\Theta\left(s\cdot\Theta^{-1}(p)\right)$ for $s\in S$ and $p\in D$, while $N$ acts simply transitively on the level sets of $h=a-\frac 14|\xi_1|^2$. In particular, we can define an invariant metric on $D$ by transporting the left-invariant metric \eqref{e:metric on S} of $S$ to $D$ via $\Theta$, thus making $\Theta$ an isometry, cf. \cite[(3.3)]{CDKR2}. Second, there is the "ball" model of $S$, \begin{equation}\label{e:ball model} B = \{ (\xi_1,\xi_2, a) \in \mathfrak s=V_1\oplus V_2\oplus \mathfrak a: \ |\xi_1|^2+|\xi_2|^2+a^2< 1\}, \end{equation} equipped with the metric obtained from $D$ via the inverse of the so called \emph{Cayley transform} $\mathcal{C}:B\rightarrow D$ \index{Cayley transform} defined by $\mathcal{C}(\xi_1,\xi_2, a)=(\xi_1{'},\xi_2{'}, a{'} )$, where \begin{equation}\label{e:cayley for H-type} \begin{aligned} & \xi_1'= \frac {2}{(1 - a)^2 + |\xi_2|^2}\, \left ( (1 - a)\xi_1 + J(\xi_2)\xi_1\right),\\ & \xi_2'= \frac {2}{(1 - a)^2 + |\xi_2|^2}\, \xi_2, \qquad a'=\frac {1-a^2 - |\xi_2|^2 }{(1 - a)^2 + |\xi_2|^2}. \end{aligned} \end{equation} The inverse map $\mathcal{C}^{-1}:D\rightarrow B$ is given by $\mathcal{C}^{-1}(\xi_1{'},\xi_2{'}, a{'} )= (\xi_1,\xi_2, a)$, where \begin{equation}\label{e:inverse cayley for H-type} \begin{aligned} & \xi_1= \frac {2}{(1 +a{'} )^2 + |\xi_2{'}|^2}\, \left ( (1 +a{'})\xi_1{'} - J(\xi_2{'})\xi_1{'} \right),\\ & \xi_2 = \frac {2}{(1 +a{'} )^2 + |\xi_2{'}|^2}\, \xi_2{'}, \qquad a=\frac {-1+a{'} ^2 - |\xi_2{'} |^2}{(1 +a{'} )^2 + |\xi_2{'}|^2}. \end{aligned} \end{equation} For other versions of the Cayley transform see \cite[Chapter X]{FarKor}.
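As a consistency check of \eqref{e:cayley for H-type}, one may verify that $\mathcal{C}$ sends points of the unit sphere $|\xi_1|^2+|\xi_2|^2+a^2=1$ (other than the pole) to the boundary of the Siegel domain \eqref{e:siegel model}. Indeed, by \eqref{J1} we have $<J(\xi_2)\xi_1,\xi_1>=<\xi_2,[\xi_1,\xi_1]>=0$, while $|J(\xi_2)\xi_1|=|\xi_2|\,|\xi_1|$ by the H-type condition, so with $d=(1-a)^2+|\xi_2|^2$ one computes
\begin{equation*}
|\xi_1'|^2=\frac {4}{d^2}\left((1-a)^2|\xi_1|^2+|\xi_2|^2|\xi_1|^2\right)=\frac{4|\xi_1|^2}{d}, \qquad a'=\frac{1-a^2-|\xi_2|^2}{d}=\frac{|\xi_1|^2}{d}=\frac 14\,|\xi_1'|^2,
\end{equation*}
where the equality $1-a^2-|\xi_2|^2=|\xi_1|^2$ holds precisely on the unit sphere.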
The Jacobian of $\mathcal{C}$ and its determinant were computed in \cite{DamRic2}. The latter is given by the formula $\det\mathcal{ C}' (\xi_1,\xi_2, a) = 2^{m+k+1}\left ((1-a)^2+|\xi_2|^2 \right )^{-(m+2k+2)/2}$, where, as before, $m=\dim V_1$, $k=\dim V_2$. It is very important, and we shall make use of the fact, that the Cayley transform can be extended by continuity to a bijection (denoted by the same letter!) \begin{equation}\label{e:Cayley transform to Siegel bdry} \mathcal{C}:\partial B \setminus \{(0,0,1)\}\rightarrow \partial D, \end{equation} where $(0,0,1)$ (referred to as "$\zeta$" for short) is the point on the sphere where $\xi_1=\xi_2=0$ and the third component is $\zeta$, in agreement with our notation set after equation \eqref{e:Theta}. The boundaries of the Siegel domain and ball models are, respectively, \begin{equation}\label{e:bdry siegel model} \Sigma\equiv\partial D=\{ p=(\xi_1',\xi_2',a')\in \mathfrak s=V_1\oplus V_2\oplus \mathfrak a: \ a'=\frac 14|\xi_1'|^2\} \end{equation} and \begin{equation}\label{e:bdry ball model} \partial B = \{ (\xi_1,\xi_2, a) \in \mathfrak s=V_1\oplus V_2\oplus \mathfrak a: \ |\xi_1|^2+|\xi_2|^2+a^2= 1\}. \end{equation} The group of Heisenberg type $N$ can be identified with $\Sigma$ via the map \begin{equation}\label{e:identify bdry of siegel and H-group} (\xi_1', \xi_2')\mapsto (\xi_1', \xi_2', \frac 14|\xi_1'|^2). \end{equation} With this identification we obtain the form of the Cayley transform (stereographic projection) identifying the sphere minus the point "$\zeta$" and the H-type group, $\mathcal{C}:\partial B \setminus \{(0,0,1)\}\rightarrow N$\index{Cayley transform} defined by $\mathcal{C}(\xi_1,\xi_2, a)=(\xi_1{'},\xi_2{'})$, where \begin{equation}\label{e:Cayley transform to H-type group} \begin{aligned} & \xi_1'= \frac {2}{(1 - a)^2 + |\xi_2|^2}\, \left ( (1 - a)\xi_1 + J(\xi_2)\xi_1\right),\\ & \xi_2'= \frac {2}{(1 - a)^2 + |\xi_2|^2}\, \xi_2.
\end{aligned} \end{equation} Later, we shall make use of this "boundary" Cayley transform in the case of the Heisenberg and quaternionic Heisenberg groups, where we shall give some other explicit formulas. In particular, we shall use that the Cayley transform is a pseudoconformal map in the CR case and a quaternionic contact conformal transformation in the QC case. The Cayley transform is also a 1-quasiconformal map \cite{Banner}, see also \cite{ACD}. The definition of the "horizontal" space in the tangent bundle of the sphere and the distance function on the sphere require a few more details, for which we refer to \cite{CDKR2} and \cite{Banner}. Multicontact maps and their rigidity in Carnot groups have been studied in \cite{Pansu87}, \cite{Reimann01}, \cite{Ko05}, \cite{CdMKR02}, \cite{CdMKR05}, \cite{CC06}, \cite{dMO10}, \cite{Ot05}, \cite{Ot08}, \cite{OtWa11}. \subsection{Regularity of solutions to the Yamabe equation}\label{s:regularity} In order for the geometric analysis to proceed we need the next regularity result for the Euler-Lagrange equation associated to the problem of the optimal constant in \eqref{FS}. \begin{thrm}\label{t:harnack and holder regul} Let $\Omega$ be an open set in a Carnot group ${\boldsymbol{G}}$. Suppose $u\in {\overset{o}{\mathcal{D}}}\,^{1,p}(\Om)$ is a weak solution to the equation \begin{equation}\label{e:e7.10} \sum_{i=1}^m X_i(|Xu|^{p-2}X_iu)\ =\ -\ V\ u^{p-1} \quad \quad \text{in} \quad \Omega. \end{equation} \begin{enumerate}[a)] \item If $u\geq 0$ and $V\in L^t(\Omega)$ for some $t\ >\ \frac{Q}{p}$, then $u$ satisfies the Harnack inequality: for any Carnot-Carath\'eodory (or gauge) ball $B_{R_0}(g_0)\subset\Omega$ there exists a constant $C_0>0$ such that \begin{equation}\label{e:harnack} \esssup {B_R} \ u \leq C_0 \,\essinf {B_R} \ u, \end{equation} for any Carnot-Carath\'eodory (or gauge) ball $B_{R}(g)$ such that $B_{4R}(g)\subset B_{R_0}(g_0)$.
\item If $u\in {\overset{o}{\mathcal{D}}}\,^{1,p}(\Om)$ is a weak solution to \eqref{e:e7.10} and $V\in L^t(\Omega)\cap L^{Q/p}(\Omega)$, then $u\in \Gamma_\alpha (\Omega) $ for some $0<\alpha<1$. \item If $u\in{\overset{o}{\mathcal{D}}}\,^{1,2}(\Om)$ is a non-negative solution of the Yamabe equation on the domain $\Omega$, \begin{equation}\label{e:Yamabe in sec regul} \triangle u \ = \ - u^{2^*-1}, \end{equation} then either $u>0$ and $u\in \mathcal{C}^\infty\, (\Omega)$ or $u\equiv 0$. \end{enumerate} \end{thrm} The H\"older regularity of weak solutions of equation \eqref{e:e7.10} follows from a suitable adaptation of the classical De Giorgi-Nash-Moser result. The higher regularity when $p=2$ follows by an iteration argument based on sub-elliptic regularity. A detailed proof of Theorem \ref{t:harnack and holder regul} can be found in \cite[Theorem 1.6.9]{IV2}. It is simply a combination of the fundamental Harnack inequality of \cite[Theorem 3.1]{CDG1} (valid for H\"ormander type operators), the boundedness of the weak solution \cite[Theorem 4.1]{Va1}, the regularity result of \cite[Theorem 3.35]{CDG1}, and the sub-elliptic regularity result concerning H\"ormander type operators acting on non-isotropic Sobolev or Lipschitz spaces of \cite{FS,F2}, see also \cite{F'77} for a general overview and further details. Note that these results, together with the idea of \cite{FS} to "osculate" with the Heisenberg group, carry over to give $\mathcal{C}^\infty$ regularity in the CR and QC settings, see \cite{JL1,JL2} and \cite{Wei} for details. \subsection{Solution of the Yamabe type equation with partial symmetry}\label{s:Iwasawa partial symmetry} By Theorem \ref{t:harnack and holder regul} any weak solution of the Yamabe equation is actually a smooth bounded function which is everywhere strictly positive, $u>0$ and $u\in \mathcal{C}^\infty\, (\Omega)$. The symmetries we are concerned with are the following.
\begin{dfn}\label{D:symm} Let ${\boldsymbol{G}}$ be a Carnot group of step two with Lie algebra $\mathfrak g = V_1 \oplus V_2$. We say that a function $U:{\boldsymbol{G}}\to \mathbb R$ has \emph{partial symmetry} \index{partial symmetry} (with respect to a point $g_o\in {\boldsymbol{G}}$) if there exists a function $u:[0,\infty)\times V_2 \to \mathbb R$ such that for every $g = \exp(x(g) + y(g)) \in {\boldsymbol{G}}$ one has $\tau_{g_o}\ U (g)= u(|x(g)|, y(g)).$ A function $U$ is said to have \emph{cylindrical symmetry} \index{cylindrical symmetry} (with respect to $g_o\in {\boldsymbol{G}}$) if there exists $\phi:[0,\infty)\times [0,\infty) \to \mathbb R$ for which $\tau_{g_o}\ U(g)= \phi(|x(g)|, |y(g)|),$ for every $g\in {\boldsymbol{G}}$. \end{dfn} The proof of Theorem \ref{T:CS} due to \cite{GV} consists of two steps: first one shows that any entire solution with partial symmetry is cylindrically symmetric, and then one determines all entire solutions with cylindrical symmetry. The proof of the first result relies on an adaptation of the method of moving hyper-planes due to Alexandrov \cite{Al} and Serrin \cite{S4}. The moving plane technique was developed further in the two celebrated papers \cite{GNN}, \cite{GNN2} by Gidas, Ni and Nirenberg to obtain symmetry for semi-linear equations with critical growth in $\mathbb R^n$ or in a ball. The proof of \cite{GV} also incorporates some important simplifications of the proof in \cite{GNN2} due to Chen and Li \cite{CL}. We mention that a crucial role is played by the knowledge of the explicit solutions \eqref{e:FS extremal fns} and also by the inversion and the related Kelvin transform introduced by Kor\'anyi for the Heisenberg group \cite{Ko1}, and subsequently generalized to groups of Heisenberg type in \cite{CK}, \cite{CDKR}, see also \cite{GV2} for properties of the Kelvin transform.
The proof of the second main result has been strongly influenced by the approach of Jerison and Lee for the Heisenberg group, see Theorem 7.8 in \cite{JL2}. After a change in the dependent variable, which relates the Yamabe equation to a new non-linear pde in a quadrant of the Poincar\'e half-plane, one is led to prove that the only positive solutions of the latter are quadratic polynomials of a certain type. Besides ideas from Jerison and Lee's paper, the proof of Theorem \ref{T:CS} in \cite{GV} has some features of the method of the so-called $P$-\emph{functions} introduced by Weinberger in \cite{W}. Given a solution $u$ of a certain partial differential equation, this method is based on the construction of a suitable non-linear function of $u$ and $\mathrm{grad}\, u$, a $P$-function, which is itself a solution (or sub-solution) of a related partial differential equation, and therefore satisfies a maximum principle. In fact, starting with a cylindrical solution $U$ of the Yamabe equation \eqref{Yamabeomegasymm}, the function $\phi = v^{-4/(Q-2)}$, where $v=\left(\frac{Q-2}{4}\right)^{-(Q-2)/2}U$, satisfies \begin{equation}\label{YPhi} \mathcal L \phi= \left(\frac{Q-2}{4} + 1\right)\ \frac{|X\phi|^2}{\phi}+ \frac{Q-2}{4}. \end{equation} By the cylindrical symmetry assumption $\phi$ is a function of the variables \begin{equation}\label{coord} y= \frac{|\xi_1|^2}{4}, \quad\quad x= |\xi_2|, \end{equation} which satisfies the equation \begin{equation}\label{Yfinal} \Delta \phi= \frac{n + 2}{2}\ \frac{|\nabla \phi|^2}{\phi}- \frac{a}{x}\ \phi_x- \frac{b}{y}\ \phi_y+ \frac{n}{2 y} , \end{equation} in $\Omega = \{(x,y)\in \mathbb R^2 \mid x>0, y>0 \}$ with $a= k - 1\ \geq\ 0$, $b= \frac{m}{2}\ \geq\ 1$ and $ n= a + b\ \geq 1$. The case $k=1$ corresponds to the Heisenberg group $\boldsymbol {G\,(\mathbb{C})}$, and it was considered earlier in \cite{JL2}.
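As a consistency check on the constants, recall that the homogeneous dimension of a step-two group with $\dim V_1=m$ and $\dim V_2=k$ is $Q=m+2k$, so that
\begin{equation*}
n\ =\ a+ b\ =\ (k-1)+ \frac{m}{2}\ =\ \frac{Q-2}{2}, \qquad \frac{n+2}{2}\ =\ \frac{Q+2}{4}\ =\ \frac{Q-2}{4}+1,
\end{equation*}
so the coefficient of $|\nabla\phi|^2/\phi$ in \eqref{Yfinal} is the same as the coefficient of $|X\phi|^2/\phi$ in \eqref{YPhi}.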
A long calculation shows that with $h = x^a y^b \phi^{-(n+1)}$, $$F=2\langle\nabla \phi, \nabla \phi_x\rangle- 2\frac{n}{2b}\ \phi_{xy}-\phi_x\ \frac{|\nabla \phi|^2}{\phi}\quad \text{ and }\quad G=- 2\langle\nabla \phi, \nabla \phi_y\rangle+ 2 \frac{n}{2b}\ \phi_{yy}+(\phi_y - \delta)\ \frac{|\nabla \phi|^2}{\phi},$$ the following identity holds \begin{multline*} (h F)_x- (h G)_y =\ h\ \bigg\{\bigg[ 2\ ||\nabla^2 \phi||^2- (\Delta \phi)^2 \bigg] +\ \frac{n+2}{n}\ \left(\Delta \phi- \frac{|\nabla \phi|^2}{\phi} \right)^2 + \frac{2ab}{n}\ \bigg(\frac{\phi_x}{x} - \Big(\frac{\phi_y}{y}- \frac{n}{2by}\Big) \bigg)^2\bigg\}. \end{multline*} Integrating over the first quadrant, and noting that the integrals are finite as a consequence of the properties of the Kelvin transform on a group of Iwasawa type, we obtain \begin{equation}\label{positivity} 2\ ||\nabla^2 \phi||^2= (\Delta \phi)^2, \quad \Delta \phi- \frac{|\nabla \phi|^2}{\phi}= 0, \quad \frac{\phi_x}{x}= \frac{\phi_y}{y}- \frac{n}{2by} . \end{equation} We remark that the Kelvin transform allows us to find the asymptotic behavior of every solution of the Yamabe equation, including all its derivatives. The behavior at infinity of a finite energy solution can be found in more general settings with the method of \cite{LU}. From the first two equations in \eqref{positivity} we conclude (see, e.g., \cite{W} or also \cite{JL2}) that $\phi$ must be of the form \begin{equation}\label{sym1} \phi(x,y)= A^2\ (x^2+ y^2)+ 2 A \alpha x+ 2 B \beta y+ \alpha^2+ \beta^2 \end{equation} for some numbers $A, B, \alpha$ and $\beta$, with $A^2 = B^2$. On the other hand, the third equation in \eqref{positivity} implies that $\alpha= 0$ and $ \beta= \frac{n}{4 b B}$.
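One way to see where the constraint $A^2=B^2$ comes from (a direct check, in the notation of \eqref{sym1}): any such quadratic polynomial has $\nabla^2\phi=2A^2\,\mathrm{Id}$ and $\Delta\phi=4A^2$, so the first equation in \eqref{positivity} holds automatically, while
\begin{equation*}
|\nabla \phi|^2- \phi\,\Delta\phi\ =\ 4A^2(Ax+\alpha)^2+ 4(A^2y+ B\beta)^2- 4A^2\,\phi\ =\ 4\,(B^2- A^2)\,\beta^2,
\end{equation*}
so the second equation in \eqref{positivity} forces $A^2=B^2$ whenever $\beta\neq 0$.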
Recalling that $x = |\xi_2|, y = |\xi_1|^2/4$ one easily concludes from the above that \begin{equation}\label{JL} \phi(|\xi_1|,|\xi_2|)= \frac{A^2}{16}\ \left[\Big(\frac{a + b}{b A^2} + |\xi_1|^2\Big)^2+ 16 |\xi_2|^2 \right] \end{equation} for some $A \not = 0$, hence \begin{equation} \phi(|\xi_1|,|\xi_2|)= \frac{Q-2}{16 m \epsilon^2}\ [(\epsilon^2 + |\xi_1|^2)^2+ 16 |\xi_2|^2] \end{equation} where $\epsilon^2 = \frac{Q-2}{m A^2}$. Finally, using the relation between $\phi$ and $U$, we obtain \[ U(g)= C_\epsilon\ ((\epsilon^2 + |x(g)|^2)^2+ 16 |y(g)|^2)^{-(Q-2)/4}, \] with $C_\epsilon = [m(Q-2)\epsilon^2]^{(Q-2)/4}$. All other cylindrically symmetric solutions are obtained from this one by left-translation, which completes the proof of Theorem \ref{T:CS}. We remark that \eqref{Yfinal} was used in \cite{V11}, see also \cite{MFS}, to establish the sharp constant and the extremals in an $L^2$ Hardy-Sobolev inequality involving the distance to a lower dimensional subspace. \subsection{The best constant in the $L^2$ Folland-Stein inequality on the quaternionic Heisenberg groups} In this section we explain the ideas behind the proof of Theorem \ref{T:Iwasawa groups Minimizers}. The proof relies on the realization, made in \cite{BFM} and used more recently in \cite{FrLi}, that the ``center of mass'' idea of Szeg\"o \cite{Sz} and Hersch \cite{He} can be used to find the sharp form of (logarithmic) Hardy-Littlewood-Sobolev type inequalities on the Heisenberg group. This method does not give all solutions of the Yamabe equation on the Iwasawa group, but is enough to determine the best constant. The Cayley transform and the conformal nature of the problem are crucial for its solution. Another key ingredient is Theorem~\ref{t:first eigenspace Iwasawa}, which will be used to show that the constants are the only minimizers on the sphere among all positive local minimizers which, viewed as densities, place the center of mass of the sphere at the origin.
\emph{We shall focus here on the qc case \cite{IMV2} but the argument is valid in any of the groups of Iwasawa type using the just mentioned facts, see also \cite{ChLZh1,ChLZh2}.} Let ${\tilde\eta}$, cf. \eqref{e:stand cont form on S}, be the standard qc structure on the unit sphere $S^{4n+3}$. Szeg\"o and Hersch's center of mass method suggests the following lemma. \begin{lemma}\label{l:hersch} For every $v\in L^1(S^{4n+3})$ with $\int_{S^{4n+3}} v \ Vol_{\tilde\eta}=1$ there is a quaternionic contact conformal transformation $\psi:(S^{4n+3}, \tilde\eta)\rightarrow (S^{4n+3}, \tilde\eta)$ such that $$\int_{S^{4n+3}} \psi\, v \ Vol_{\tilde\eta} =0.$$ \end{lemma} \begin{proof} Fix a point $P\in S^{4n+3}$ on the quaternionic sphere, denote by $N$ its antipodal point, and consider the local coordinate system near $P$ defined by the Cayley transform $\mathcal{C}_N$ from $N$, see \eqref{e:QC Cayley}. We know that $\mathcal{C}_N$ is a quaternionic contact conformal transformation between $ S^{4n+3}\setminus \{N\}$ and the quaternionic Heisenberg group, cf. \eqref{e:Cayley transf ctct form}. Notice that in this coordinate system $P$ is mapped to the identity of the group. For every $r$, $0<r<1$, let $\psi_{r,P}$ be the qc conformal transformation of the sphere, which in the fixed coordinate chart is given on the group by a dilation with center the identity by a factor $\delta_{r}$. We select a coordinate system in $\mathbb{R}^{4n+4}=\mathbb{H}^n\times\mathbb{H}$ so that $P=(1,0)$ and $N=(-1,0)$.
Applying the Cayley transform \eqref{e:QC Cayley} to $(q^*,p^*)=\psi_{r,P}(q,p)$ we have \begin{equation*} \begin{aligned} q^* & =2r\left ( 1 + r^2(1+p)^{-1} (1-p) \right )^{-1} \left ( 1+p \right ) q\\ p^* & =\left ( 1 +r^2(1+p)^{-1}(1-p) \right )^{-1} \left ( 1-r^2(1+p)^{-1}(1-p) \right ). \end{aligned} \end{equation*} Consider the map $\Psi: B\rightarrow \bar B$, where $B$ ($\bar B$) is the open (closed) unit ball in $\mathbb{R}^{4n+4}$, defined by the formula $$\Psi(rP)=\int_{S^{4n+3}} \psi_{1-r,P}\, v\ Vol_{\tilde\eta}.$$ Notice that $\Psi$ can be continuously extended to $\bar B$ since for any point $P$ on the sphere, where $r=1$, we have $\psi_{1-r,P}(Q)\rightarrow P$ when $r\rightarrow 1$. In particular, $\Psi=id$ on $ S^{4n+3}$. Since the sphere is not a homotopy retract of the closed ball it follows that there are $r$ and $P\in S^{4n+3}$ such that $\Psi(rP)=0$, i.e., $\int_{S^{4n+3}} \psi_{1-r,P}\,v\ Vol_{\tilde\eta}=0$. Thus, $ \psi=\psi_{1-r,P}$ has the required property. \end{proof} In the next step one proves that there is a minimizer of the Folland-Stein inequality which satisfies the zero center of mass condition. A number of well known invariance properties of the Yamabe functional are exploited. For the rest of the Section, given a qc form $\eta$ and a function $u$ we will denote by $\nabla^{\eta}u$ the horizontal gradient of $u$. We shall call a (positive) function $u$ on the sphere a \textit{well centered} function if, viewing $u^{2^*}$ as a density, it places the center of mass of the sphere at the origin, i.e., \begin{equation}\label{e:zero mass} \int_{S^{4n+3}} P\, u^{2^*}(P) \, Vol_{\tilde\eta}=0, \qquad P\in \mathbb{R}^{4n+4}=\mathbb{H}^n\times\mathbb{H}. \end{equation} For the next Lemma recall the functionals $\mathcal{E}_{{\tilde\eta}}$ and $\mathcal{N}_{{\tilde\eta}}$ introduced in \eqref{e:E and N functional}.
\begin{lemma}\label{l:zero mass is enough} Let $v$ be a smooth positive function on the sphere with $\mathcal{N}_{{\tilde\eta}}(v) =1$. There is a well centered smooth positive function $u$ such that $\mathcal{E}_{{\tilde\eta}}(u)=\mathcal{E}_{{\tilde\eta}}(v)$ and $\mathcal{N}_{{\tilde\eta}}(u) =1$. In particular, the Yamabe constant \eqref{e:Yamabe constant Iwasawa} is achieved for a positive function $u$ which is well centered, i.e., for a function $u$ satisfying \eqref{e:zero mass}. \end{lemma} \begin{proof} Given a positive function $v$ on the sphere with $\int_{S^{4n+3}} v^{2^*}\, Vol_{\tilde\eta}=1$, consider the function \begin{equation}\label{e:zero mass transform} u=\phi^{-1}(v\circ \psi^{-1}), \end{equation} where $\psi$ is the qc conformal map of Lemma \ref{l:hersch}, $\eta\equiv(\psi^{-1})^*\tilde\eta$, and $\phi$ is the corresponding conformal factor of $\psi$. The claim of the Lemma follows directly from the conformal invariance \eqref{e:conf Yamabe volume}. \end{proof} The next step shows that a well centered minimizer has to be constant. \begin{lemma}\label{l:const mnimizer} If $u$ is a well centered local minimum of the problem \eqref{e:Yamabe constant Iwasawa} for $M=(S^{4n+3},{\tilde\eta})$, then $u\equiv const$. \end{lemma} \begin{proof} Let $\zeta$ be a smooth function on the sphere $S^{4n+3}$. Recalling \eqref{e:E and N functional}, with the help of the divergence formula \eqref{div} we obtain the formula \begin{equation}\label{e:Upsilon for zeta u} \mathcal{E} (\zeta u) = \int_{S^{4n+3}}\zeta^2 \Bigl(4\frac {Q+2}{Q-2}\ \lvert \nabla^{{\tilde\eta}} u \rvert^2 + \tilde{S}\, u^2\Bigr)\, {Vol}_{\tilde\eta} - 4\frac {Q+2}{Q-2}\int_{S^{4n+3}} u^2 \zeta\, {\tilde\triangle} \zeta \, {Vol}_{\tilde\eta}. \end{equation} At this point we let $\zeta$ be an eigenfunction corresponding to the first eigenvalue of the sub-Laplacian $\tilde\triangle$ associated to ${\tilde\eta}$, $\tilde\triangle \zeta =-\lambda_1 \zeta$.
Remarkably, the first eigenspace of the standard sub-Laplacian is spanned by the restrictions to the sphere of the linear (coordinate) functions in $\mathbb{R}^{4n+4}=\mathbb{H}^n\times\mathbb{H}$, see Theorem \ref{t:first eigenspace Iwasawa}. Computing the second variation $\delta^2 \Upsilon(u)v = \frac {d^2}{dt^2} \Upsilon(u+tv)_{|_{t=0}}$ of $\Upsilon(u)$ we see that the local minimum condition $\delta^2 \Upsilon (u)v\geq 0$ implies \begin{equation*} \mathcal{E} ( v) -(2^*-1)\mathcal{E} ( u)\int_{S^{4n+3}} u^{2^*-2}v^2\ Vol_{\tilde\eta} \ \geq 0 \end{equation*} for any function $v$ such that $\int_{S^{4n+3}} u^{2^*-1}v\ Vol_{\tilde\eta}=0$. Therefore, for $\zeta$ being any of the coordinate functions in $\mathbb{H}^n\times\mathbb{H}$ we have (taking $v=\zeta u$ and recalling that $u$ is well centered) $$ \mathcal{E} (\zeta u) -(2^*-1)\mathcal{E} ( u)\int_{S^{4n+3}} u^{2^*}\zeta^2\ Vol_{\tilde\eta} \ \geq 0, $$ which after a summation over all coordinate functions and a use of \eqref{e:Upsilon for zeta u} gives \begin{equation*} \mathcal{E}(u) - (2^*-1)\mathcal{E} ( u) +4\lambda_1(2^*-1) \int_{S^{4n+3}} u^2 \ Vol_{\tilde\eta}\geq 0, \end{equation*} which implies, recall $2^*-1=( {Q+2})/({Q-2})$, \begin{equation*} 0\leq 4(2^*-1)\left ( 2^*-2\right)\int_{S^{4n+3}} | \nabla^{\tilde\eta} u|^2\ Vol_{\tilde\eta} \leq \ \left (4\lambda_1(2^*-1) -\left (2^*-2\right )\tilde S \right ) \int_{S^{4n+3}} u^{2}\ Vol_{\tilde\eta}. \end{equation*} By Theorem \ref{t:first eigenspace Iwasawa} we actually have the equality $\lambda_1= {\tilde S}/{(Q+2)}$, hence $| \nabla^{\tilde\eta} u|=0$, which completes the proof. \end{proof} After these preliminaries we turn to the proof of Theorem \ref{T:Iwasawa groups Minimizers}.
\begin{proof}[Proof of Theorem \ref{T:Iwasawa groups Minimizers}] Let $F$ be a minimizer (local minimum) of the Yamabe functional $\mathcal{E}$ on $\boldsymbol {G\,(\mathbb{H})}$ and $f$ the corresponding function on the sphere defined with the help of the Cayley transform by \begin{equation}\label{e:g def} f=\mathcal{C}^*(F\Phi^{-1}), \end{equation} where $\Phi$ is a solution of the Yamabe equation on $\boldsymbol {G\,(\mathbb{H})}$ defined in \eqref{e:h and Phi}. By the conformality of the qc structures on the group and the sphere we have by \eqref{e:vol conf change} $Vol_{\Theta}=\Phi^{2^*} Vol_{\tilde\Theta}$, hence $F^{2^*} Vol_{\tilde\Theta} = f^{2^*} \phi^{-2^*} Vol_{\tilde\eta}$, where $\phi=\mathcal{C}^*(\Phi)$. This, together with the Yamabe equation, implies that the Yamabe integral is preserved \begin{equation}\label{e:invariance of dirichlet} \int_{\boldsymbol {G\,(\mathbb{H})}}\ a|\nabla^{\tilde\Theta} F|^2\ Vol_{\tilde\Theta}\ =\ \int_{S^{4n+3}}\ \left ( a|\nabla^{\tilde\eta} f|^2 + {\tilde S} f^2 \right )\, {Vol}_{\tilde\eta} \end{equation} where $a=4(Q+2)/(Q-2)$. By Lemma \ref{l:zero mass is enough} and \eqref{e:zero mass transform} the function $f_0=\phi^{-1}(f\circ \psi^{-1})$ will be a well centered minimizer (local minimum) of the Yamabe functional $\Upsilon$ on $S^{4n+3}$. The latter claim uses also the fact that the map $v\mapsto u$ of equation \eqref{e:zero mass transform} is one-to-one and onto on the space of smooth positive functions on the sphere. Now, from Lemma \ref{l:const mnimizer} we conclude that $f_0=const$. Looking back at the corresponding functions on the group we see that \begin{equation*} F_0 =\gamma\,{\left [(1+|q'|^2)^2+ |\omega'|^2 \right ]}^{-(Q-2)/4} \end{equation*} for some constant $\gamma>0$. Furthermore, the proof of Lemma \ref{l:hersch} shows that $F_0$ is obtained from $F$ by a translation \eqref{translation} and dilation \eqref{scaling}.
\begin{rmrk} We remark that the above argument shows that any local minimum of the Yamabe functional $\Upsilon$ on the sphere (or the Iwasawa group) has to be a global one. \end{rmrk} The Yamabe constant of the sphere is calculated immediately by taking a constant function in the Yamabe functional and using \eqref{e:S of standard qc sphere}. The remaining part of the proof (the value of the best constant $S_2$) is quite straightforward. Since it involves mainly calculations depending on the chosen normalization of the contact form we refer to \cite[Section 6.7]{IV2} for the details. This completes the proof of Theorem \ref{T:Iwasawa groups Minimizers}. \end{proof} \begin{rmrk}\label{e:constant for Iwasawa basis} One should keep in mind that the standard basis \eqref{qHh} is not an orthonormal basis for the metric which turns the group $\boldsymbol {G\,(\mathbb{H})}$ into a group of H-type, cf. also \eqref{e:H-type basis for Hn} and the paragraph above it. The two constants differ by a multiple of $4^{-k}$ in the general case of a group of Iwasawa type with center of dimension $k$. For more details on the relation between the Haar measure and the volume form associated to the contact form, as well as the exact relation between the best constants computed with respect to different bases, see \cite[p. 188--189]{IV2}. \end{rmrk} \section{Sub-Riemannian geometry as conformal infinities} \subsection{Riemannian case} Let $(N,h)$ be a Riemannian manifold with boundary $M=\partial N$ and a defining function $r >0$ on the interior of $N$ which vanishes to first order on $M$. Suppose that $r^2\cdot h$ extends continuously to $M$, thus defining a ``conformal structure'' on the boundary $M$. Fefferman \& Graham \cite{FeGr85} reversed the construction and used ``canonical asymptotically hyperbolic (AH) filling'' metrics to obtain conformal invariants. This is of interest also because of the AdS/CFT correspondence in physics relating gravitational theories on $N$ with conformal theories on $M$.
More specifically, if one can associate to a conformal class on $M$ a ``canonical'' AH filling, then the Riemannian invariants for the interior metric give conformal invariants of the boundary structure. For a basic example, consider on the open unit ball $B$ in $\mathbb{R}^n$ the hyperbolic metric $$h=\frac {4}{\rho^2}g_{euc}, \qquad \rho=1-|x|^2.$$ The conformal infinity is the conformal class of $g_{euc}\vert_{\partial B}$, the standard metric on the unit sphere. Graham \& Lee \cite{GrL91} gave the first general examples of AH Einstein metrics. The idea has been very useful especially due to the mentioned relation with the AdS/CFT correspondence \cite{Mal98} in physics. \subsection{Conformal infinities and Iwasawa sub-Riemannian geometries} \label{s:Iwasawa sub-Riem geom} The main references here are \cite{Biq1,Biq2}, where the sub-Riemannian structures and geometries on the spheres at infinity of the hyperbolic spaces were used as model spaces for a wide class of sub-Riemannian structures which we shall call Iwasawa sub-Riemannian geometries. As a motivation we start with a few examples based on the real, complex and quaternion hyperbolic cases. An explicit description of the octonionic hyperbolic plane and the ball model can be found in \cite{M}, while \cite{Biq1} is the reference for the corresponding conformal infinity. On the open unit ball $B$ in $\mathbb{C}^{n+1}$ consider the Bergman metric $$h\ =\ \frac {1}{\rho}g_{euc}\ +\ \frac{1-\rho}{\rho^2}\left ( (d\rho)^2+(Id\rho)^2\right ), \qquad \rho\ =\ 1-|x|^2.$$ Notice that as $\rho\rightarrow 0$ we have that $\rho\cdot h$ is finite only on $H$, the so called \emph{horizontal space} $H\ = \ Ker\, (I\,d\rho),$ which is the kernel of the contact form $\theta\ =\ I \,d\rho.$ The conformal infinity of $\rho\cdot h$ is the conformal class of a pseudohermitian CR structure defined by $H$ and $\theta$.
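The degeneration can be made explicit by multiplying the Bergman metric by $\rho$:
\begin{equation*}
\rho\, h\ =\ g_{euc}\ +\ \frac{1-\rho}{\rho}\left ( (d\rho)^2+(Id\rho)^2\right ),
\end{equation*}
so, as $\rho\rightarrow 0$, the limit is finite precisely on vectors annihilated by both $d\rho$ and $Id\rho$; on the boundary sphere, where $d\rho$ vanishes on tangent vectors, this is exactly the horizontal space $H=Ker\,(I\,d\rho)$, on which $\rho\, h$ restricts to $g_{euc}$.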
If we look for K\"ahler-Einstein deformations Cheng \& Yau \cite{ChYa80} showed that any smooth (in fact $C^2$) strictly pseudoconvex domain in $\mathbb{C}^{n+1}$ admits a unique complete K\"ahler-Einstein metric of Ricci curvature $-1$ which is asymptotic to the CR-structure of the boundary, see also \cite{MoYau83} for an extension to an arbitrary bounded domain of holomorhy. In the quaternion case, consider the open unit ball $B$ in $\mathbb{H}^{n+1}$ consider the hyperbolic metric $h=\frac {1}{\rho}g_{euc}+\frac{1}{4\rho^2}\left ( (d\rho)^2+(I_1d\rho)^2+(I_2d\rho)^2+(I_3d\rho)^2\right ).$ The conformal infinity is the conformal class of a quaternionic contact structure. In fact, $\rho h$ defines a conformal class of degenerate metrics with kernel $$H=\cap_{j=1}^3 Ker\, (I_j\,d\rho).$$ Biquard showed that the infinite dimensional family \cite{LeB91} of complete quaternionic-K\"ahler deformations of the quaternion hyperbolic metric have conformal infinities which provide an infinite dimensional family of examples of qc structures. Conversely, according to \cite{Biq1} every real analytic qc structure on a manifold $M$ of dimension at least eleven is the conformal infinity of a unique quaternionic-K\"ahler metric defined in a neighborhood of $M$. Finally, \cite{Biq1} considered CR and qc structures as boundaries of infinity of Einstein metrics rather than only as boundaries at infinity of K\"ahler-Einstein and quaternionic-K\"ahler metrics, respectively. In fact, \cite{Biq1} showed that in each of the three cases (complex, quaternionic, octoninoic) any small perturbation of the standard Carnot-Carath\'eodory structure on the boundary is the conformal infinity of an essentially unique Einstein metric on the unit ball, which is asymptotically symmetric. Various explicit examples of qc structures were constructed in \cite{AFISUV1}. 
In the above examples the geometry at the conformal infinity is asymptotic to the hyperbolic geometry of the corresponding symmetric space of noncompact type of real rank one $G/K$, see the paragraphs after Theorem \ref{t:Iwasawa is J2}. The corresponding geometries at infinity are conformal metrics, CR structures, quaternionic-contact structures or octonionic contact structures, as we define below following \cite{Biq1,Biq2}. The symmetric case belongs to the ``parabolic geometries'' modelled on $G/P$, where $P$ is a minimal parabolic subgroup of $G$, see \cite{CS09}. We mention another class of asymptotic geometries considered in \cite{ABiq10} which are no longer asymptotic to a symmetric space, but the model at infinity is a homogeneous Einstein space, which \emph{may vary from point to point} on the boundary at infinity. Such a construction is motivated by Heber \cite{He98} who showed that every deformation of the solvable group $S = NA$ carries a unique homogeneous Einstein metric. Thus, deformations of the nilpotent Lie algebra $\mathfrak n$ give a homogeneous Einstein metric on the corresponding solvable group $S$. Leaving the real case aside, we turn to the precise definition of the general (sub-Riemannian) geometric setting of the above constructions. Let $G$ be one of the groups $ U(n)$, $Sp(n)Sp(1)$ or $Spin(7)$, corresponding to the complex, quaternionic or octonionic cases, respectively, recalling the homogeneous models of the corresponding boundary spheres of the hyperbolic space, namely, $ S^{2n+1}=U(n+1)/U(n)$, $S^{4n+3}= Sp(n+1)Sp(1)/Sp(n)Sp(1)$ or $S^{15}=Spin(9)/Spin(7)$. Let $M$ be a manifold with a 1-form $\eta$ with values in $\mathbb{R},\mathbb{R}^3$ or $\mathbb{R}^7$, respectively, whose kernel $H = Ker\, \eta$, the so called \emph{horizontal distribution}, is of co-dimension $k=1,\ 3$ or $7$, respectively.
Following Biquard \cite{Biq1}, a Carnot-Carath\'{e}odory metric (positive definite symmetric two tensor) compatible with $d\eta$ is defined to be a metric $g$ on $H$ such that: \begin{enumerate}[i)] \item in the complex case, the restriction $\omega=\frac 12 d\eta\vert_H$ is a symplectic form on $H$ compatible with $g$, i.e., $\omega(\cdot,\cdot) = g(I\cdot, \cdot)$ where $I$ is an almost complex structure on $H$; \item in the quaternionic case, the three 2-forms $\omega_i=\frac 12 d\eta_i\vert_H$, $i=1,2,3$, on $H$ are the fundamental forms of a quaternionic structure compatible with $g$, i.e., $\omega_i(\cdot,\cdot) = g(I_i\cdot, \cdot)$ for almost complex structures $I_i$ satisfying the quaternionic commutation relations; \item in the octonionic case, the seven 2-forms $\omega_i=\frac 12 d\eta_i\vert_H$, $i=1,\dots,7$ on $H$ provide a $Spin(7)$ structure compatible with $g$, i.e., $\omega_i(\cdot,\cdot) = g(I_i\cdot, \cdot)$ for almost complex structures $I_i$ satisfying the octonionic commutation relations. \end{enumerate} We shall call the geometric structures above \emph{Iwasawa sub-Riemannian geometries} since at every point the osculating nilpotent group \cite{FS}, \cite{RS76} is isomorphic to the corresponding (non-degenerate) Iwasawa group. For simplicity, the above definition of CR, qc and octonionic contact structures requires the existence of a global 1-form defining $H$. The obstructions to global existence of such a form in the CR and qc cases are the first Stiefel-Whitney class and the first Pontryagin class of $M$, respectively. The complex case defines a strictly pseudoconvex almost CR manifold with a fixed pseudo-Hermitian structure, which is a CR structure when the integrability condition $[IX,IY]-[X,Y]\in H$ for $X,Y\in H$ holds. In the quaternionic and octonionic cases, a distribution $H$ for which a Carnot-Carath\'{e}odory $H$-metric exists will be called a quaternionic contact structure or an octonionic contact structure. 
The focus here will be mainly the CR and qc cases. Note that the topological dimensions of these manifolds are $2n+1$ and $4n+3$, respectively. The so called \emph{homogeneous dimension} of $M$ is $Q=m+ 2k$ where $m=\text{dim } H$ and $k=\text{codim } H$. We shall denote by $\, Vol_{\eta}$ the volume form $$\, Vol_{\eta}=\eta\wedge(d\eta)^n\ \quad \text{and }\quad \, Vol_{\eta}=\eta_1\wedge\eta_2\wedge\eta_3\wedge\Omega^n,$$ in the CR ($n=m/2$) and qc ($n=m/4$) cases respectively, where $\Omega=\omega_1\wedge\omega_1+\omega_2\wedge\omega_2+\omega_3\wedge\omega_3$ is the \emph{fundamental 4-form}. There is a Riemannian metric on $M$ obtained by extending in a natural way the horizontal metric $g$ to a true Riemannian metric, denoted by $h$, explicitly given by \begin{equation}\label{hmetric} h=g+\sum_{i=1}^k(\eta_i)^2. \end{equation} The Riemannian volume form is, up to a constant multiple, the just defined volume form $\, Vol_{\eta}$. For each of the considered geometries there is a canonically defined connection $\nabla=\nabla^{\eta}$ with torsion $T$. In the integrable CR case this is the Tanaka-Webster connection, see \cite{Ta62} and \cite{We2}. In the qc and octonionic cases this is the Biquard connection, see \cite{Biq1,Biq2} and \cite{D}. The curvature tensor of the corresponding canonical connection $\nabla$ and the associated (0,4) tensor, which is denoted with the same letter, are \begin{equation}\label{e:curv tensor} {R}(A,B)C=[\nabla_A, \nabla_B]C- \nabla_{[A,B]}C, \qquad R(A,B,C,D)\overset{def}{=}h(R(A,B)C,D). \end{equation} Let $\{e_1,\dots,e_{m}\}$, $m=\dim H$, be a local orthonormal basis of the \emph{horizontal space} $H$, $g(e_a,e_b)=\delta_{ab}$. The Ricci type and scalar curvature tensors are obtained by taking \emph{horizontal traces} \begin{equation} \label{qscs} Ric(A,B)=\sum_{b=1}^m R(e_b,A,B,e_b),\quad S=\sum_{a,b=1}^m R(e_b,e_a,e_a,e_b), \ A,B\in T(M), \end{equation} which manifests the sub-Riemannian nature of these tensors.
In the qc case these tensors are called \emph{qc-Ricci tensor} and \emph{qc-scalar curvature} tensor of the Biquard connection. The (horizontal) divergence of a horizontal vector field/one-form $\sigma\in\Lambda^1\, (H)$ defined by $\nabla^*\, \sigma\ =tr^g\nabla\sigma= \nabla \sigma(e_a,e_a)$ supplies the ``integration by parts'' formula over a compact $M$, \cite{Ta62}, \cite{IMV}, see also \cite{Wei}, \begin{equation} \label{div} \int_M (\nabla^*\sigma)\, Vol_{\eta}\ =\ 0. \end{equation} \subsubsection{The first eigenvalue on the sphere}\label{ss:eigenspace Iwasawa} \begin{thrm}\label{t:first eigenspace Iwasawa} The eigenspaces of the first eigenvalue of the sub-Laplacian of the canonical Iwasawa sub-Riemannian structures on the spheres at infinity of the hyperbolic spaces are the restrictions to the sphere of all real-linear functions in the corresponding Euclidean space. \end{thrm} The exact value of the eigenvalue depends on the normalization of the ``standard'' form $\eta$, which will be made explicit later. Of course, in the real case the eigenspace is the space of spherical harmonics of order one. Various proofs of Theorem \ref{t:first eigenspace Iwasawa} are possible. The simplest proof is a direct computation based on the explicit definition of the corresponding sub-Laplacians, see \cite{FrLi,BFM} for the complex case, \cite[Lemma 2.3]{IMV2} for the quaternionic, and \cite{ChLZh1} for the octonionic cases. Alternatively, one can relate the sub-Laplacian to the corresponding Laplace-Beltrami operator on the sphere, see \eqref{obsa} and \eqref{llex} for the complex and quaternion cases. Finally, the result follows from an abstract approach as in \cite{ACD} where the corresponding ``spherical harmonics'' are studied.
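Before turning to the Yamabe problem, where $Q$ enters through the critical exponent, it is useful to record the homogeneous dimension $Q=m+2k$ in the three model cases ($m=2n,\ k=1$; $m=4n,\ k=3$; $m=8,\ k=7$):
\begin{equation*}
Q_{\mathrm{CR}}\ =\ 2n+2,\qquad Q_{\mathrm{qc}}\ =\ 4n+6,\qquad Q_{\mathrm{oct}}\ =\ 8+2\cdot 7\ =\ 22.
\end{equation*}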
\subsubsection{The Yamabe problem on Iwasawa sub-Riemannian manifolds}\label{ss:Yamabe Iwasawa mnfld} \begin{dfn}\label{d:sub-Riemannian conf transf} The ``conformal'' class $[\eta]$ consists of all 1-forms $\bar\eta=\phi^{4/(Q-2)} \Psi\eta$ for a smooth positive function $\phi$ and $\Psi\in SO(k)$ with smooth functions as entries. \end{dfn} In the CR case $\Psi\equiv 1$, while in the qc case $\Psi$ is an $SO(3)$ matrix with smooth functions as entries. We note that the canonical connection is independent of $\Psi$, but depends on $\phi$, which brings us to the Yamabe type problems. The Yamabe functional is \begin{equation}\label{e:Yamabe functional def} \Upsilon_{[\eta]} (\phi) = \left ( \int_M\Bigl(4\frac {Q+2}{Q-2} \lvert \nabla \phi \rvert^2 + S\, \phi^2\Bigr) Vol_\eta \right ) \Big / \left ( \int_M \phi^{2^*}\ Vol_\eta \right )^{2/2^*}, \end{equation} where $2^*=2Q/(Q-2)$, $\nabla=\nabla^\eta$ is the connection of $\eta$, $S$ is the scalar curvature \eqref{qscs} of $(M,\, \eta)$ and $|\nabla \phi|=\left (\sum_{a=1}^m (d\phi(e_a))^2 \right)^{1/2}$ is the length of the horizontal gradient. It will be useful to introduce the functionals \begin{equation}\label{e:E and N functional} \mathcal{E}_{\eta}(\phi)\overset{def}{=}\int_{M} \Bigl(4\frac {Q+2}{Q-2}\ \lvert \nabla^{\eta} \phi\rvert^2\ +\ {S}\, \phi^2\Bigr)\, Vol_{\eta},\hskip.4in \mathcal{N}_{\eta}(\phi)=\left ( \int_{M} \phi^{2^*}\, Vol_{\eta}\right)^{2/2^*}, \end{equation} hence the Yamabe functional can be written as $\Upsilon(\phi)= \mathcal{E}(\phi) / \mathcal{N}(\phi)$ (dropping the subscript $\eta$ when there is no confusion). The \emph{Yamabe constant} is \begin{equation}\label{e:Yamabe constant Iwasawa} \Upsilon(M,[\eta]) = \inf \{ \Upsilon_{[\eta]}(\phi):\, \phi\in \mathcal{C}^\infty (M) \}= \inf \{\mathcal{E}_{\eta}(\phi):\, \mathcal{N}_{\eta}(\phi)=1,\ \phi\in \mathcal{C}^\infty (M) \}. \end{equation} In the above notation we tacitly introduced $[\eta]$ as a subscript.
The reason for this notation is that the Yamabe functional is conformally invariant, which follows from the formulas relating the (sub-Riemannian) scalar curvatures of the connections associated to $\eta$ and $\bar\eta$, see below, together with the formula for the change of the volume form, \begin{equation}\label{e:vol conf change} \, Vol_{\bar\eta}=\phi^{2^*}\, {Vol}_{\eta}. \end{equation} Finding the Yamabe constant in the case of the standard Iwasawa sub-Riemannian structures on the unit spheres is equivalent to the problem of determining the best constant in the $L^2$ Folland \& Stein \cite{FS} Sobolev type embedding inequality on the corresponding Heisenberg group. As noted earlier, the best constant in the $L^2$ Folland \& Stein inequality together with the minimizers were determined recently in \cite{FrLi,IMV2,ChLZh1} by the method of Frank \& Lieb \cite{FrLi}, see also \cite{BFM}. Nevertheless, this simpler approach does not yield the conjectured uniqueness (up to an automorphism) of the minimizers in the case of the spheres. Finding the Yamabe constant is closely related to the \emph{Yamabe problem}, which seeks all Iwasawa sub-Riemannian structures of constant scalar curvature conformal to a given structure $\eta$. In fact, taking the conformal factor in the form $\bar\eta=\phi^{4/(Q-2)}\eta$ as we did above, a calculation (done separately for each of the cases) gives the following \emph{Yamabe equation}, \begin{equation}\label{e:conf Yamabe} \mathcal{L} \phi\overset{def}{=} 4\frac {Q+2}{Q-2}\ \triangle \phi -\ S\, \phi \ =\ - \ \overline{S}\,\phi^{2^*-1}, \end{equation} where $\triangle $ is the horizontal sub-Laplacian, $\triangle \phi\ =\ tr^g_H\,(\nabla d\phi)$, and $S$ and $\overline{S}$ are the scalar curvatures of the canonical connections associated to $\eta$ and $\bar\eta$. A natural question is to find all solutions of the Yamabe equation \eqref{e:conf Yamabe}.
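For later reference, the critical exponent $2^*=2Q/(Q-2)$ satisfies the elementary identities
\begin{equation*}
2^*-1\ =\ \frac{Q+2}{Q-2},\qquad 2^*-2\ =\ \frac{4}{Q-2},\qquad \frac{2}{2^*}\ =\ \frac{Q-2}{Q}.
\end{equation*}
In the CR case, for instance, where $Q=2n+2$, the conformal change $\bar\eta=\phi^{4/(Q-2)}\eta$ multiplies the volume form $\eta\wedge(d\eta)^n$ by $\phi^{4(n+1)/(Q-2)}=\phi^{2^*}$ (the terms of $d\bar\eta$ containing $d\phi$ wedge to zero against $\bar\eta$), which is the exponent appearing in \eqref{e:vol conf change}.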
As usual the two fundamental problems are related by noting that on a compact manifold $M$ with a fixed conformal class $[\eta]$ the Yamabe equation characterizes the non-negative extremals of the Yamabe functional. The operator $\mathcal{L}$ in \eqref{e:conf Yamabe} is the so called conformal sub-Laplacian. Using the divergence formula \eqref{div} we can write equation \eqref{e:conf Yamabe} in the form \begin{equation}\label{e:conf Yamabe volume} \phi^{-1}v\, \mathcal{\bar L}(\phi^{-1}v)\ Vol_{\bar\eta}=v\mathcal{ L}(v)\ Vol_{\eta}, \end{equation} for any $v\in C^\infty(M)$, which makes explicit the conformal invariance. Here $\mathcal{\bar L}$ denotes the conformal sub-Laplacian associated to the canonical connection $\overline \nabla$ of $\bar\eta$. \subsection{CR Manifolds}\label{ss:CR geometry} A CR manifold is a smooth manifold $M$ of real dimension $2n+1$, with a fixed $n$-dimensional complex sub-bundle $\mathcal{H}$ of the complexified tangent bundle $\mathbb{C}TM$ satisfying $\mathcal{H} \cap \overline{\mathcal{H}}=0$ and $[ \mathcal{H},\mathcal{H}]\subset \mathcal{H}$. If we let $H=Re\, (\mathcal{H}\oplus\overline{\mathcal{H}})$, the real sub-bundle $H$ is equipped with a formally integrable almost complex structure $J$. We assume that $M$ is oriented and that there exists a globally defined compatible contact form $\eta$ such that the \emph{horizontal space} is given by ${H}=Ker\,\eta.$ In other words, the hermitian bilinear form $g(X,Y)= 1/2 \,d\eta(JX,Y)$ is non-degenerate. The CR structure is called strictly pseudoconvex if $g$ is a positive definite tensor on $H$. For brevity we shall frequently use the term \emph{CR manifold} to refer to a strictly pseudoconvex pseudohermitian manifold. In other words, unless specified otherwise a CR manifold will be an \emph{integrable strictly pseudoconvex CR manifold with a fixed pseudohermitian structure}.
The almost complex structure $J$ is formally integrable in the sense that $([JX,Y]+[X,JY])\in {H}$ and the Nijenhuis tensor vanishes, $N^J(X,Y)=[JX,JY]-[X,Y]-J[JX,Y]-J[X,JY]=0.$ A CR manifold $(M,\eta,g)$ with a fixed compatible contact form $\eta$ is called \emph{a pseudohermitian manifold}. In this case the 2-form $d\eta_{|_{{H}}}\overset{def}{=}2\omega$ is called the fundamental form. The contact form whose kernel is the horizontal space $H$ is determined up to a conformal factor, i.e., $\bar\eta=\nu\eta$ for a positive smooth function $\nu$ defines another pseudohermitian structure, called pseudo-conformal to the original one. A Riemannian metric is defined in the usual way, written with a slight imprecision as $h=g +\eta^2$. The vector field $\xi$ dual to $\eta$ with respect to $g$ and satisfying $\xi\lrcorner d\eta=0$ is called the Reeb vector field. \subsubsection{Invariant decompositions} As usual, any endomorphism $\Psi$ of $H$ can be decomposed with respect to the complex structure $J$ uniquely into its $U(n)$-invariant $(2,0)+(0,2)$ and $(1,1)$ parts. In short we will denote these components by $\Psi_{[-1]}$ and $\Psi_{[1]}$, respectively. Furthermore, we shall use the same notation for the corresponding two-tensor, $\Psi(X,Y)=g(\Psi X,Y)$. Explicitly, $\Psi=\Psi_{[1]}+\Psi_{[-1]}$, where \begin{equation}{\label{compc}} \Psi_{[1]}(X,Y)=\frac {1}{2}\left [ \Psi(X,Y)+\Psi(JX,JY)\right ],\qquad \Psi_{[-1]}(X,Y)=\frac {1}{2}\left [ \Psi(X,Y)-\Psi(JX,JY)\right ]. \end{equation} The above notation is justified by the fact that the $(2,0)+(0,2)$ and $(1,1)$ components are the projections on the eigenspaces of the operator $\Upsilon =\ J\otimes J$, $(\Upsilon \Psi) (X,Y)\overset{def}{=}\Psi (JX,JY)$, corresponding, respectively, to the eigenvalues $-1$ and $1$. Note that both the metric $g$ and the 2-form $\omega$ belong to the $[1]$-component, since $g(X,Y)=g (JX,JY)$ and $\omega(X,Y)=\omega (JX,JY)$.
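The eigenvalue claim can be verified in one line (a direct check using $J^2=-\mathrm{id}$, included here for the reader's convenience):

```latex
(\Upsilon\,\Psi_{[\pm1]})(X,Y)=\Psi_{[\pm1]}(JX,JY)
 =\frac12\big[\Psi(JX,JY)\pm\Psi(J^2X,J^2Y)\big]
 =\pm\,\Psi_{[\pm1]}(X,Y),
```

since $\Psi(J^2X,J^2Y)=\Psi(-X,-Y)=\Psi(X,Y)$; thus $\Psi_{[1]}$ and $\Psi_{[-1]}$ are indeed the $(+1)$- and $(-1)$-eigenspaces of $\Upsilon$.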
Furthermore, the two components are orthogonal to each other with respect to $g$. The Tanaka-Webster connection \cite{Ta62,We2,W} is the unique linear connection $\nabla$ with torsion $T$ preserving a given pseudohermitian structure, i.e., it has the properties that the almost complex structure $J$ and the contact form are parallel, $\nabla\xi=\nabla J=\nabla\eta=\nabla g=0$, and the torsion tensor is of pure type, i.e., for $X,\, Y\in H$ we have \begin{gather} \notag T(X,Y)=d\eta(X,Y)\xi=2\omega(X,Y)\xi,\\ \label{torha} T(\xi,X)\in {H}, \quad g(T(\xi,X),Y)=g(T(\xi,Y),X)=-g(T(\xi,JX),JY). \end{gather} The (Webster) torsion $A$ of the pseudohermitian manifold is the symmetric tensor $A\overset{def}{=}T(\xi,.):H\rightarrow H$. Clearly, equation \eqref{torha} shows that $A\in \Psi_{[-1]}$. It is also well known \cite{Ta62} that $A$ is the obstruction for a pseudohermitian manifold to be Sasakian. We recall that a contact manifold $(M,\eta)$ is Sasakian if its Riemannian cone $C=M\times \mathbb R^+$ with metric $t^2h+ dt^2$ is K\"ahler (see e.g. \cite{Bl75,BGN}). \subsubsection{Curvature tensors of the Tanaka-Webster connection} The curvature tensors are defined in a standard fashion using \eqref{e:curv tensor} and \eqref{qscs}, noting again that traces are taken only on the horizontal space. The Ricci 2-form is defined by $\rho (A,B)=\frac 12\,R(A,B,e_a,Je_a)$. The horizontal part of the Ricci 2-form is a (1,1)-form with respect to $J$ and the first Bianchi identity implies \begin{equation}\label{e:CR Ricci 2-from} \rho (X,JY)=\frac 12\,R(e_a,Je_a,X,JY). \end{equation} The tensor $\rho(X,JY)\in \Psi_{[1]}$ is symmetric and is frequently also called the Webster Ricci tensor. The CR Ricci tensor has the following type decomposition into $J$-invariant and skew-invariant parts \cite{Ta62}, \cite{We2}, see also \cite{DT} and \cite[Chapter 7]{IV2}, \begin{equation}\label{e:CR Ric type decomp} Ric(X,Y)=\rho(JX,Y)+2(n-1)A(JX,Y).
\end{equation} It is well known that a pseudohermitian manifold with a flat Tanaka-Webster connection is locally isomorphic to the (complex) Heisenberg group. For $n>1$ the vanishing of the horizontal part of the curvature of the Tanaka-Webster connection implies the vanishing of the whole curvature. If $n=1$, in addition to the vanishing of the horizontal part of the curvature, one also needs the vanishing of the pseudohermitian torsion in order to have zero curvature. \subsubsection{The Heisenberg group}\label{ss:Heis group} Given the ubiquitous role of the Heisenberg group $\boldsymbol {G\,(\mathbb{C})}\equiv\mathbb{C}^n\times \mathbb R$ in analysis, and since it is the flat model of the considered CR geometries (the Tanaka-Webster connection coincides with the invariant flat connection on the group), we shall write explicitly a number of formulas in this special setting. Some of these formulas will be made explicit in Section \ref{ss:qc geometry} for quaternionic contact structures, in which case the quaternionic Heisenberg group will play the role of the flat model space. The group $\boldsymbol {G\,(\mathbb{C})}$ arises as the nilpotent part in the Iwasawa decomposition of the complex hyperbolic space. Thus, $\boldsymbol {G\,(\mathbb{C})}$ \index{Heisenberg group $\boldsymbol {G\,(\mathbb{C})}$}is a Lie group whose underlying manifold is $\mathbb{C}^n \times \mathbb{R}$ with group law given by \eqref{e:H-type Iwasawa groups}, where for $z, z' \in \mathbb{C}^n$ we let $z\cdot z' = \sum_{j=1}^n z_j {z}'_j$.
A (real) basis for the Lie algebra of left-invariant vector fields on $\boldsymbol {G\,(\mathbb{C})}$ is given by \begin{equation}\label{e:basis for Hn} X_j = \frac{\partial}{\partial x_j} + 2 y_j \frac{\partial}{\partial t}, \quad X_{n+j}\equiv Y_j = \frac{\partial}{\partial y_j} - 2 x_j \frac{\partial}{\partial t},\quad \xi=2\frac{\partial}{\partial t}, \quad j=1,...,n, \end{equation} with corresponding contact form \[\tilde\Theta=\frac 12dt+\sum_{j=1}^n(x_jdy_j-y_jdx_j)=\frac 12dt+\sum_{j=1}^n \Im(\bar z_jdz_j).\] Here we have identified $z=x+iy \in \mathbb{C}^n$ with the real vector $(x,y)\in \mathbb{R}^{2n}$. Since $[X_j, Y_k] = - 4 \delta_{jk} \frac{\partial}{\partial t}$, the Lie algebra is generated by the system $X=\{X_1,...,X_{2n}\}$. The sub-Laplacian is $\mathcal{L} = \sum_{j=1}^{2n} X_j^2$, which is the real part of the Kohn complex Laplacian. In this case the exponential map is the identity and, as for any group of step two, we have the parabolic dilations $\delta_\lambda(z,t)=(\lambda z, \lambda^2 t)$. The corresponding homogeneous dimension is $Q= 2n + 2$. In regards to the theory of groups of Heisenberg type, cf. Section \ref{s:FS inequality}, some care has to be taken when defining the scalar product which turns $\boldsymbol {G\,(\mathbb{C})}$ into a group of Heisenberg type. For example, the standard inner product of $\mathbb{C}^n \times \mathbb{R}$, i.e., the inner product in which the basis of left-invariant vector fields given in \eqref{e:basis for Hn} is orthonormal, will not make the Heisenberg group $\boldsymbol {G\,(\mathbb{C})}$ a group of Heisenberg type.
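For the reader's convenience, the stated commutation relation can be verified directly from \eqref{e:basis for Hn} (a routine computation, not taken from the cited sources):

```latex
\begin{align*}
[X_j,Y_k]
 &= \Big(\frac{\partial}{\partial x_j} + 2 y_j \frac{\partial}{\partial t}\Big)
    \Big(\frac{\partial}{\partial y_k} - 2 x_k \frac{\partial}{\partial t}\Big)
  - \Big(\frac{\partial}{\partial y_k} - 2 x_k \frac{\partial}{\partial t}\Big)
    \Big(\frac{\partial}{\partial x_j} + 2 y_j \frac{\partial}{\partial t}\Big)\\
 &= -2\,\frac{\partial x_k}{\partial x_j}\,\frac{\partial}{\partial t}
    -2\,\frac{\partial y_j}{\partial y_k}\,\frac{\partial}{\partial t}
  \;=\; -4\,\delta_{jk}\,\frac{\partial}{\partial t},
\end{align*}
```

since all second-order terms cancel; similarly $[X_j,X_k]=[Y_j,Y_k]=0$, so the brackets of the horizontal fields span the one-dimensional center generated by $\partial/\partial t$, in accordance with the step-two structure behind the dilations $\delta_\lambda$.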
An orthonormal basis with respect to an H-type compatible metric is given by, $j=1,...,n$, \begin{equation}\label{e:H-type basis for Hn} X_j = \frac{\partial}{\partial x_j} + 2 y_j \frac{\partial}{\partial t}, \ X_{n+j}\equiv Y_j = \frac{\partial}{\partial y_j} - 2 x_j \frac{\partial}{\partial t},\ T=\frac {1}{4}\frac{\partial}{\partial t}, \end{equation} and the homogeneous gauge \eqref{Hgauge} is given by $N(z,t)\ =\ (|z|^4 + 16t^2)^{1/4}$, $|z|= \left (\sum_{j=1}^{n}(x_j^2+y_j^2)\right )^{1/2}$. \subsection{The CR sphere and the Cayley transform} The simplest CR manifolds are the three hyperquadrics in complex space, \cite{ChM}, \begin{gather} Q_{+}: \ r=|z_1|^2+\dots+|z_n|^2 + |w|^2=1,\qquad Q_{-}: \ \ r=|z_1|^2+\dots+|z_n|^2 - |w|^2=-1,\\ Q_{0}:\ \ r=|z_1|^2+\dots+|z_n|^2 -\Im(w)=0, \end{gather} where $(z_1,z_2,\dots z_n,w)\in \mathbb{C}^{n}\times \mathbb{C}$, with corresponding contact forms ${\tilde\eta}\overset{def}{=}\eta_{+}$, $\eta_{-}$ and $\tilde\Theta{=}\eta_0$ equal to $-i\partial r$, which define strictly pseudoconvex pseudohermitian structures. Of course, these are the ``standard'' (up to a multiplicative factor depending on the reference) pseudohermitian structures on the sphere $S^{2n+1}$, the hyperboloid, and the Heisenberg group $\boldsymbol {G\,(\mathbb{C})}$, the latter identified with the boundary of the Siegel domain via the map $(z,t)\mapsto (z,t+i|z|^2)$. A transformation mapping $Q_{-}$ onto $Q_{+}$ minus a curve is given by $w=1/w'$ and $z_j=z_j'/w'$. On the other hand, the transformation $$\mathcal{C}(Z,W)=\Big(\frac {iZ}{1-W}, i\frac {1+W}{1-W} \Big), \qquad \text{with inverse}\qquad \mathcal{C}^{-1}(z,w)=\Big(\frac{2z}{ i+w},\frac{w-i}{w+i}\Big),$$ maps the sphere $S^{2n+1}\setminus(0,0,...,0,1)$ onto $\boldsymbol {G\,(\mathbb{C})}$. The map $\mathcal{C}$ is the Cayley transform (with a pole at $(0,0,...,0,1)$).
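As a quick sanity check (ours, not part of the cited material), the homogeneous gauge is indeed homogeneous of degree one with respect to the parabolic dilations introduced above:

```latex
N\big(\delta_\lambda(z,t)\big)
 = \big(|\lambda z|^4 + 16(\lambda^2 t)^2\big)^{1/4}
 = \big(\lambda^4|z|^4 + 16\,\lambda^4 t^2\big)^{1/4}
 = \lambda\, N(z,t), \qquad \lambda>0.
```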
These transformations clearly preserve the CR structure, since they are restrictions of holomorphic maps, but they do not preserve the contact forms and are in fact pseudoconformal pseudohermitian maps, \[\tilde\Theta=\frac 12dw-i\bar z \cdot dz=\frac {1}{|1-W|^2}{\tilde\eta}, \qquad {\tilde\eta}=-i \left (\overline WdW+ \bar Z\cdot dZ \right). \] \subsubsection{CR conformal flatness} A fundamental fact characterizing CR conformal flatness is the Cartan-Chern-Moser theorem \cite{Car,ChM,W}. A proof based on the classical approach used by H. Weyl in Riemannian geometry (see e.g. \cite{Eis}) can be found in \cite{IV2}, see also \cite{IVZ}. \begin{thrm}[\cite{Car,ChM,W}]\label{crmain} Let $(M,\eta,g)$ be a $(2n+1)$-dimensional non-degenerate pseudohermitian manifold. If $n>1$, then $(M,\eta,g)$ is locally pseudoconformally equivalent to a hyperquadric in $\mathbb{C}^{n+1}$ if and only if the Chern-Moser tensor vanishes, $\mathcal{S}=0$, \cite{ChM,W}. In the case $n=1$, $(M,\eta,g)$ is locally pseudoconformally equivalent to a hyperquadric in $\mathbb{C}^{2}$ if and only if the tensor $F^{car}$ given below vanishes, $F^{car}=0$, \cite{Car}. \end{thrm} Here, the Chern-Moser tensor $\mathcal{S}$ is a pseudoconformally invariant tensor, i.e., if $\phi$ is a smooth positive function and $\bar\eta=\phi\eta$, then $\mathcal{S}_{\bar\eta}=\phi\mathcal{S}_{\eta}$. The Chern-Moser tensor $\mathcal{S}$ \cite{ChM} is determined completely by the (1,1)-part of the curvature and the Ricci 2-form, \begin{multline}\label{crpinv} \mathcal{S}(X,Y,Z,V) =\frac12\Big[R(X,Y,Z,V)+R(JX,JY,Z,V)\Big]\\ -\frac{S}{4(n+1)(n+2)}\Bigl[g(X,Z)g(Y,V)-g(Y,Z)g(X,V) +\Omega(X,Z)\Omega(Y,V)-\Omega(Y,Z)\Omega(X,V)+2\Omega(X,Y)\Omega(Z,V) \Bigr]\\ -\frac1{2(n+2)}\Big[g(X,Z)\rho(Y,JV)-g(Y,Z)\rho(X,JV)+g(Y,V)\rho(X,JZ)-g(X,V)\rho(Y,JZ)\Big]\\ -\frac1{2(n+2)}\Big[\Omega(X,Z)\rho(Y,V)-\Omega(Y,Z)\rho(X,V)+\Omega(Y,V)\rho(X,Z)-\Omega(X,V)\rho(Y,Z)\Big]\\ -\frac1{n+2}\Big[\Omega(X,Y)\rho(Z,V)+\Omega(Z,V)\rho(X,Y)\Big].
\end{multline} For $n=1$ the tensor $\mathcal{S}$ vanishes identically and the Cartan condition can be expressed by the vanishing of the \emph{Cartan tensor} $F^{car}$, a $[-1]$-type tensor defined on ${H}$ by \cite{IVZ,IV2} \begin{multline}\label{crtreeF} F^{car}(X,Y)=\nabla^2 S(X,JY)+\nabla^2S(Y,JX)+16(\nabla ^2_{Xe_a}A)(Y,e_a) +16(\nabla^2_{Ye_a}A)(X,e_a)+36SA(X,Y)\\ +48(\nabla^2_{e_aJe_a}A)(X,JY)+3g(X,Y)\nabla^2S(e_a,Je_a). \end{multline} \subsection{Quaternionic Contact Structures}\label{ss:qc geometry} Following Biquard \cite{Biq1, Biq2}, a $(4n+3)$-dimensional manifold $M^{4n+3}$ is quaternionic contact (qc) if we have: \begin{enumerate}[i)] \item a co-dimension three distribution $H$ which locally is the intersection of the kernels of three 1-forms on $M$, $H=\bigcap_{s=1}^3 Ker\, \eta_s$, $\eta_s\in \Gamma(M:T^*M)$; \item a 2-sphere bundle $\mathcal{Q}$ of ``almost complex structures'' locally generated by $I_s\,:H \rightarrow H,\quad I_s^2\ =\ -1$, satisfying $I_1I_2=-I_2I_1=I_3$; \item a ``metric'' tensor $g$ on $H$ such that $g(I_sX,I_sY)\ =\ g(X,Y)$ and $2g(I_sX,Y) = d\eta_s(X,Y)$, for all $X,Y\in H$. \end{enumerate} We let $\omega_s(X,Y)=g(I_sX,Y)$ be the associated 2-forms. The ``canonical'' Biquard connection is the unique linear connection defined by the following theorem \cite{Biq1}. \begin{thrm}[ \cite{Biq1}] If $(M,\eta)$ is a qc manifold and $n>1$, then there exists a unique subbundle $V$ complementary to $H$, $TM=H\oplus V$, and a linear connection $\nabla$ with the properties that $V$, $H$, $g$ and the 2-sphere bundle $\mathcal{Q}$ are parallel and the torsion $T$ of $\nabla$ satisfies: a) for $ X, Y\in H, \quad T(X,Y)=-[X,Y]|_V\in V$;\hskip.2truein b) for $ \xi\in V,\ X\in H$, $T_\xi(X)\equiv T(\xi,X)\in H$ and $ T_\xi\in (sp(n)+sp(1))^{\perp}$.
\end{thrm} Biquard also showed that the ``vertical'' space $V$ is generated by the Reeb vector fields $\{\xi_1,\xi_2,\xi_3\}$ determined by $ \eta_s(\xi_k)=\delta_{sk}, \quad (\xi_s\lrcorner d\eta_s)_{|H}=0, \quad (\xi_s\lrcorner d\eta_k)_{|H}=-(\xi_k\lrcorner d\eta_s)_{|H}. $ If the dimension of $M$ is seven, $n=1$, the Reeb vector fields might not exist. D. Duchemin \cite{D} showed that if we assume their existence, then there is a connection as before. Henceforth, by a qc structure in dimension $7$ we mean a qc structure satisfying the Reeb conditions. Note that the extended Riemannian metric $h$ given by \eqref{hmetric}, as well as the Biquard connection, do not depend on the action of $SO(3)$ on $V$, but both change if $\eta$ is multiplied by a conformal factor. \subsubsection{Invariant decompositions} As usual, any endomorphism $\Psi$ of $H$ can be decomposed with respect to the hypercomplex structure $I_s$, $s=1,2,3$, uniquely into its two $Sp(n)Sp(1)$-invariant parts. In short we will denote these components by $\Psi_{[-1]}$ and $\Psi_{[3]}$, respectively. Furthermore, we shall use the same notation for the corresponding two-tensor, $\Psi(X,Y)=g(\Psi X,Y)$. Explicitly, $\Psi=\Psi_{[3]}+\Psi_{[-1]}$, where \begin{equation}{\label{comp}} \begin{aligned} \Psi_{[3]}(X,Y)=\frac {1}{4}\left [ \Psi(X,Y)+\Psi(I_1X,I_1Y)+\Psi(I_2X,I_2Y)+\Psi(I_3X,I_3Y)\right ],\\ \Psi_{[-1]}(X,Y)=\frac {1}{4}\left [3 \Psi(X,Y)-\Psi(I_1X,I_1Y)-\Psi(I_2X,I_2Y)-\Psi(I_3X,I_3Y)\right ]. \end{aligned} \end{equation} The above notation is justified by the fact that the $[3]$ and $[-1]$ components are the projections on the eigenspaces of the operator $\Upsilon =\ I_1\otimes I_1+I_2\otimes I_2+I_3\otimes I_3$, $(\Upsilon \Psi) (X,Y)\overset{def}{=}\Psi(I_1X,I_1Y)+\Psi(I_2X,I_2Y)+\Psi(I_3X,I_3Y)$, corresponding, respectively, to the eigenvalues $3$ and $-1$.
Note that the metric $g$ belongs to the $[3]$-component, while the 2-forms $\omega_s$, $s=1,2,3$, belong to the $[-1]$-component. Furthermore, the two components are orthogonal to each other with respect to $g$. For $n=1$ the $[3]$-component is 1-dimensional, $\Psi_{[3]}=\frac{tr\Psi}4g$. \subsubsection{Curvature of a Quaternionic Contact Structure} The curvature tensors are defined in a standard fashion using \eqref{e:curv tensor} and \eqref{qscs}. In addition we define the qc Ricci 2-forms \begin{equation}\label{neww} {\rho_s(A,B)=\frac{1}{4n}R(A,B,e_a,I_se_a)}, \quad s=1,2,3. \end{equation} Biquard \cite{Biq1} showed that $T_{\xi _j}=T^0_{\xi_j}\ +\ I_j U$, where $T^0_{\xi_j}$ is symmetric, while $I_j U$ is a skew-symmetric map and $U\in \Psi_{[3]}$, $I_sU=UI_s$, $s=1,2,3$. Further properties were found in \cite{IMV}. A remarkable fact, \cite[Theorem 3.12]{IMV}, is that (unlike the CR case!) the torsion endomorphism determines the (horizontal) qc-Ricci tensor and the (horizontal) qc-Ricci forms of the Biquard connection. For this we also need the torsion-type tensor $T^0\overset{def}{=} T^0_{\xi _1}\, I_1+T^0_{\xi _2}\, I_2+T^0_{\xi _3}\, I_3\in \Psi_{[-1]}$ introduced in \cite{IMV}. \begin{thrm}[\cite{IMV}] On a qc manifold $(M,\eta)$ we have \begin{equation}\label{sixtyfour} \begin{aligned} & Ric(X,Y) \ =\ (2n+2)T^0(X,Y) +(4n+10)U(X,Y)+\frac{S}{4n}g(X,Y),\\ &\rho_s(X,I_sY) \ =\ -\frac12\Bigl[T^0(X,Y)+T^0(I_sX,I_sY)\Bigr]-2U(X,Y)-\frac{S}{8n(n+2)}g(X,Y). \end{aligned} \end{equation} \end{thrm} We say that $M$ is a \emph{qc-Einstein manifold} if the horizontal Ricci tensor is proportional to the horizontal metric $g$, \begin{equation}\label{qcA}Ric(X,Y)=\frac{S}{4n}g(X,Y), \end{equation} which, taking into account \eqref{sixtyfour}, is equivalent to $T^0=U=0$. Furthermore, by \cite[Theorem 4.9]{IMV} and \cite[Theorem 1.1]{IMV3}, any qc-Einstein structure has constant qc-scalar curvature.
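The membership claims for $g$ and $\omega_s$ can be checked directly from the quaternion relations $I_1I_2=-I_2I_1=I_3$ and the compatibility $g(I_sX,I_sY)=g(X,Y)$; we sketch the computation (a routine verification of ours) for $g$ and $\omega_1$:

```latex
\begin{align*}
(\Upsilon g)(X,Y) &= \sum_{s=1}^3 g(I_sX,I_sY) = 3\,g(X,Y),\\
(\Upsilon \omega_1)(X,Y) &= \omega_1(I_1X,I_1Y)+\omega_1(I_2X,I_2Y)+\omega_1(I_3X,I_3Y)\\
 &= \omega_1(X,Y)-\omega_1(X,Y)-\omega_1(X,Y) = -\,\omega_1(X,Y),
\end{align*}
```

where, for instance, $\omega_1(I_2X,I_2Y)=g(I_3X,I_2Y)=-g(X,I_3I_2Y)=g(X,I_1Y)=-\omega_1(X,Y)$, using $I_3I_2=-I_1$ and the skew-symmetry $g(I_sX,Y)=-g(X,I_sY)$; the cases of $\omega_2$, $\omega_3$ are analogous.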
It should be mentioned that qc-Einstein structures have proved useful in the construction of metrics with special holonomy \cite{AFISUV1} and in heterotic string theory, see Section \ref{s:strings} for some details on the latter. Such applications are possible due to the following properties/characterizations of the qc-Einstein structures, see \cite[Theorem 1.3]{IV2} and \cite[Theorem 4.4.4]{IV3} for the $S\not=0$ case and \cite[Theorem 5.1]{IMV3} for the $S=0$ case, respectively. \begin{thrm}\label{str_eq_mod_th} Let $M$ be a qc manifold. The following conditions are equivalent: \begin{enumerate}[a)] \item $M$ is a qc-Einstein manifold; \item locally, the given qc-structure is defined by 1-forms $(\eta_1,\eta_2,\eta_3)$ such that for some constant $S$ we have \begin{equation}\label{str_eq_mod} d\eta_i=2\omega_i+\frac{S}{8n(n+2)}\eta_j\wedge\eta_k; \end{equation} \item locally, the given qc-structure is defined by 1-forms $(\eta_1,\eta_2,\eta_3)$ such that the corresponding connection 1-forms vanish on $H$; in fact, $\nabla I_i=-\alpha_j\otimes I_k+\alpha_k\otimes I_j$, $\nabla\xi_i=-\alpha_j\otimes\xi_k+\alpha_k\otimes\xi_j$ with $\alpha_s=-\frac{S}{8n(n+2)}\eta_s$. \end{enumerate} \end{thrm} In particular, in the positive scalar curvature case the qc-Einstein manifolds are exactly the locally 3-Sasakian manifolds, i.e., for every $p\in M$ there exist an open neighborhood $U$ of $p$ and a matrix $\Psi\in \mathcal{C}^\infty(U:SO(3))$, s.t., $\Psi\cdot\eta$ is 3-Sasakian. A $(4n + 3)$-dimensional (pseudo) Riemannian manifold $(M,g)$ is 3-Sasakian if the cone metric is a (pseudo) hyper-K\"ahler metric \cite{BG,BGN}. We note explicitly that in some questions it is useful to define 3-Sasakian manifolds in the wider sense of positive (the usual terminology) or negative 3-Sasakian structures, cf. \cite[Section 2]{IV2} and \cite[Section 4.4.1]{IV3}, where the ``negative'' 3-Sasakian term was adopted in the case when the Riemannian cone is hyper-K\"ahler of signature $(4n,4)$.
As is well known, a positive 3-Sasakian manifold is Einstein with a positive Riemannian scalar curvature \cite{Kas} and, if complete, it is compact with finite fundamental group due to Myers' theorem. The negative 3-Sasakian structures are Einstein with respect to the corresponding pseudo-Riemannian metric of signature $(4n,3)$ \cite{Kas,Tan}. In this case, by a simple change of signature, we obtain a positive definite $nS$ metric on $M$, \cite{Tan,Jel,Kon}. \subsubsection{The quaternionic Heisenberg Group $\boldsymbol {G\,(\mathbb{H})}$}\label{ss:qc Heisenberg} The basic example of a qc manifold is provided by the quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$, on which we introduce coordinates by regarding $\boldsymbol {G\,(\mathbb{H})}=\mathbb{H}^n\times\text{Im}\,\mathbb{H}$, $(q,\omega)\in \boldsymbol {G\,(\mathbb{H})}$, so that the multiplication takes the form \eqref{e:H-type Iwasawa groups}. The ``\emph{standard}'' qc contact form in quaternion variables is $ \tilde\Theta= (\tilde\Theta_1,\ \tilde\Theta_2, \ \tilde\Theta_3)= \frac 12\ (d\omega - q \cdot d\bar q + dq\, \cdot\bar q) $ or, using real coordinates, \begin{gather}\label{e:Heisenbegr ctct forms}\notag \tilde\Theta_1 = \frac 12\ dx- x^\alpha d t^\alpha+ t^\alpha d x^\alpha-z^\alpha d y^\alpha + y^\alpha d z^\alpha, \quad \tilde\Theta_2= \frac 12\ dy- y^\alpha d t^\alpha+ z^\alpha d x^\alpha+ t^\alpha d y^\alpha - x^\alpha d z^\alpha,\\ \tilde\Theta_3= \frac 12\ dz- z^\alpha d t^\alpha- y^\alpha d x^\alpha+ x^\alpha d y^\alpha + t^\alpha d z^\alpha.
\end{gather} The left-invariant horizontal vector fields are \begin{equation}\label{qHh} \begin{aligned} T_{\alpha} = \dta {} +2x^{\alpha}\dx {}+2y^{\alpha}\dy {}+2z^{\alpha}\dz {} , \ X_{\alpha} = \dxa {}-2t^{\alpha}\dx {}-2z^{\alpha}\dy {}+2y^{\alpha}\dz {} ,\\ Y_{\alpha} = \dya {} +2z^{\alpha}\dx {}-2t^{\alpha}\dy {}-2x^{\alpha}\dz {}, \ Z_{\alpha} = \dza {} -2y^{\alpha}\dx {}+2x^{\alpha}\dy {}-2t^{\alpha}\dz {}\,, \end{aligned} \end{equation} with corresponding sub-Laplacian \begin{equation}\label{e:qc Heis sub-laplacian} \triangle_{\tilde\Theta} u=\sum_{\alpha=1}^n \left (T_{\alpha}^2u+ X_{\alpha}^2u+Y_{\alpha}^2u+Z_{\alpha}^2u \right ). \end{equation} The (left-invariant vertical) Reeb fields are $\xi_1=2\dx {}$, $\xi_2=2\dy {}$, $\xi_3=2\dz {}$. On $\boldsymbol {G\,(\mathbb{H})}$ the left-invariant flat connection is the Biquard connection, hence $\boldsymbol {G\,(\mathbb{H})}$ is a flat qc structure. It should be noted that the latter property characterizes (locally) the qc structure ${\tilde\Theta}$ by \cite[Proposition 4.11]{IMV}, but in fact the vanishing of the curvature on the horizontal space is enough \cite[Proposition 3.2]{IV}. Thus, by \cite[Proposition 3.2]{IV}, a quaternionic contact manifold is locally isomorphic to the quaternionic Heisenberg group exactly when the curvature of the Biquard connection restricted to $H$ vanishes, $R_{|_H}=0$. \subsubsection{Standard qc-structure on the 3-Sasakian sphere and the qc Cayley transform}\label{ss:qc sphere} The second example is the ``standard'' qc-structure on the 3-Sasakian sphere. The ``standard'' qc 3-form on the sphere $S^{4n+3}= \{\abs{q}^2+\abs{p}^2=1 \}\subset \mathbb{H}^n\times\mathbb{H}$ is \begin{equation}\label{e:stand cont form on S} \tilde\eta\ =\ dq\cdot \bar q\ +\ dp\cdot \bar p\ -\ q\cdot d\bar q\ -\ p\cdot d\bar p.
\end{equation} We identify $\boldsymbol {G\,(\mathbb{H})}$ with the boundary $\Sigma$ of a Siegel domain in $\mathbb{H}^n\times\mathbb{H}$, $ \Sigma\ =\ \{ (q',p')\in \mathbb{H}^n\times\mathbb{H}\ :\ \text{Re} {\ p'}\ =\ \abs{q'}^2 \}, $ by using the map $(q', \omega')\mapsto (q',\abs{q'}^2 - \omega')$. The Cayley transform, $\mathcal{C}:S^{4n+3}\setminus{\{\text{pt.}\}}\rightarrow \Sigma$, is \begin{equation*}\label{e:QC Cayley} (q', p')\ =\ \mathcal{C}\ \Big ((q, p)\Big)=( (1+p)^{-1} \ q , (1+p)^{-1} \ (1-p)). \end{equation*} By \cite[Section 8.3]{IMV} we have on $\boldsymbol {G\,(\mathbb{H})}$ \begin{equation}\label{e:Cayley transf ctct form group} \Theta\ \overset{def}{=}\ \lambda\ \cdot (\mathcal{C}^{-1})^*\, \tilde\eta\ \cdot \bar\lambda\ =\ \frac {8}{\abs{1+p'\, }^2}\, \tilde\Theta, \end{equation} where $\lambda\ = {\abs {1+p\,}}\, {(1+p)^{-1}}$ is a unit quaternion. Alternatively, on the sphere this can be written as \begin{equation}\label{e:Cayley transf ctct form} \eta\overset{def}{=}\mathcal{C}^*\,\tilde{\Theta}\ =\ \frac {1}{2\,\abs {1+p\, }^2}\, \lambda\, \tilde\eta\, \bar \lambda, \end{equation} where $\lambda$ is a unit quaternion. In any case, the above formulas show that the Cayley transform is a conformal quaternionic contact map. In addition, we can use it to determine the qc scalar curvature of the sphere $(S^{4n+3}, {\tilde\eta})$ and to find a solution of the Yamabe equation on $\boldsymbol {G\,(\mathbb{H})}$.
For $(q',p')\in \Sigma\subset \mathbb{H}^n\times\mathbb{H}$, $p'=|q'|^2+ \omega'$, consider the function \begin{equation} \begin{aligned}\label{e:h and Phi} h& =\frac {1}{16}|1+p'|^2=\frac {1}{16}\left [(1+|q'|^2)^2+ |\omega'|^2 \right ], \\ \Phi & =\left({2h} \right)^{-(Q-2)/4} = 8^{(Q-2)/4}\ {\left [(1+|q'|^2)^2+ |\omega'|^2 \right ]}^{-(Q-2)/4}, \end{aligned} \end{equation} so that we have $$\Theta=\frac {1}{2h}\tilde\Theta =\Phi^{4/(Q-2)}\tilde\Theta.$$ A small calculation shows that the sub-Laplacian of $h$ with respect to $\tilde\Theta$ is given by $\triangle_{\tilde\Theta} h = \frac {Q-6}{4} + \frac {Q+2}{4}|q'|^2$, and thus $\Phi$ is a solution of the qc Yamabe equation on the Heisenberg group, \begin{equation}\label{e:Yamabe for Phi} \triangle_{\tilde\Theta} \Phi = -K\, \Phi^{2^*-1}, \qquad K=(Q-2)(Q-6)/8. \end{equation} Denoting by $\mathcal{L}$ and $\mathcal{L}_{\tilde\Theta}$ the conformal sub-Laplacians of $\Theta$ and $\tilde\Theta$, respectively, we have (see also \eqref{e:conf Yamabe volume}) $$\Phi^{-1}\mathcal{L} (\Phi^{-1}u) = \Phi^{-2^*}\mathcal{L}_{\tilde\Theta} u.$$ Taking $u=\Phi$ we come to $\mathcal{L} ( 1 ) = 4\frac {Q+2}{Q-2}\,\Phi^{1-2^*}\triangle_{\tilde\Theta} \Phi$, since the qc structure ${\tilde\Theta}$ is flat, which shows \begin{equation}\label{e:S of standard qc sphere} S_{{\tilde\eta}}= S_{\Theta}=4\frac {Q+2}{Q-2}K=8n(n+2), \end{equation} using that the two structures are isomorphic via the diffeomorphism $\mathcal{C}$, or rather its extension, since we can consider $\mathcal{C}$ as a quaternionic contact conformal transformation between the whole sphere $ S^{4n+3}$ and the compactification $\hat{\Sigma}=\Sigma\cup \{\infty\}$ of the quaternionic Heisenberg group obtained by adding the point at infinity, cf. \cite[Section 5.2]{IMV1}.
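As a consistency check of the constants (our computation), recall that here the homogeneous dimension is $Q=4n+6$:

```latex
K=\frac{(Q-2)(Q-6)}{8}=\frac{(4n+4)(4n)}{8}=2n(n+1),
\qquad
4\,\frac{Q+2}{Q-2}\,K = 4\cdot\frac{4n+8}{4n+4}\cdot 2n(n+1) = 8n(n+2),
```

in agreement with \eqref{e:S of standard qc sphere}.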
\subsubsection{QC conformal flatness \cite{IV}} A qc manifold $(M,\eta)$ is called locally qc conformally flat if there is a local diffeomorphism $F:\boldsymbol {G\,(\mathbb{H})}\rightarrow M$ such that $F^*\eta=\phi\Psi \tilde\Theta$ for some positive function $\phi$. The qc-conformal flatness of a manifold is characterized by the vanishing of the \emph{qc-conformal curvature} tensor $W^{qc}$ found in \cite{IV}, \begin{multline*} W^{qc}(X,Y,Z,V)=\frac14\Big[R(X,Y,Z,V)+\sum_{s=1}^3R(I_sX,I_sY,Z,V)\Big]-\frac12\sum_{s=1}^3\omega_s(Z,V)\Big[T^0(X,I_sY)-T^0(I_sX,Y)\Big]\\+\frac{S}{32n(n+2)}\Big[(g\owedge g)(X,Y,Z,V)+\sum_{s=1}^3(\omega_s\owedge\omega_s)(X,Y,Z,V)\Big]+(g\owedge U)(X,Y,Z,V)+\sum_{s=1}^3(\omega_s\owedge I_sU)(X,Y,Z,V), \end{multline*} where $(A\owedge B)$ denotes the Kulkarni-Nomizu product of two tensors, i.e., $$(A\owedge B)(X,Y,Z,V)=A(X,Z)B(Y,V)-A(Y,Z)B(X,V)+A(Y,V)B(X,Z)-A(X,V)B(Y,Z).$$ \begin{thrm}[\cite{IV}]\label{T:flat} a) $W^{qc}$ is qc-conformally invariant, i.e., if $\bar\eta=\kappa\Psi\eta$, where $0<\kappa\in \mathcal{C}^\infty(M)$ and $\Psi\in \mathcal{C}^\infty(M:SO(3))$, then $ W^{qc}_{\bar\eta}=\kappa\, W^{qc}_{\eta}$. b) A qc-structure is locally qc-conformal to the standard flat qc-structure on the quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$ if and only if $W^{qc}=0$. \end{thrm} Taking into account the qc Cayley transform, a qc-structure is also locally qc-conformal to the standard qc-structure on the quaternionic sphere $S^{4n+3}$ if and only if the qc conformal curvature vanishes, $W^{qc}=0$. We end this section with the remark that, unlike the CR case, the realization of qc manifolds as hypersurfaces in a hyper-K\"ahler manifold is very restrictive.
For example, it was shown in \cite{IMV'14} that if $M$ is a connected qc-hypersurface in the flat quaternion space $\mathbb{R}^{4n+4}\cong \mathbb{H}^{n+1}$, then, up to a quaternionic affine transformation of $\mathbb{H}^{n+1}$, $M$ is contained in one of the following three hyperquadrics: $$(i) \ \ |q_1|^2+\dots+|q_n|^2 + |p|^2=1,\qquad (ii)\ \ |q_1|^2+\dots+|q_n|^2 - |p|^2=-1 ,\qquad (iii)\ \ |q_1|^2+\dots+|q_n|^2 +\text{Re}(p)=0.$$ Here $(q_1,q_2,\dots q_n,p)$ denote the standard quaternionic coordinates of $\mathbb{H}^{n+1}.$ In particular, if $M$ is a compact qc-hypersurface of $\mathbb{R}^{4n+4}\cong \mathbb{H}^{n+1}$, then, up to a quaternionic affine transformation of $\mathbb{H}^{n+1}$, $M$ is the standard 3-Sasakian sphere. For other results and more details we refer to \cite{IMV'14}. \section{The CR Yamabe problem and the CR Obata theorem} The CR Yamabe problem seeks pseudoconformal pseudohermitian transformations of a compact CR pseudohermitian manifold which lead to constant scalar curvature of the canonical Tanaka-Webster connection, see Section \ref{ss:Yamabe Iwasawa mnfld}. After the works of D. Jerison \& J. Lee \cite{JL1,JL2,JL3,JL4} and N. Gamara \& R. Yacoub \cite{Ga}, \cite{GaY}, the solution of the CR Yamabe problem on a compact manifold is complete. Let $(M^{2n+1}, \, \eta)$ be a strictly pseudoconvex CR manifold and $\Upsilon(M,[\eta])$ be the CR-Yamabe constant (cf. \eqref{e:Yamabe constant Iwasawa}). The CR-Yamabe constant $\Upsilon (M,[\eta])$ depends only on the CR structure of $M$, not on the choice of $\eta$. The solution of the CR-Yamabe problem is outlined in the next fundamental results. \begin{thrm} [\cite{JL2,JL4,Ga,GaY}] Let $(M^{2n+1}, \, \eta)$ be a strictly pseudoconvex CR manifold.
The CR-Yamabe constant satisfies the inequality $ \Upsilon(M,[\eta]) \leq \Upsilon(S^{2n+1},[\bar{\eta}])$, where $S^{2n+1}\subset \mathbb{C}^{n+1}$ is the sphere with its standard CR structure $\bar{\eta}$. \begin{enumerate} \item[a)] If $\Upsilon(M,[\eta]) < \Upsilon(S^{2n+1},[\bar\eta])$, then the Yamabe equation has a solution, \cite{JL2}. \item[b)] If $n\geq 2$ then the Yamabe constant satisfies \[ \Upsilon(M,[\eta_\epsilon])= \begin{cases} \Upsilon(S^{2n+1},[\bar\eta])\left ( 1-c_n\abs {S(q)}^2\epsilon^4\right ) +\mathcal{O}(\epsilon^5), \quad{n\geq 3;} \\ \Upsilon(S^{5},[\bar\eta])\left ( 1+c_2\abs {S(q)}^2\epsilon^4\ln \epsilon\right ) +\mathcal{O}(\epsilon^4), \quad{n=2.} \end{cases} \] Thus, if $M$ is not locally CR equivalent to $(S^{2n+1},\bar\eta)$, then $\Upsilon(M,[\eta]) < \Upsilon(S^{2n+1},[\bar\eta])$, \cite{JL4}. \item[c)] If $n=1$ or $M$ is locally CR equivalent to $S^{2n+1}$, then the Yamabe equation has a solution, \cite{Ga,GaY}. \end{enumerate} \end{thrm} \subsection{Solution of the CR Yamabe problem on the sphere and Heisenberg group}\label{ss:Jerison and Lee} The CR version of the Obata theorem was proved by D. Jerison and J. Lee \cite{JL3}. \begin{thrm}[\cite{JL3}] If $\eta$ is the contact form of a pseudohermitian structure proportional to the standard contact form $\bar\eta$ on the unit sphere in $\mathbb{C}^{n+1}$ and the pseudohermitian scalar curvature $S_\eta=$const, then up to a multiplicative constant $\eta=\Phi^* \,\bar\eta$ with $\Phi$ a CR automorphism of the sphere. \end{thrm} A key step of the proof consists of showing that a CR structure with constant pseudohermitian scalar curvature is pseudoconformal to the standard pseudo-Einstein torsion-free structure on the CR sphere if and only if it is pseudo-Einstein with vanishing Webster torsion. It is well known that a strictly pseudoconvex torsion-free CR manifold is Sasakian.
In addition, if the CR space is pseudo-Einstein, then it is not hard to observe that it is a Sasaki-Einstein space with respect to the associated Riemannian metric $h$. Indeed, the Ricci tensors $Ric^g$ and $Ric$ of the Levi-Civita and the Tanaka-Webster connection, respectively, of a torsion-free CR space are connected by \cite{DT} $Ric^g(X,Y)=Ric(X,Y)-2g(X,Y)$, $Ric^g(\xi,\xi)=2n$. Because of the second identity, a Sasaki-Einstein space has Riemannian scalar curvature $S^g={2n(2n+1)}$. Hence, a torsion-free pseudo-Einstein CR manifold is Sasaki-Einstein if the pseudohermitian scalar curvature is equal to $S=4n(n+1)$, and the Jerison-Lee theorem can be stated as follows. \begin{thrm}[\protect\cite{JL3}]\label{jerl} If a compact Sasaki-Einstein manifold $(M,\bar\eta)$ is pseudoconformal to a CR manifold $(M,\eta=2h\bar\eta)$ with constant positive pseudohermitian scalar curvature $S=4n(n+1)$, then $(M,\eta)$ is again a Sasaki-Einstein space. \end{thrm} The proof follows trivially from the divergence formula discovered in \cite{JL3}, which we state in real coordinates in Theorem~\ref{t:JL thrm}. First, we need some definitions. Let $h>0$ be a smooth function on a pseudohermitian manifold $(M, \eta, g)$ and let $\bar\eta=\frac{1}{2h}\eta$ be a contact form pseudoconformal to $\eta$. We will denote the connection, curvature and torsion tensors of $\bar\eta$ by over-lining the same objects corresponding to $\eta$. The new Reeb vector field $\bar\xi$ is $\bar\xi\ =\ 2h\,\xi\ +\ 2h\, J\nabla h$, where $\nabla h$ is the horizontal gradient, $g(\nabla h,X)=dh(X)$.
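The numerical value $S^g=2n(2n+1)$ quoted above can be traced by a routine trace computation (our verification) over an orthonormal horizontal basis $e_1,\dots,e_{2n}$ together with the unit field $\xi$:

```latex
S^g=\sum_{a=1}^{2n}Ric^g(e_a,e_a)+Ric^g(\xi,\xi)
   =\big(S-4n\big)+2n
   =S-2n,
```

so $S=4n(n+1)$ gives $S^g=4n^2+4n-2n=2n(2n+1)$, the Riemannian scalar curvature of a Sasaki-Einstein metric in dimension $2n+1$.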
The Webster torsion and the pseudohermitian Ricci tensors of $\eta$ and $\bar\eta$ are related by \cite{L2}, \begin{gather}\label{e:A conf change} 4h\bar A(X,JY)=4hA(X,JY)+\nabla^2h(X,Y)-\nabla^2 h(JX,JY), \end{gather} \begin{multline}\label{e:Ric conf change h} \overline {\rho}(X,JY)=\rho(X,JY)-(4h)^{-1}(n+2)[\nabla^2h(X,Y)+\nabla^2h(Y,X)+\nabla^2h(JX,JY)+\nabla^2h(JY,JX)]\\+(2h^2)^{-1}(n+2)[ dh(X)dh(Y)+dh(JX)dh(JY) ] -(2h)^{-1}\left(\triangle h- h^{-1}(n+2)|\nabla h|^2\right)g(X,Y), \end{multline} {where $\triangle h=\nabla^*dh=\sum_{a=1}^{2n}\nabla^2 h(e_a,e_a)$ is the sublaplacian. The pseudohermitian scalar curvature changes according to \cite{L2}, \begin{equation}\label{e:conf change scalar curv h} \overline {S} = 2hS - 2(n+1)(n+2)h^{-1}|\nabla h|^2 +4(n+1)\triangle h. \end{equation} Let $B$ be the traceless part of $\rho$, $B(X,JY)\overset{def}{=}\rho(X,JY)+\frac {S}{2n}g(X,Y)$, since by \eqref{e:CR Ric type decomp} we have $\sum_{a=1}^{2n}\rho(e_a,Je_a)=-\sum_{a=1}^{2n}Ric(e_a,e_a)=-S$. The above formulas imply \begin{multline}\label{e:Ric_0 conf change h} \bar B(X,JY)=B(X,JY)-\frac{n+2}{4h}[\nabla^2h(X,Y)+\nabla^2h(Y,X)+\nabla^2h(JX,JY)+\nabla^2h(JY,JX)]\\+\frac{n+2}{2h^2}[ dh(X)dh(Y)+dh(JX)dh(JY) ] +\frac {n+2}{2n}\left(\frac{\triangle h}{h}-\frac 1{h^2}|\nabla h|^2\right)g(X,Y). \end{multline} Suppose $\bar\eta$ is a Sasaki-Einstein structure, i.e., $\bar A=\bar B=0$, and both pseudo-Hermitian structures are of constant pseudohermitian scalar curvature $\bar S=S=4n(n+1)$. With these assumptions \eqref{e:conf change scalar curv h} becomes \begin{equation}\label{sublap} \triangle h=n-2nh +\frac{(n+2)}{2h}|\nabla h|^2. \end{equation} At this point we recall the Ricci identities for the Tanaka-Webster connection \cite{L2} (see e.g.
\cite{IVZ,IV2} for these and other expressions in real coordinates), \begin{equation}\label{Riden} \begin{split} \nabla^2h(X,Y)-\nabla^2h(Y,X)=-2\omega(X,Y)dh(\xi),\qquad \nabla^2h(X,\xi)-\nabla^2h(\xi,X)=A(X,\nabla h),\\ \nabla^3 h(X,Y,Z)-\nabla^3h(Y,X,Z)=-R(X,Y,Z,\nabla h)-2\omega(X,Y)\nabla^2 h(\xi,Z). \end{split} \end{equation} The contracted Bianchi identities for the Tanaka-Webster connection \cite{L2} are \begin{equation}\label{bia} \begin{split} dS(X)=2\sum_{a=1}^{2n}(\nabla_{e_a}Ric)(e_a,X)=-2\sum_{a=1}^{2n}(\nabla_{e_a}\rho)(e_a,JX)+4 (n-1)\sum_{a=1}^{2n}(\nabla_{e_a}A)(e_a,JX);\\ Ric(\xi,X)=\sum_{a=1}^{2n}(\nabla_{e_a}A)(e_a,X);\quad dS(\xi)=2\sum_{a,b=1}^{2n}(\nabla^2_{e_ae_b}A)(e_a,e_b). \end{split} \end{equation} When $\bar A=0$, \eqref{e:A conf change} takes the form \begin{equation}\label{tor} 4hA(X,JY)=-\Big[\nabla^2h(X,Y)-\nabla^2h(JX,JY)\Big]. \end{equation} Differentiating \eqref{tor} using the equation $\nabla J=0$, taking the trace in the obtained equality and applying the Ricci identities \eqref{Riden}, \eqref{e:CR Ric type decomp} and the CR Yamabe equation \eqref{e:conf change scalar curv h}, we find the next formula for the divergence of $A$, \begin{multline}\label{divA1} \nabla^*A(JX)=-2\rho(JX,\nabla h)-\frac{n+2}h[\nabla^2h(\nabla h,X)-2dh(JX)dh(\xi)]\\+2ndh(X)+\frac{n+2}{2h^2}|\nabla h|^2dh(X)-(2n+4)\nabla^2h(JX,\xi), \end{multline} where the divergence of a 1-form $\alpha$ is $\nabla^*\alpha=\sum_{a=1}^{2n}(\nabla_{e_a}\alpha)e_a$. A substitution of \eqref{sublap} into \eqref{e:Ric_0 conf change h} and a use of the Ricci identities together with $\bar B=0$ give \begin{multline}\label{be} B(X,JY)=\frac{n+2}{2h}\Big[\nabla^2h(Y,X)+\nabla^2h(JY,JX)-2\omega(X,Y)dh(\xi)\Big]\\-\frac{n+2}{2h^2}[ dh(X)dh(Y)+dh(JX)dh(JY) ] -\frac {n+2}{2}\left(\frac1{h}-2+\frac 1{2h^2}|\nabla h|^2\right)g(X,Y).
\end{multline} From $\rho(X,JY)=B(X,JY)-2(n+1)g(X,Y)$ and \eqref{be} it follows \begin{multline}\label{rho1} \rho(X,J\nabla h)=\frac{n+2}{2h}\Big[\nabla^2h(\nabla h,X)+\nabla^2h(J\nabla h,JX)-2dh(\xi)dh(JX)-\frac3{2h}|\nabla h|^2dh(X)-dh(X)\Big]\\-ndh(X). \end{multline} Substituting equation \eqref{rho1} into equation \eqref{divA1} shows \begin{equation}\label{divA2} \nabla^*A(JX)=\frac{n+2}4\Big[\frac{\nabla^2h(J\nabla h,JX)}{h^2}-\frac{|\nabla h|^2}{h^3}dh(X)-\frac1{h^2}dh(X)-\frac2{h}\nabla^2h(JX,\xi)\Big]. \end{equation} With the help of \eqref{tor}, \eqref{be} and \eqref{divA2} we define the following 1-forms \begin{equation} \begin{split} d(X)& =-4h^{-1}A(X,J\nabla h)=\frac{\nabla ^{2}h(\nabla h,X)-\nabla ^{2}h(J\nabla h,JX)}{h^{2}}; \\ e(X)& =\frac{2}{n+2}h^{-1}B(X,J\nabla h)=\frac{\nabla ^{2}h(\nabla h,X)+\nabla ^{2}h(J\nabla h,JX)}{h^{2}}-\frac{2dh(\xi )dh(JX)}{h^{2}} \\ & -\Big(\frac{1}{h^{2}}-\frac{2}{h}+\frac{3|\nabla h|^{2}}{2h^{3}}\Big)dh(X); \\ u(X)& =\frac{4}{n+2}\nabla ^{\ast }A(JX)=\frac{\nabla ^{2}h(J\nabla h,JX)}{h^{2}}-\frac{2}{h}\nabla ^{2}h(JX,\xi )-\Big(\frac{|\nabla h|^{2}}{h^{3}}+\frac{1}{h^{2}}\Big)dh(X). \end{split} \label{oneforms} \end{equation} We obtain easily from \eqref{oneforms} the following identity \begin{equation} u(X)=\frac{e(X)-d(X)}{2}-\frac{2\nabla ^{2}h(JX,\xi )}{h}-\frac{1}{h^{2}}\Big(\frac{1}{2}+h+\frac{|\nabla h|^{2}}{4h}\Big)dh(X)+\frac{dh(\xi )dh(JX)}{h^{2}}. \label{iden1} \end{equation} Define the following tensors \begin{equation} \begin{split} {D}(X,Y)& =-4A(X,Y),\quad D^{h}(X,Y,Z)=h^{-1}\left [ {D}(.,Z)dh(.)\right ]_{[1]}, \\ {E}(X,Y)& =\frac{2}{n+2}B(X,Y),\quad E^{h}(X,Y,Z)=h^{-1}\left [{E}(X,.)dh(.)\right ]_{[-1]}. \end{split} \label{3tens} \end{equation} {In other words, we have \begin{align*} D^{h}(X,Y,Z) & =\frac{1}{2h}\left[ dh(X)D(Y,Z)+dh(JX){D}(JY,Z)\right],\\ E^{h}(X,Y,Z)& =\frac{1}{2h}\left[ dh(Z){E}(X,Y)-dh(JZ){E}(X,JY)\right].
\end{align*} } At this point we can state one of the main results of \cite{JL3}. \begin{thrm}[\protect\cite{JL3}] \label{t:JL thrm} Let $(M,\bar{\eta})$ be a Sasaki-Einstein manifold pseudoconformally equivalent to a CR manifold $(M,\eta)$, $\bar{\eta}=\frac{1}{2h}\eta$, of constant pseudohermitian scalar curvature so that $\bar{S}=S=4n(n+1)$. For \begin{equation} f=\frac{1}{2}+h+\frac{|\nabla h|^{2}}{4h}, \label{function} \end{equation} we have \begin{multline} \nabla ^{\ast }\Big(f[{d}+{e}]-dh(\xi )J{d}+dh(\xi )J{e}-6dh(\xi )J{u}\Big) \label{divm} \\ =\frac{1}{2}\left( \frac{1}{2}+h\right) \left( |{D}|^{2}+|{E}|^{2}\right) +\frac{h}{4}|D^h+E^h|^{2}+\frac{h}{2}Q(d,e,u), \end{multline} where $Q(d,e,u)$ is a non-negative quadratic form of the 1-forms $(d,e,u)$. \end{thrm} \begin{proof} {We recall that for a horizontal 1-form $\alpha$ the 1-form $J\alpha$ is defined by $J\alpha(X)=-\alpha(JX)$. The divergences of the involved 1-forms are calculated using \eqref{oneforms} and the Bianchi identities \eqref{bia}. Since $S=4n(n+1)$, the Bianchi identities \eqref{bia} take the form \begin{equation} \label{biac} \nabla^*B(JX)=2(n-1)\nabla^*A(JX)=\frac{(n+2)(n-1)}2u(X), \quad \nabla^*Ju=\frac{4}{n+2}\sum_{a,b=1}^{2n}(\nabla^2_{e_ae_b}A)(e_a,e_b)=0. \end{equation} A direct computation gives \begin{equation} \label{divD} \nabla^*d=\sum_{a=1}^{2n}(\nabla_{e_a}d)(e_a)=-h^{-1}d(\nabla h)-(n+2)h^{-1}u(J\nabla h)+\frac12|D|^2. \end{equation} Using the properties of $A$, we calculate \begin{equation} \label{divJD} \nabla^* (Jd)=h^{-1}d(J\nabla h)+(n+2)h^{-1}u(J\nabla h), \end{equation} taking into account that $\sum_{a,b=1}^{2n}A(e_a,Je_b)\nabla^2h(Je_a,e_b)=0$ due to \eqref{tor}. Similarly, we calculate \begin{equation} \label{divE} \nabla^* e =(n-1)h^{-1}u(\nabla h) +\frac12|{E}|^2 \end{equation} after using the equality $\frac2{n+2}h^{-1}\sum_{a,b=1}^{2n}B(e_a,Je_b)\nabla^2h(e_a,e_b)=h^{-1}e(\nabla h)+\frac12|{E}|^2$, which follows from \eqref{be} and \eqref{3tens}.
Finally, we have \begin{equation} \label{divJE} \nabla^*(Je)=(n-1)h^{-1}u(J\nabla h) \end{equation} since $B(JX,JX)=0$ and $\sum_{a,b=1}^{2n}B(e_a,Je_b)\nabla^2h(Je_a,e_b)=0$ due to \eqref{be}. We obtain from \eqref{function}, after applying the Ricci identities and \eqref{oneforms}, the identity \begin{equation} \label{funcder} df(X) =\frac{h}2\Big(u(X)+d(X)\Big)+\nabla^2h(JX,\xi)+h^{-1}fdh(X)-h^{-1}dh(\xi)dh(JX). \end{equation} At this point we are ready to calculate the divergence formula using \eqref{divD}, \eqref{divJD}, \eqref{divE}, \eqref{divJE}, \eqref{funcder} and \eqref{iden1}, which give \begin{multline} \label{divmf} \nabla ^{\ast }\Big(f[d+e]-dh(\xi )Jd+dh(\xi )Je-6dh(\xi )Ju\Big) \\ =\frac{1}{2}\left ( \frac{1}{2}+h+\frac{|\nabla h|^{2}}{4h} \right)\Big[|{D}|^{2}+|{E}|^{2}\Big]+\frac{h}{2}\Big[|d|^{2}+|e|^{2}+6|u|^{2}+4g(d,u)-4g(u,e)\Big] \\ =\frac{1}{2}\left( \frac{1}{2}+h\right) \left( |{D}|^{2}+|{E}|^{2}\right) +\frac{h}{4}|D^h+E^h|^{2}+\frac{h}{2}\left( |d|^{2}+|e|^{2}+6|u|^{2}+4g(d,u)-4g(u,e)-g(d,e)\right) \\ =\frac{1}{2}\left( \frac{1}{2}+h\right) \left( |{D}|^{2}+|{E}|^{2}\right) +\frac{h}{4}|D^{h}+E^{h}|^{2}+\frac{h}{2}Q(d,e,u) \end{multline} with $Q=\begin{bmatrix} 1 & -1/2 & 2 \\ -1/2 & 1 & -2 \\ 2 & -2 & 6\end{bmatrix}$, where in the last equality we used the identity \begin{equation}\label{e:key 3-tensors} \frac{|\nabla h|^{2}}{4h}\left( |{D}|^{2}+|{E}|^{2}\right) =\frac{h}{2}|D^h+E^h|^{2}-{h}g(d,e). \end{equation} Notice that $Q$ has eigenvalues $\frac {15\pm\sqrt {209}}{4}$ and $\frac 12$, hence it is a positive definite matrix.
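The stated spectrum of $Q$ is elementary linear algebra; the following numerical check (our own sketch, using numpy, not part of the original argument) confirms the eigenvalues $(15\pm\sqrt{209})/4$ and $1/2$ and, in particular, the positive definiteness of $Q$:

```python
import numpy as np

# The matrix of the quadratic form Q(d, e, u) as stated in the text.
Q = np.array([[1.0, -0.5,  2.0],
              [-0.5, 1.0, -2.0],
              [2.0, -2.0,  6.0]])

eig = np.sort(np.linalg.eigvalsh(Q))
expected = np.sort([(15 - np.sqrt(209)) / 4, 0.5, (15 + np.sqrt(209)) / 4])

assert np.allclose(eig, expected)   # eigenvalues (15 ± sqrt(209))/4 and 1/2
assert np.all(eig > 0)              # hence Q is positive definite
```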
} {Finally, the validity of \eqref{e:key 3-tensors} can be seen as follows, \begin{multline} |D^h+E^h|^2=\frac {1}{4h^2}\left \vert dh(e_a){D}(e_b,e_c)+dh(Je_a){D}(Je_b,e_c) +dh(e_c){E}(e_a,e_b)-dh(Je_c){E}(e_a,Je_b) \right\vert^2\\ =\frac {1}{2h^2}|\nabla h|^2\left (| {D}|^2+ |{E}|^2\right) +\frac {2}{h^2}\sum_{a=1}^{2n}{D}(\nabla h,e_a){E}(\nabla h,e_a) =\frac {1}{2h^2}|\nabla h|^2\left (| {D}|^2+ | {E}|^2\right) + {2}g(d,e). \end{multline} } \end{proof} \subsection{The uniqueness theorem in a Sasaki-Einstein class}\label{ss:Sasaki-Einstein uniqieness Yamabe} Motivated by Theorem \ref{t:Obata Yamabe}, it is natural to investigate the uniqueness of the pseudohermitian structures of constant scalar curvature in the Sasaki-Einstein case, especially in view of Theorem \ref{jerl}. The fact that the divergence formula of \cite{JL3} can be stated as in Theorem \ref{jerl} was observed earlier in \cite{IMV}, which influenced the results of \cite{IMV,IMV1}, where there is a clear separation of the two steps of Jerison and Lee's argument: the first involves the conformal equivalence of an ``Einstein structure'' to a structure of ``constant scalar curvature'', and the second involves the characterization of the conformal equivalence of two (conformally flat) Einstein structures. A corresponding qc version of the Obata uniqueness theorem was formulated by the second author. Clearly, in the CR case Theorem \ref{jerl} addresses the first step, while a part of the second step is contained in \cite{JL3}, where the (suitable) conformal factor is characterized as a pluriharmonic function. For the completion of the second step one can reduce to the result mentioned in Remark \ref{r:sphere charcat} with an argument employed in \cite[Theorem 1.3]{IVO} (see also the end of Section \ref{ss:CR cpct Obata proof}), rather than relying on the calculation on the Heisenberg group in the pseudoconformal class of the Sasaki-Einstein sphere, as Jerison \& Lee did.
For details of this last reduction see \cite{Wang1}. {Alternatively, a conceptual proof is possible, using again Jerison \& Lee's divergence formula in the first step and a generalization of \eqref{e:lap of div conf v.f.} as a second step, based on the proof found in the quaternionic contact case \cite{IMV15a}. Next, we sketch briefly the obvious adaptation of the argument from the quaternionic contact case \cite{IMV15a}. First, we use the well known fact that a vector field $Q$ on a CR pseudo-Hermitian manifold is an infinitesimal CR transformation iff there is a (smooth real-valued) function $\sigma$ such that $Q=-\frac 12J\nabla\sigma-\sigma\xi$ and $\mathcal{L}_QJ=0$, see \cite{ChLee}. In fact, decomposing $Q$ into its horizontal and vertical parts $Q=Q_H-\sigma\xi$, it follows that $Q_H$ (``contact Hamiltonian field'') is determined by $\eta(Q_H)=0$ and $i_{Q_H}\, d\eta\equiv 0 \ \ (\mathrm{mod}\ \eta)$, while the preservation of the complex structure gives the second order system $[\nabla^2\sigma]_{[-1]}(X,Y)=\sigma A(JX,Y)$. Next, as a consequence of the CR Yamabe equation one obtains a formula as in Lemma \ref{e:lap of div conf v.f.} for an infinitesimal CR automorphism $Q$ on $(M,\eta)$, namely \begin{equation}\label{e:lapdivCR} \Delta(\nabla^*Q_H)\ =\ -\ \frac{n-2}{2(n+2)}Q(\text{Scal})\ -\ \frac{\text{Scal}}{2(n+2)}\nabla^*Q_H, \end{equation} where $Q_H$ is the horizontal part of $Q$. In our case, where $A=0$, for $\sigma=dh(\xi)$ it follows from Ricci's identity \eqref{e:CR XYxi Ricci} and \eqref{tor} that the vector field $Q$ defined by $Q=-\frac 12 J \nabla dh(\xi)-dh(\xi)\xi$ is an infinitesimal CR vector field unless it vanishes. Now, for $f$ defined in \eqref{function}, from \eqref{iden1} and \eqref{funcder} it follows that $Q=-\frac 12 \nabla f-dh(\xi)\xi$.
This implies that $\phi=\triangle f$ either vanishes identically or is an eigenfunction of the sublaplacian realizing the smallest possible eigenvalue on a (pseudo-Einstein) Sasakian manifold. Finally, if $h\not=const$ then the CR Lichnerowicz-Obata theorem \cite{CC09a,CC09b}, see Section \ref{ss:CR Obata}, shows that $(M,\eta)$ is homothetic to the CR unit sphere, which completes the proof. We note that the above arguments have as a corollary that in Jerison \& Lee's identity \cite[(3.1)]{JL3}, letting $\phi=\triangle_b \mathrm{Re}(f)$ we have $\triangle_b \phi =-2n\phi$. } Thus, a pseudoconformal class of a Sasaki-Einstein pseudohermitian form different from the standard Sasaki-Einstein form on the round sphere contains a unique (up to homothety) pseudohermitian form of constant CR scalar curvature, namely, the Sasaki-Einstein form itself. \section{The qc-Yamabe problem and the Obata type uniqueness theorem} In this section we consider the quaternionic contact version of the Yamabe problem described in Section~\ref{ss:Yamabe Iwasawa mnfld}. We begin by quoting the following result of Wang \cite{Wei}, which follows from the known techniques in the Riemannian and CR settings. \begin{thrm}[\cite{Wei}] Let $(M,\eta)$ be a compact quaternionic contact manifold of real dimension $4n+3$. \begin{enumerate}[a)] \item The qc Yamabe constant satisfies the inequality $\Upsilon (M,[\eta])\leq \Upsilon (S^{4n+3}, [\tilde\eta])$. \item If $\Upsilon(M,[\eta])<\Upsilon(S^{4n+3}, [\tilde\eta])$, then the Yamabe problem has a solution. \end{enumerate} \end{thrm} In view of the Riemannian and CR cases, it is expected that on a compact qc manifold $\Upsilon(M,[\eta])< \Upsilon(S^{4n+3}, [\tilde\eta])$ unless the qc manifold $(M,[\eta])$ is locally qc conformal to $(S^{4n+3},[\tilde\eta])$. Some steps towards the proof of this result include the qc-normal coordinates constructed in \cite{Kun} and the qc conformal tensor $W^{qc}$ \cite{IV}.
Another known general result is the (local) classification of all (local) qc-conformal transformations of the flat structure on the quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$ which are also qc-Einstein. This classification is used as a replacement of the theory of the pluriharmonic functions that appear in the CR case. Some attempts at extending the latter to the quaternionic (contact) case can be found in \cite{IMV}. However, so far, these extensions have not proven useful in the solution of the Yamabe problem. The following theorem makes precise the result of \cite[Theorem 1.1]{IMV}, where only the vanishing qc-scalar curvature case was considered. \begin{thrm}\label{t:einstein preserving} Let $\Theta=\frac{1}{2h}\tilde\Theta$ be a conformal deformation of the standard qc-structure $\tilde\Theta$ on the quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$. If $\Theta$ is also qc-Einstein, then up to a left translation the function $h$ is given by \begin{equation}\label{e:Liouville conf factor} h(q,\omega) \ =\ c_0\ \Big [ \big ( \sigma\ +\ |q+q_0|^2 \big )^2\ +\ |\omega\ +\ \omega_0\ + \ 2\ \text {Im}\ q_0\, \bar q|^2 \Big ], \end{equation} for some fixed $(q_0,\omega_0)\in \boldsymbol {G\,(\mathbb{H})}$ and constants $c_0>0$ and $\sigma\in \mathbb{R}$. Furthermore, \begin{equation}\label{e:scal for qc-einstein conf to flat} S_{\Theta}=128n(n+2)c_0\sigma. \end{equation} \end{thrm} The proof follows from a careful reading of the proof of \cite[Theorem 1.1]{IMV} and making the necessary changes. As in \cite[Theorem 1.1]{IMV}, $h$ satisfies a system of partial differential equations whose solutions form a family of polynomials of degree four. The final known general result concerns the seven dimensional case, while the higher dimensions are settled in the preprint \cite{IMV15a}.
\begin{thrm}[\cite{IMV1}]\label{t:div formula} If a quaternionic contact structure $(M^7,\eta)$ is conformal to a qc-Einstein structure $(M^7,\bar\eta)$, $\bar\eta\ =\ \frac{1}{2h}\, \eta$, so that $S=\bar S=16n(n+2)$, then $(M^7,\eta)$ is also qc-Einstein. \end{thrm} The above results lead to a complete solution of the qc Yamabe problem on the standard qc \emph{seven dimensional} sphere and quaternionic Heisenberg group. In particular, as conjectured in \cite{GV}, all solutions of the qc Yamabe equation are given by those that realize the Yamabe constant of the sphere or the best constant in the Folland-Stein inequality. \begin{thrm}[\cite{IMV1}]\label{t:Yamabe} \begin{enumerate}[a)] \item Let $\eta=2h\,\tilde\eta$ be a conformal deformation of the standard qc-structure $\tilde\eta$ on the quaternionic unit sphere $S^{7}$. If $\eta$ has constant qc-scalar curvature, then up to a multiplicative constant $\eta$ is obtained from $\tilde\eta$ by a conformal quaternionic contact automorphism. In particular, $\Upsilon(S^7)= 48\, (4\pi)^{1/5}$ and this minimum value is achieved only by $\tilde\eta$ and its images under conformal quaternionic contact automorphisms. \item On the seven dimensional quaternionic Heisenberg group the only solutions of the qc-Yamabe equation, up to translations \eqref{translation} and dilations \eqref{scaling}, are those given in \eqref{e:FS extremal fns}. \end{enumerate} \end{thrm} The proof of Theorem~\ref{t:Yamabe} relies on Theorem~\ref{t:einstein preserving} and Theorem~\ref{t:div formula} and will be sketched near the end of the section. \subsection{The Yamabe problem on a 7-D qc-Einstein manifold. Proof of Theorem \ref{t:div formula}} In this section we give some details on the proof of Theorem \ref{t:div formula}. The analysis involves a number of vector fields/1-forms intrinsic to the structure, which are defined in any dimension, so here $n\geq 1$.
We shall consistently keep the notation introduced in \cite{IMV1}, which can be consulted for details. \subsubsection{Intrinsic vector fields and their divergences}\label{ss:v.f. and div any n} We begin by defining the horizontal 1-forms $A_s$, also letting $ A=A_1+A_2+A_3$, \begin{equation}\label{d:A_s} A_i(X)=\omega_i([\xi_j,\xi_k],X), \end{equation} where $(ijk)$ is a cyclic permutation of $(1,2,3)$. The contracted Bianchi identity on a $(4n+3)$-dimensional qc manifold with constant qc-scalar curvature reads \cite[Theorem 4.8]{IMV}, \begin{equation} \label{div:To} \nabla^*T^0=(n+2){A}, \qquad \nabla^*U=\frac{1-n}{2}{A}. \end{equation} Let $h$ be a positive smooth function on a qc manifold $(M, g, \eta)$ and $\bar\eta\ =\ \frac{1}{2h}\, \eta$ be a conformal deformation of the qc structure $\eta$. As usual, the objects related to $\bar \eta$ will be denoted by an over-line. Thus, $$d\bar\eta=-\frac1{2h^2}dh\wedge\eta+\frac1{2h}d\eta,\quad \bar g=\frac1{2h}g.$$ The new Reeb vector fields $\{\bar\xi_1,\bar\xi_2,\bar\xi_3\}$ are $\bar\xi_s=2h\xi_s+I_s\nabla h$, $s=1,2,3$. The torsion components $T^0$, $U$ of the Biquard connection and the qc scalar curvature change as follows \cite{IMV1} \begin{equation}\label{Torh} \bar T^0(X,Y)=T^0(X,Y)+\frac1{4h}\Big(3\nabla^2h(X,Y)-\sum_{s=1}^3\nabla^2h(I_sX,I_sY)\Big) -\frac1{2h}\sum_{s=1}^3dh(\xi_s)\omega_s(X,Y), \end{equation} \begin{multline}\label{defU} \bar U(X,Y)=U(X,Y)+\frac1{8h}\Big(\nabla^2h(X,Y)+\sum_{s=1}^3\nabla^2h(I_sX,I_sY)\Big)\\ -\frac1{4h^2}\Big(dh(X)dh(Y)+\sum_{s=1}^3dh(I_sX)dh(I_sY)\Big)-\frac1{8h}\Big(\triangle h-\frac2{h}|\nabla h|^2\Big)g(X,Y), \end{multline} \begin{equation}\label{defs} \bar S=2hS+8(n+2)\triangle h-8(n+2)^2h^{-1}{|\nabla h|}^2. \end{equation} {Suppose $\bar\eta$ is a positive 3-Sasakian structure, i.e., $\bar T^0=\bar U=0$, $\bar S=16n(n+2)$. Then \eqref{defs} takes the form \begin{equation}\label{defsy} 2n=4nh+\triangle h-(n+2)h^{-1}{|\nabla h|}^2, \end{equation} which is the qc Yamabe equation.
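The reduction from \eqref{defs} to the qc Yamabe equation \eqref{defsy} is a one-line computation; a symbolic verification (our own sketch, not part of the original argument, abbreviating $G=|\nabla h|^2$ and writing $D$ for $\triangle h$) reads:

```python
import sympy as sp

# Symbols: G stands for |∇h|^2 and D for the sublaplacian ∆h.
n, h, G, D = sp.symbols('n h G D', positive=True)

# qc scalar curvature normalization S = 16n(n+2).
S = 16*n*(n + 2)

# Conformal change of the qc scalar curvature:
# \bar S = 2hS + 8(n+2)∆h - 8(n+2)^2 h^{-1} |∇h|^2.
Sbar = 2*h*S + 8*(n + 2)*D - 8*(n + 2)**2*G/h

# Imposing \bar S = S and solving for ∆h reproduces the qc Yamabe equation
# ∆h = 2n - 4nh + (n+2) h^{-1} |∇h|^2.
sol = sp.solve(sp.Eq(Sbar, S), D)[0]
assert sp.simplify(sol - (2*n - 4*n*h + (n + 2)*G/h)) == 0
```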
We also have the formulas \begin{multline} \label{e:A_s} A_1(X)\ =\ -\frac12 h^{-2}dh(X)\ -\ \frac 12h^{-3}\lvert \nabla h \rvert^2dh(X) -\ \frac 12 h^{-1}\Bigl ( {\nabla dh} (I_2X, \xi_2)\ +\ {\nabla dh} (I_3X, \xi_3) \Bigr )\\ +\ \frac 12 h^{-2}\Bigl (dh(\xi_2)\,dh (I_2X)\ +\ dh(\xi_3)\,dh (I_3X) \Bigr ) +\ \frac 14 h^{-2}\Bigl ( {\nabla dh} (I_2X, I_2 \nabla h)\ +\ {\nabla dh} (I_3X, I_3 \nabla h) \Bigr ). \end{multline} The expressions for $A_2$ and $A_3$ can be obtained from the above formula by a cyclic permutation of $(1,2,3)$. Thus, we obtain \begin{multline} \label{e:A} A(X)\ =\ -\frac32 h^{-2}dh(X)\ -\ \frac 32h^{-3}\lvert \nabla h \rvert^2dh(X) -\ h^{-1}\sum_{s=1}^3 {\nabla dh} (I_sX, \xi_s)\ \\+\ h^{-2}\sum_{s=1}^3dh(\xi_s)\,dh (I_sX)\ +\ \frac 12 h^{-2}\sum_{s=1}^3 {\nabla dh} (I_sX, I_s \nabla h). \end{multline} We shall need the divergences of the various 1-forms defined above, in addition to a few more. We recall that an orthonormal frame $\{e_1,e_2=I_1e_1,e_3=I_2e_1,e_4=I_3e_1,\dots, e_{4n}=I_3e_{4n-3}, \xi_1, \xi_2, \xi_3 \}$ is called a qc-normal frame at a point of a qc manifold if the connection 1-forms of the Biquard connection vanish at that point. As shown in \cite{IMV}, see also \cite[Lemma 6.2.1]{IV2}, a qc-normal frame exists at each point of a qc manifold. If $\sigma$ is a horizontal 1-form, then with respect to a qc-normal frame the divergence of $I_s\sigma$ (where $I_s\sigma(X) = -\sigma(I_sX)$) is given by \begin{equation*} \nabla^* (I_s\sigma)\ =\ - \sum_{a=1}^{4n}(\nabla_{e_a} \sigma)(I_se_a).
\end{equation*} With some calculations using \eqref{e:A}, \eqref{e:A_s} and the properties of the torsion and curvature of the Biquard connection, we obtain \begin{equation}\label{diverAA} \begin{aligned} \nabla^*\, \Bigl (\sum_{s=1}^3 dh(\xi_s) I_sA_s\Bigr )= \sum_{s=1}^3\sum_{a=1}^{4n} \nabla dh\,(I_s e_a, \xi_s)A_s(e_a),\\ \nabla^*\, \Bigl (\sum_{s=1}^3 dh(\xi_s) I_sA \Bigr )= \sum_{s=1}^3 \sum_{a=1}^{4n} \nabla dh\,(I_s e_a, \xi_s)A(e_a). \end{aligned} \end{equation} We define the following one-forms for $s=1,2,3$, \begin{equation} \label{d:D_s} \begin{aligned} D_s(X)\ =\ - \frac1{2h}\Big[T^0(X,\nabla h)+T^0(I_sX,I_s\nabla h)\Big], \qquad D(X)\ =\ - \frac1{h}\,T^{0}(X,\nabla h), \qquad F_s(X)\ =\ - \frac1{h}\, {T^0}(X,I_s\nabla h). \end{aligned} \end{equation} Using the fact that the tensor $T^0$ belongs to the $[-1]$-component, we obtain from \eqref{d:D_s} \begin{equation} \label{d:D_s1} \begin{aligned} D\ =\ D_1\ +\ D_2\ +\ D_3,\qquad F_i(X)\ =\ -D_i(I_iX)\ +\ D_j(I_i X)\ +\ D_k(I_iX), \end{aligned} \end{equation} where $(ijk)$ is a cyclic permutation of $(1,2,3)$. As a consequence of \eqref{div:To}, \eqref{e:A_s}, \eqref{e:A}, the qc Yamabe equation \eqref{defsy} and \eqref{Torh} taken with $\bar T^0=0$, we obtain after some calculations, see \cite{IMV1} for details, the following lemma. \begin{lemma}[\cite{IMV1}] \label{l:div of D} Suppose $(M, \eta)$ is a quaternionic contact manifold with constant qc-scalar curvature $S=16n(n+2)$. Suppose $\bar\eta=\frac1{2h}\eta$ has vanishing $[-1]$-torsion component $\overline T^0=0$. Then we have \begin{equation*} D(X)\ =\ \frac 14 h^{-2}\Bigl (3\ {\nabla dh} (X, \nabla h)\ -\ \sum_{s=1}^3 {\nabla dh} (I_sX, I_s\nabla h) \Bigr )+\ h^{-2}\sum_{s=1}^3dh(\xi_s)\,dh (I_sX).
\end{equation*} The divergence of $D$ is given by $\nabla^*\, D\ =\ \lvert T^0 \rvert^2\ -h^{-1}\sum_{a=1}^{4n}dh(e_a)D(e_a)\ -\ h^{-1} (n+2)\,\sum_{a=1}^{4n}dh(e_a)A(e_a),$ while the divergence of $\sum_{s=1}^3 dh(\xi_s) F_s$ is \begin{multline*} \nabla^*\, \Bigl (\sum_{s=1}^3 dh(\xi_s) F_s\Bigr )\ =\ \sum_{s=1}^3 \sum_{a=1}^{4n}\Bigl [ \nabla dh\, (I_se_a,\xi_s)F_s(I_se_a)\Bigr] \\ + \ h^{-1}\sum_{s=1}^3 \sum_{a=1}^{4n}\Bigl[dh(\xi_s)dh (I_se_a)D(e_a)\ +(n+2)\,dh(\xi_s)dh (I_s e_a)\, A(e_a)\Bigr ]. \end{multline*} \end{lemma} \subsubsection{Solution of the qc-Yamabe equation in 7-D}\label{ss:7D QC Heisenebrg Yamabe} At this point we restrict our considerations to the 7-dimensional case and turn to the proof of a key divergence formula motivated by the Riemannian and CR cases of the considered problem. As in the CR case \cite{JL3}, the Bianchi identities \cite[Theorem 4.8]{IMV} are not enough for the proof, unlike what happens in the Riemannian case, as we saw in the proof of Theorem \ref{t:Obata Yamabe}. In fact, the proof of Theorem~\ref{t:div formula} follows by an integration of the divergence formula \eqref{e:div formula}, which implies $T^0=0$. In dimension seven the tensor $U$ vanishes identically, $U=0$, and \eqref{sixtyfour} yields the claim. Thus, the crux of the proof of Theorem \ref{t:div formula} is the next formula: for $ f\ = \ \frac 12\ +\ h\ +\ \frac 14 h^{-1}\lvert \nabla h \rvert^2$, the following identity holds \begin{equation}\label{e:div formula} \nabla^*\Big(fD\ +\ \sum_{s=1}^3 dh(\xi_s)\, F_s \ +\ 4\sum_{s=1}^3 dh(\xi_s)I_sA_s \ -\ \frac {10}{3}\sum_{s=1}^3 dh(\xi_s)\,I_s A \Big) = f\lvert T^0\rvert^2 + h\,VLV^t.
\end{equation} Here, $L$ is the following positive semi-definite matrix \begin{equation*} L=\left[ {\begin{array}{cccccc} 2 & 0 & 0 & {\displaystyle\frac{10}{3}} & -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} \\[2ex] 0 & 2 & 0 & -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{10}{3}} & -{\displaystyle\frac{2}{3}} \\[2ex] 0 & 0 & 2 & -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{10}{3}} \\[2ex] {\displaystyle\frac{10}{3}} & -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{22}{3}} & -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} \\[2ex] -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{10}{3}} & -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{22}{3}} & -{\displaystyle\frac{2}{3}} \\[2ex] -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{10}{3}} & -{\displaystyle\frac{2}{3}} & -{\displaystyle\frac{2}{3}} & {\displaystyle\frac{22}{3}} \end{array}}\right] \end{equation*} and $V=( D_1, D_2, D_3,A_1,A_2, A_3)$, with $A_s$, $D_s$ defined in \eqref{d:A_s} and \eqref{d:D_s}, respectively. We sketch the proof of \eqref{e:div formula}. Recall that in dimension seven, $n=1$, the $[3]$-part of the Biquard torsion vanishes identically, $U=0$. Then \eqref{defU} together with the Yamabe equation \eqref{defsy} imply \begin{equation}\label{defUh} \nabla^2h(X,\nabla h)+\sum_{s=1}^3\nabla^2h(I_sX,I_s\nabla h)- (2-4h+3h^{-1}{|\nabla h|}^2)dh(X)=0. \end{equation} Combining \eqref{e:A_s}, \eqref{diverAA}, \eqref{defUh} and the formulas in Lemma~\ref{l:div of D}, it is easy to check the formula of the theorem. It is not hard to see that the eigenvalues of $L$ are given by \begin{equation*} \{0,\quad 0,\quad 2\,(2+\sqrt{2}),\quad 2\,(2-\sqrt{2}),\quad 10,\quad 10\}, \end{equation*} which shows that $L$ is a positive semi-definite matrix.
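The stated spectrum of $L$ can be confirmed numerically (our own sketch, using numpy, not part of the original argument); note the two zero eigenvalues, so $L$ is only positive semi-definite:

```python
import numpy as np

# The 6x6 matrix L from the divergence formula, entries in thirds.
L = np.array([
    [2,     0,     0,    10/3, -2/3, -2/3],
    [0,     2,     0,   -2/3,  10/3, -2/3],
    [0,     0,     2,   -2/3, -2/3,  10/3],
    [10/3, -2/3, -2/3,   22/3, -2/3, -2/3],
    [-2/3, 10/3, -2/3,  -2/3,  22/3, -2/3],
    [-2/3, -2/3, 10/3,  -2/3, -2/3,  22/3],
])

eig = np.sort(np.linalg.eigvalsh(L))
expected = np.sort([0, 0, 2*(2 - np.sqrt(2)), 2*(2 + np.sqrt(2)), 10, 10])

assert np.allclose(eig, expected)   # spectrum {0, 0, 2(2±sqrt(2)), 10, 10}
assert np.all(eig > -1e-9)          # L is positive semi-definite
```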
\subsubsection{The 7-D qc Yamabe problem on the sphere and qc Heisenberg group} At this point we are ready to complete the proof of Theorem \ref{t:Yamabe}. Recall that the Cayley transform \eqref{e:Cayley transf ctct form} is a conformal quaternionic contact diffeomorphism, hence up to a constant multiplicative factor and a quaternionic contact automorphism the forms $\mathcal{C}_*\tilde\eta$ and $\tilde\Theta$ are conformal to each other. It follows that the same is true for $\mathcal{C}_*\eta$ and $\tilde\Theta$. In addition, $\tilde\Theta$ is qc-Einstein by definition, while $\eta$, and hence also $\mathcal{C}_* \eta$, are qc-Einstein as we already observed. According to Theorem~\ref{t:einstein preserving}, up to a multiplicative constant factor, the forms $\mathcal{C}_*\tilde\eta$ and $\mathcal{C}_*\eta$ are related by a translation or dilation on the Heisenberg group. Hence, we conclude that up to a multiplicative constant, $\eta$ is obtained from $\tilde\eta$ by a conformal quaternionic contact automorphism, which proves the first claim of Theorem \ref{t:Yamabe}. From the conformal properties of the Cayley transform and the existence theorem \cite{Va1} it follows that the minimum $\Upsilon(S^{4n+3},[{\tilde\eta}])$ is achieved by a smooth 3-contact form, which due to the Yamabe equation is of constant qc-scalar curvature. This completes the proof of Theorem \ref{t:Yamabe} a). The proof of part b) is reduced to part a) by ``lifting'' the analysis to the sphere via the Cayley transform. A point which requires some analysis is that we actually obtain qc-structures on the whole sphere. This follows from the properties of the Kelvin transform, which sends a solution of the Yamabe equation to a solution of the Yamabe equation, see \cite[Section 5.2]{IMV1} or \cite[Section 6.6]{IV2} for the details. A similar argument will be used in the proof of Theorem \ref{t:qcLiouville}. \subsection{The uniqueness theorem in a 3-Sasakian conformal class.
} We mention that, similarly to the Riemannian and CR cases, it is expected that the qc-conformal class of a unit volume qc-Einstein structure contains a unique metric of constant scalar curvature, with the exception of the 3-Sasakian sphere, see Section \ref{ss:Sasaki-Einstein uniqieness Yamabe} for a comparison with the CR case. The problematic part was the first step as outlined in Section \ref{ss:Sasaki-Einstein uniqieness Yamabe}: Theorem \ref{t:div formula} supplies the first step in dimension seven, while the higher dimensional cases remained open. A proof extending the seven dimensional case can be found in \cite{IMV15a}, where the reader can also find a proof of the uniqueness result. \section{The CR Lichnerowicz and Obata theorems}\label{s:CR Lichnerowicz-Obata} In accordance with Convention \ref{convention}, in this section we use the non-negative definite sub-Laplacian, $\triangle u=-\mathrm{tr}^g(\nabla^2 u)$ for a function $u$ on a strictly pseudoconvex pseudohermitian manifold $M$ with a Tanaka-Webster connection $\nabla$. Also, the divergence of a vector field is taken with a minus sign, hence we have $\triangle u=\nabla^*(\nabla u)=-\sum_{a=1}^{2n}\nabla^2u(e_a,e_a)$ for an orthonormal basis of the horizontal space. \subsection{The CR Lichnerowicz first eigenvalue estimate} From the sub-ellipticity of the sub-Laplacian on a strictly pseudoconvex CR manifold it follows that on a compact manifold its spectrum is discrete. It is therefore natural to ask if there is a sub-Riemannian version of Theorem \ref{t:Riem LichObata}. In fact, a CR analogue of the Lichnerowicz theorem was found by Greenleaf \cite{Gr} for dimensions $2n+1>5$, while the corresponding results for $n=2$ and $n=1$ were achieved later in \cite{LL} and \cite{Chi06}, respectively.
As already observed in Theorem \ref{t:first eigenspace Iwasawa}, the standard Sasakian unit sphere has first eigenvalue equal to $2n$, with eigenspace spanned by the restrictions of all linear functions to the sphere, hence the following result is sharp. \begin{thrm}[\cite{Gr,LL,Chi06}]\label{t:CR Lich} Let $(M,\eta)$ be a compact strictly pseudoconvex pseudohermitian manifold of dimension $2n+1$ such that for some constant $k_0>0$ we have the Lichnerowicz-type bound \begin{equation} \label{condm-app} Ric(X,X)+ 4A(X,JX)\geq k_0 g(X,X), \qquad X\in H. \end{equation} If $n>1$, then any eigenvalue $\lambda$ of the sub-Laplacian satisfies $\lambda \ge \frac{n}{n+1}k_0$. If $n=1$, the estimate $\lambda \ge \frac{1}{2}k_0$ holds assuming in addition that the CR-Paneitz operator is non-negative, i.e., for any smooth function $f$ we have $ \int_M f\cdot Cf \, Vol_{\eta}\geq 0, $ where $C$ is the CR-Paneitz operator. \end{thrm} We recall that the fourth-order CR-Paneitz operator written in real coordinates is defined by the formula \begin{equation*} Cf=\sum_{a,b=1}^{2n}\nabla ^{4}f(e_a,e_a,e_{b},e_{b})+\sum_{a,b=1}^{2n}\nabla ^{4}f(e_a,Je_a,e_{b},Je_{b}) -4n\nabla^* A(J\nabla f)-4n\,g(\nabla^2 f,JA). \end{equation*} In view of the prominent role of the CR-Paneitz operator in the geometric analysis on a three dimensional CR manifold, we pause for a moment to give an idea of several of its occurrences. We start with a few definitions. Given a function $f$ we define the one-form \[ P_{f}(X)=\sum_{b=1}^{2n}\nabla ^{3}f(X,e_{b},e_{b})+\sum_{b=1}^{2n}\nabla ^{3}f(JX,e_{b},Je_{b})+4nA(X,J\nabla f), \] so we have $Cf=-\nabla ^{\ast }P_f$. The CR Paneitz operator is called non-negative if $$ \int_M f\cdot Cf \, Vol_{\eta}=-\int_MP_f(\nabla f) \, Vol_{\eta} \geq 0, \qquad f\in \mathcal{C}^\infty_0(M).
$$
In the three dimensional case the positivity condition is a CR invariant, since it is independent of the choice of the contact form, which follows from the conformal invariance of $C$ proven in \cite{Hi93}. In the case of vanishing pseudohermitian torsion we have, up to a multiplicative constant, $C=\Box_b\bar\Box_b$, where $\Box_b$ is the Kohn Laplacian, hence the CR-Paneitz operator is also non-negative. This property is in fact true for any $n>1$, which can be seen through the relation between the CR-Paneitz operator and the $[1]$-component of the horizontal Hessian $(\nabla ^{2}f)(X,Y)$ found in \cite{L1,GL88}. For this, consider the tensor $B(X,Y)$ defined by
\begin{equation*}
B(X,Y)\equiv B[f](X,Y)=(\nabla ^{2}f)_{[1]}(X,Y) =\frac{1}{2}\left[ (\nabla ^{2}f)(X,Y)+(\nabla ^{2}f)(JX,JY)\right]
\end{equation*}
and also the completely traceless part of $B$,
\begin{equation*}
B_{0}(X,Y)\equiv B_{0}[f](X,Y)=B(X,Y)+\frac{\triangle f}{2n} g(X,Y)-\frac{1}{2n} g(\nabla^2f,\omega)\,\omega (X,Y).
\end{equation*}
Then we have the formula \cite{L1,GL88},
\begin{equation}\label{l:GrLee}
\begin{aligned}
\sum_{a=1}^{2n}(\nabla_{e_{a}} B_0)(e_{a},X) & =\frac{n-1}{2n}P_f(X), \\
\int_M |B_0|^2\, Vol_{\eta} & =-\frac{n-1}{2n}\int_M P_f(\nabla f)\, Vol_{\eta} =\frac{n-1}{2n}\int_M f\cdot (Cf)\, Vol_{\eta}.
\end{aligned}
\end{equation}
In particular, if $n>1$ the CR-Paneitz operator is non-negative. As an application of this result we recall \cite{L1}, see also \cite{BF74} and \cite{Be80}, according to which if $n\geq 2$, a function $f\in \mathcal{C}^3(M)$ is CR-pluriharmonic, i.e., locally it is the real part of a CR holomorphic function, if and only if $B_0[f]=0$. By \eqref{l:GrLee}, the single fourth-order equation $Cf=0$ suffices for $B_0[f]=0$ to hold. When $n=1$ the situation is more delicate. In the three dimensional case, CR-pluriharmonic functions are characterized by the third order equation $P[f]=0$ \cite{L1}.
However, the single equation $Cf=0$ is enough again assuming the vanishing of the pseudohermitian torsion \cite{GL88}, see also \cite{Gra83ab}. On the other hand, \cite{CCC07} showed that if the pseudohermitian torsion vanishes, the CR-Paneitz operator is essentially positive, i.e., there is a constant $\Lambda>0$ such that
\begin{equation*}
\int_M f\cdot (Cf) \, Vol_{\eta} \geq \Lambda\int_M f^2 \, Vol_{\eta}
\end{equation*}
for all real smooth functions $f\in (Ker\, C)^{\perp}$, i.e., such that $\int_M f\cdot \phi \, Vol_{\eta} =0$ whenever $C\phi=0$. In addition, the non-negativity of the CR-Paneitz operator is relevant in the embedding problem for a three dimensional strictly pseudoconvex CR manifold. In the Sasakian case it is known that $M$ is embeddable, \cite{Le92}, and the CR-Paneitz operator is non-negative, see \cite{Chi06}, \cite{CCC07}. Furthermore, \cite{ChChY13} showed that if the pseudohermitian scalar curvature of $M$ is positive and $C$ is non-negative, then $M$ is embeddable in some $\mathbb{C}^n$.

After these preliminaries we are ready to sketch the proof of Theorem \ref{t:CR Lich}.
\subsubsection{Proof of the CR Lichnerowicz type estimate}\label{ss:CR Lich est}
We shall use real coordinates as in \cite{IVZ,IV2} and rely on the proof described in detail in \cite[Section 8.3]{IVO}, valid for $n\geq 1$. Not surprisingly, a key to the solution is the CR Bochner identity due to \cite{Gr},
\begin{equation} \label{bohh}
-\frac12\triangle |\nabla f|^2=|\nabla df|^2-g(\nabla(\triangle f),\nabla f)+Ric(\nabla f,\nabla f)+2A(J\nabla f,\nabla f) + 4\nabla df(\xi,J\nabla f).
\end{equation}
The last term can be related to the traces of $\nabla^2 f$, \cite{Gr},
\[
\int_M\nabla^2f(\xi,J\nabla f)\, Vol_{\eta}=-\int_M\Big[\frac{1}{2n}g(\nabla^2f,\omega)^2 +A(J\nabla f,\nabla f)\Big] \, Vol_{\eta},
\]
and also, using the CR-Paneitz operator,
\[
\int_M\nabla^2f(\xi,J\nabla f)\, Vol_{\eta} =\int_{M}\Big[-\frac{1}{2n}\left( \triangle f\right) ^{2}+A(J\nabla f,\nabla f)-\frac{1}{2n}P_f(\nabla f)\Big]\, Vol_{\eta}.
\]
Integrating the CR Bochner identity (for an arbitrary function $f$) and using the last two formulas for the term $\int_M\nabla^2f(\xi,J\nabla f)\, Vol_{\eta}$ we find
\begin{multline*}
0=\int_M Ric(\nabla f,\nabla f)+4A(J\nabla f,\nabla f)-\frac {n+1}{n}(\triangle f)^2\, Vol_{\eta}\\
+\int_M \left |(\nabla^2f)\right |^2 -\frac {1}{2n}(\triangle f)^2 -\frac {1}{2n}g(\nabla^2 f,\omega)^2\, Vol_{\eta} -\frac {3}{2n}\int_M P_f(\nabla f)\, Vol_{\eta}.
\end{multline*}
Noticing that $\Big \{\frac{1}{\sqrt{2n}}g,\ \frac{1}{\sqrt{2n}}\omega\Big\}$ is an orthonormal set in the $[1]$-space with non-zero traces, we have
$$\left |(\nabla^2f)_{[0]}\right |^2 \overset{def}{=}\left |(\nabla^2f)\right |^2 -\frac {1}{2n}(\triangle f)^2 -\frac {1}{2n}g(\nabla^2 f,\omega)^2.$$
Let us assume at this point that $\triangle f=\lambda f$ and the ``Ricci'' bound \eqref{condm-app} to obtain the inequality
\begin{equation*}
0\geq\int_M\left (k_0 -\frac {n+1}{n}\lambda\right )|\nabla f|^2\, Vol_{\eta}+ \int_M\left |(\nabla^2f)_{[0]}\right |^2\, Vol_{\eta} -\frac {3}{2n}\int_MP_f(\nabla f)\, Vol_{\eta},
\end{equation*}
which implies $ \lambda \ge \frac{n}{n+1}k_0$, with equality holding iff
\begin{equation}\label{e:CR equality hessian}
\nabla^2f= -\frac {1}{2n}(\triangle f)\cdot g+\frac {1}{2n}g (\nabla^2f,\omega)\cdot\omega
\end{equation}
and $\int_M P_f(\nabla f)\, Vol_{\eta}=0$, taking into account the extra assumption for $n=1$. The proof of Theorem \ref{t:CR Lich} is complete.
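Let us record explicitly why the final inequality in the proof above yields the eigenvalue estimate. Since $\triangle=\nabla^*\nabla$, for a nonconstant eigenfunction $f$ with $\triangle f=\lambda f$ an integration by parts gives
\begin{equation*}
\int_M |\nabla f|^2\, Vol_{\eta}=\int_M f\cdot \triangle f\, Vol_{\eta}=\lambda \int_M f^2\, Vol_{\eta}>0,
\end{equation*}
so, after discarding the two non-negative terms, the inequality forces $k_0-\frac{n+1}{n}\lambda\leq 0$, i.e., $\lambda\geq \frac{n}{n+1}k_0$. Note also that on the standard Sasakian sphere, where $A=0$ and, in the normalization used here, one may take $k_0=2(n+1)$, the estimate reads $\lambda\geq 2n$, which is attained by the restrictions of the linear functions, in agreement with the sharpness noted before Theorem \ref{t:CR Lich}.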
\subsection{The CR Obata type theorem}\label{ss:CR Obata}
\begin{thrm}[\cite{IVO}]\label{main11}
Let $(M, \theta)$ be a strictly pseudoconvex pseudohermitian CR manifold of dimension $2n+1$ with a divergence-free pseudohermitian torsion, $\nabla^*A=0$. Assume, further, that $M$ is complete with respect to the associated Riemannian metric \eqref{hmetric}. If $n\geq 2$ and there is a smooth function $f\not\equiv 0$ whose Hessian with respect to the Tanaka-Webster connection satisfies
\begin{equation}\label{e:hessian}
\nabla ^{2}f(X,Y)=-fg(X,Y)-df(\xi )\omega (X,Y), \qquad X, Y \in H=Ker\, \theta,
\end{equation}
then, up to a scaling of $\theta$ by a positive constant, $(M,\theta)$ is the standard (Sasakian) CR structure on the unit sphere in $\mathbb{C}^{n+1}$. In dimension three, $n=1$, the above result holds provided the pseudohermitian torsion vanishes, $A=0$.
\end{thrm}
This is the best known result for a complete non-compact $M$, unlike the Riemannian and QC cases, where the corresponding results are valid without any conditions on the torsion, see the paragraph after \eqref{e:Riem Hess eqn} and Theorem \ref{main2}. It should be noted that, besides the Sasakian condition, when $n=1$ one can invoke assumptions such as the vanishing of the divergence of the torsion, the vanishing of the CR-Paneitz operator, or the equality case in \eqref{condm-app}. We insist on the strongest assumption, which avoids many of the technicalities appearing when a combination of these assumptions is made, while still achieving a (probably) non-optimal result. Results of this nature can be found by combining identities proven in \cite{IVO}. In the compact case, with the help of a clever integration argument, the authors of \cite{LW,LW1} were able to complete the arguments of \cite{IVO,IV3} and remove the assumption of divergence-free torsion $\nabla^*A=0$ for $n\geq 2$, while the case $n=1$ was completed in \cite{IV3}.
Taking into account \eqref{e:CR equality hessian}, a consequence of these results is the Obata type theorem characterizing the case of equality in Theorem \ref{t:CR Lich}. We note that Theorem \ref{main11} actually shows that in the compact Sasakian case \eqref{e:hessian} characterizes the unit Sasakian sphere. This fact, in addition to other results of \cite{IVO}, reappeared in \cite{LW1}.
\begin{thrm}[\cite{LW,LW1,IV3}]\label{t:CR Obata}
Suppose $(M, J,\eta)$, $\dim M=2n+1$, is a compact strictly pseudoconvex pseudohermitian manifold which satisfies the Lichnerowicz-type bound \eqref{condm-app}. If $n\geq 2$, then $\lambda = \frac{n}{n+1}k_0$ is an eigenvalue iff, up to a scaling, $(M, J,\eta)$ is the standard pseudo-Hermitian CR structure on the unit sphere in $\mathbb{C}^{n+1}$. If $n=1$, the same conclusion holds assuming in addition that the CR-Paneitz operator is non-negative, $C\geq 0$.
\end{thrm}
Among the earlier papers which contributed to the proof of the above theorem, S.-C. Chang and H.-L. Chiu proved it in the Sasakian case for $n\geq 2$ in \cite{CC09a} and for $n=1$ in \cite{CC09b}. The non-Sasakian case was considered by S.-C. Chang and C.-T. Wu in \cite{ChW2}, assuming $ A_{\alpha\beta,\, \bar\beta}=0$ for $n\geq 2$, and $A_{\alpha\beta,\, \gamma\bar \gamma}=0$ together with $A_{11,\, \bar 1}=0$ for $n=1$.

Let us give an idea of the proof of Theorem \ref{main11} following \cite{IV3}. The first step is to show the vanishing of the Webster torsion $A$. We shall make clear where the cases $n=1$ and $n>1$ diverge. Using the Ricci identity
\begin{multline}\label{e:CR XYxi Ricci}
\nabla^3 f(X,Y,\xi)=\nabla^ 3 f (\xi,X,Y)+\nabla^2f (AX,Y)+\nabla^2f (X,AY) +(\nabla_X A)(Y,\nabla f)\\
+(\nabla_Y A)(X,\nabla f)-(\nabla_{\nabla f} A)( X,Y),
\end{multline}
in which we substitute the term $\nabla^ 3 f (\xi,X,Y)$ by its expression obtained after differentiating \eqref{e:hessian}, we come to the next equation \cite[(3.3)]{IVO},
\begin{multline}
\nabla ^{3}f(X,Y,\xi )=-df(\xi )g(X,Y)-(\xi ^{2}f)\omega (X,Y)-2fA(X,Y) \label{e:D3f extremal bis} +(\nabla_X A)(Y,\nabla f)+(\nabla_Y A)(X,\nabla f)\\
-(\nabla_{\nabla f} A)(X,Y).
\end{multline}
With the help of the Ricci identities, \eqref{e:hessian}, \eqref{e:CR Ricci 2-from} and \eqref{e:CR Ric type decomp}, we obtain a formula for $R(X,Y,Z,\nabla f)$, \cite[(4.1)]{IVO},
\begin{multline}\label{eqc1}
R(Z,X,Y,\nabla f)=\Big[df(Z)g(X,Y)-df(X)g(Z,Y)\Big]+\nabla df(\xi ,Z)\omega (X,Y) -\nabla df(\xi ,X)\omega (Z,Y)\\-2\nabla df(\xi ,Y)\omega (Z,X)+A(Z,\nabla f)\omega (X,Y) -A(X,\nabla f)\omega (Z,Y),
\end{multline}
which after taking traces gives identities for $Ric(X,\nabla f)$ and $Ric(JX,J\nabla f)$, \cite[(4.2)]{IVO},
\begin{equation}\label{eqc02}
\begin{aligned}
& Ric(Z,\nabla f)=(2n-1)df(Z)-A(JZ,\nabla f)-3\nabla df(\xi,JZ),\\
& Ric(JZ,J\nabla f)=df(Z)-(2n-1)A(JZ,\nabla f)-(2n+1)\nabla df(\xi,JZ).
\end{aligned}
\end{equation}
Note that when $n=1$, $Ric(X,Y)=Ric(JX,JY)$, hence the identities for $Ric(X,\nabla f)$ and $Ric(JX,J\nabla f)$ \emph{coincide}, which is the reason for the assumption $n>1$ when $A\not=0$. For $n>1$, taking the $[-1]$-part of $R(\cdot,\cdot,X,Y)$ it follows that
\begin{equation}\label{e:vhessian}
\nabla ^{2}f(Y,\xi )=df(JY)+2A(Y,\nabla f).
\end{equation}
Using the formula for the curvature \eqref{eqc1}, we come to $|\nabla f|^{2}A(Y,Z) =df(Y)A(\nabla f,Z)-df(JY)A(\nabla f,JZ)$, found in the proof of \cite[Lemma 4.1]{IVO}. Hence, the Webster torsion is determined by $A(J\nabla f,\nabla f)$ as follows:
\begin{equation}\label{e:A formula}
|\nabla f|^4A(X,Y)=-A(J\nabla f,\nabla f)\left [df(X)df(JY)+df(Y)df(JX) \right],
\end{equation}
which implies, in particular, $A(\nabla f,\nabla f)=0$.
On the other hand, from \eqref{e:vhessian} we have \cite[(4.9)]{IVO},
\begin{multline} \label{e:D3f extremal}
\nabla ^{3}f(X,Y,\xi ) =-df(\xi )g(X,Y)+f\omega (X,Y)-2fA(X,Y)-2df(\xi )A(JX,Y) +2\nabla A(X,Y,\nabla f).
\end{multline}
For $n>1$, equations \eqref{e:D3f extremal} and \eqref{e:D3f extremal bis} imply the identity, see the formula in the proof of \cite[Lemma 4.3]{IVO},
\begin{multline}\label{e:old key identity}
2df(\xi )A(JX,Y)-(\nabla_{\nabla f} A)(X,Y)=(\xi ^{2}f)\omega (X,Y)+f\omega (X,Y)+(\nabla_X A)(Y,\nabla f) -(\nabla_Y A)(X,\nabla f).
\end{multline}
Notice that the left-hand side is symmetric while the right-hand side is skew-symmetric, hence they both vanish,
\begin{equation}\label{e:CR key identity}
(\nabla_{\nabla f} A)(X,Y)=2df(\xi)A(JX,Y)\qquad \text{and }\qquad (\nabla_X A)(Y,\nabla f) =(\nabla_Y A)(X,\nabla f),
\end{equation}
taking into account $\nabla ^{2}f(\xi ,\xi )=-f+\frac 1n(\nabla^* A)(J\nabla f)=-f$ when $\nabla^*A=0$, which follows by taking a trace in the (vanishing) right-hand side of \eqref{e:old key identity}, see \cite[Lemma 4.3]{IVO}.
\begin{rmrk}\label{r:CR cpct key}
Notice that the first identity implies $g({\nabla f},\nabla|A|^2)=0$.
\end{rmrk}
The first equation of \eqref{e:CR key identity} gives $(\nabla_{\nabla f} A)(J\nabla f,\nabla f)=-2df(\xi)A(\nabla f,\nabla f)=0$, as we already showed above, see \eqref{e:A formula}. Finally, differentiating the identity $A(\nabla f,\nabla f)=0$ and using \eqref{e:hessian}, we obtain $(\nabla_XA)(\nabla f,\nabla f)=2fA(\nabla f,X)-2df(\xi)A(J\nabla f,X)$, which shows $(\nabla_{\nabla f} A)(\nabla f,\nabla f)=-2df(\xi)A(J\nabla f,\nabla f)$. Therefore, $A(J\nabla f,\nabla f)=0$, which implies $|\nabla f|^4 A=0$. In order to conclude that $A=0$ we need to know that $f$ cannot be a local constant. For this and other facts we turn to the next step of the proof, where we show that $f$ satisfies an elliptic equation for which we can use a unique continuation argument.
We remark that the corresponding sub-elliptic result seems to be unavailable. Next, we observe that if $f$ satisfies \eqref{e:hessian}, then $f$ satisfies an elliptic equation \cite[Corollary 4.5 \& Lemma 5.1]{IVO},
\begin{align}\label{e:lap n >1}
\triangle^h f & =\triangle f- \nabla ^{2}f(\xi ,\xi )=(2n+1)f-\frac 1n(\nabla^* A)(J\nabla f), \qquad \text{ if }\ n>1, \\\label{e:lap n=1}
\triangle^h f & = \left ( 2+ \frac{S-2}{6} \right ) f -\frac{1}{12}g(\nabla f,\nabla S) +\frac 13(\nabla^* A)(J\nabla f), \qquad \text{ if }\ n=1,
\end{align}
where $\triangle^h $ is the Riemannian Laplacian associated to the Riemannian metric $h=g+\eta^2$ on $M$. In particular, $f$ cannot be a local constant. The equations follow from the formula relating the Levi-Civita and the Tanaka-Webster connections, see \cite[Lemma~1.3]{DT} and \cite[(4.15)]{IVO}, which shows
\begin{equation}\label{obsa}
-\triangle^h f =-\triangle f+ (\xi^2 f).
\end{equation}
When $n>1$, equation \eqref{e:lap n >1} follows taking into account (the line after) equation \eqref{e:CR key identity}. The case $n=1$ requires some further calculations, for which we refer to \cite[Lemma 5.1]{IVO}.

The final step of the proof of Theorem~\ref{main11} is a reduction to the corresponding Riemannian Obata theorem on a complete Riemannian manifold. In fact, we will show that the Riemannian Hessian, computed with respect to the Levi-Civita connection $D$ of the metric $h$, satisfies \eqref{e:Riem Hess eqn}, and then apply the Obata theorem to conclude that $(M,h)$ is isometric to the unit sphere. We should mention the influence of \cite{CC09a,CC09b}, where the compact Sasakian case is reduced to Theorem \ref{t:Riem LichObata}. For $n>1$, where we proved that $A=0$, we showed the validity of the next two identities
\begin{equation}\label{e:vhessianc}
\nabla ^{2}f(\xi ,Y)=\nabla ^{2}f(Y,\xi )=df(JY), \qquad \xi^2f=-f.
\end{equation}
Next, we show that \eqref{e:vhessianc} also holds in dimension three when the pseudohermitian torsion vanishes. In the three dimensional case we have $Ric(X,Y)=\frac{S}2g(X,Y)$. After a substitution of this equality in \eqref{eqc02}, taking into account $A=0$, we obtain
\begin{equation}\label{hes3}
\nabla^2f(\xi,Z)=\nabla^2f(Z,\xi)=\frac{(S-2)}6df(JZ).
\end{equation}
Differentiating \eqref{hes3} and using \eqref{e:hessian} we find
\begin{equation}\label{hes31}
\nabla^3f(Y,Z,\xi)=\frac16\Big[dS(Y)df(JZ)+(S-2)f\omega(Y,Z)\Big] -\frac 16(S-2)df(\xi)g(Y,Z).
\end{equation}
On the other hand, setting $A=0$ in \eqref{e:D3f extremal bis}, we have
\begin{equation}\label{hes32}
\nabla^3f(Y,Z,\xi)=-df(\xi)g(Y,Z)-(\xi^2f)\omega(Y,Z).
\end{equation}
In particular, the function $\xi f$ also satisfies \eqref{e:hessian}. From \eqref{e:lap n=1} and unique continuation, $\xi f\not=0$ almost everywhere, since otherwise, taking into account \eqref{hes3}, we would have $\nabla f=0$, hence $f\equiv 0$, which is not possible by assumption. Now, \eqref{hes31} and \eqref{hes32} give
\begin{equation}\label{hes33}
\frac{S-8}6df(\xi)g(Y,Z)-\Big(\xi^2f+\frac{S-2}6f\Big)\omega(Y,Z)-\frac16dS(Y)df(JZ)=0,
\end{equation}
which implies $\frac{S-8}3df(\xi)|\nabla f|^2=0$. Thus, the pseudohermitian scalar curvature is constant, $S=8$, invoking again the local non-constancy. Since $dS=0$, equation \eqref{hes33} reduces to $\Big(\xi^2f+\frac{S-2}6f\Big)\omega(Y,Z)=0$, which yields $\xi^2f=-f$. The latter, together with \eqref{hes3} and $S=8$, implies the validity of \eqref{e:vhessianc} in dimension three. Finally, we use the relation between $D$ and $\nabla$, \cite[Lemma~1.3]{DT}, which in the case $A=0$ simplifies to
\begin{equation}\label{wh}
D_BC=\nabla_BC+\theta(B)JC+\theta(C)JB-\omega(B,C)\xi, \quad B,C\in \mathcal{T}(M),
\end{equation}
where $J$ is extended with $J\xi=0$.
Using \eqref{wh} together with \eqref{e:hessian} and \eqref{e:vhessianc}, we calculate easily that \eqref{e:Riem Hess eqn} holds. The proof of Theorem~\ref{main11} is complete.
\subsection{Proof of the Obata CR eigenvalue theorem in the compact case}\label{ss:CR cpct Obata proof}
Turning to Theorem \ref{t:CR Obata}, we mention that when $n=1$ we need to find an alternative to the ``missing'' equation \eqref{e:vhessian}, see the remark after \eqref{eqc02}. In fact, in \cite[Lemma 5.1]{IV3} it was shown that in this case (assuming the Lichnerowicz type condition), if $\triangle f=2f$, then we have $A(\nabla f,\nabla f)=0$ and \eqref{e:vhessian} holds true. This is proved with an integration (using the compactness!) of the ``vertical Bochner'' formula \cite[Remark 3.5]{IVO}
\begin{equation} \label{e:vertical Bochner}
-\triangle (\xi f)^2 = 2|\nabla(\xi f)|^2-2df(\xi)\cdot \xi (\triangle f) + 4df(\xi)\cdot g(A,\nabla^2 f) -4df (\xi)(\nabla^*A)(\nabla f).
\end{equation}
At this point, identities \eqref{eqc02}--\eqref{e:CR key identity} are available for $n\geq 1$, which imply in particular that the identity in Remark \ref{r:CR cpct key} holds true. We come to the idea of \cite[Lemma 4]{LW1}, where integration by parts involving suitable powers of $f$ is used in order to conclude $A=0$. By Remark \ref{r:CR cpct key} we have, for any $k>0$,
\begin{equation}\label{e:CR1}
g(\nabla f,\nabla|A|^k) =0,
\end{equation}
while the Lichnerowicz condition implies \textit{point-wise} the inequality $A(\nabla f,J\nabla f)\leq 0$, hence
\begin{equation}\label{e:CR2}
|\nabla f|^2 |A|=-\sqrt{2} A(\nabla f, J\nabla f).
\end{equation}
Next we shall use an integration by parts argument similar to \cite{LW1} for the case $n=1$, i.e., $\triangle f=2f$.
Using \eqref{e:CR1} and $\triangle f=2f$, we have
$$I\overset{def}{=}\int_M |A|^3f^{2(k+1)}\, Vol_{\eta}=\frac {1}{2}\int_M |A|^3f^{2k+1}\triangle f \, Vol_{\eta}=\frac {2k+1}{2}\int_M |A|^3f^{2k}|\nabla f|^2 \, Vol_{\eta}\overset{def}{=}\frac {2k+1}{2}{D}.$$
From \eqref{e:CR2}, after an integration by parts, it follows that
\begin{multline*}
\frac{2k+1}{\sqrt{2}}\,D =-\int_M|A|^2f^{2k+1}(\nabla^* A)(J\nabla f)\, Vol_{\eta} \leq ||\nabla^* A||\int_M |A|^2f^{2k+1}|\nabla f|\, Vol_{\eta}\\
\leq \frac {||\nabla^* A||}{a}\int_M f^{k+1}\,f^k|\nabla f|\, |A|^3\, Vol_{\eta},
\end{multline*}
assuming $|A|\geq a>0$, so that $|A|^2\leq\frac {1}{a}|A|^3$. Now, H\"older's inequality gives
\begin{multline*}
\frac{2k+1}{\sqrt{2}}\,D \leq \frac {||\nabla^* A||}{a}\Big ({\int_M |A|^3f^{2(k+1)}\, Vol_{\eta}}\Big)^{1/2}\, \Big( {\int_M |A|^3f^{2k}|\nabla f|^2\, Vol_{\eta} } \Big)^{1/2}\\
=\frac {||\nabla^* A||}{a} \, \Big( \frac {2k+1}{2}D\Big)^{1/2}\,D^{1/2}=\frac {||\nabla^* A||}{a} \Big( \frac{2k+1}{2}\Big)^{1/2}D.
\end{multline*}
By taking $k$ sufficiently large we conclude $D=0$, hence $A=0$. The assumption $|A|\geq a>0$ can be removed by employing a suitable cut-off function, see \cite{LW1}. Once we know that $M$ is Sasakian, one applies \cite{CC09a,CC09b}, where a reduction to Theorem \ref{t:Riem LichObata} is made.
\section{The Quaternionic Contact Lichnerowicz and Obata theorems}\label{s:QC Lichnerowicz-Obata}
This section concentrates on the qc versions of the Lichnerowicz and Obata eigenvalue theorems. As in the Riemannian and CR cases we are dealing with a sub-elliptic operator, hence its spectrum on a compact qc manifold is discrete. The Lichnerowicz type result was found in \cite{IPV1} in dimensions greater than seven and in \cite{IPV2} in the seven dimensional case.
Remarkably, compared with the CR case, the Obata type theorem characterizing the 3-Sasakian sphere through the horizontal Hessian equation holds under no extra assumptions on the Biquard torsion when the dimension of the qc manifold is at least eleven, as proven in \cite{IPV3}. The general qc Obata result in dimension seven remains open. We shall use freely the curvature and torsion tensors associated to a given qc structure as defined in Section \ref{ss:qc geometry}. As in the previous sections where eigenvalues were concerned, we shall use the non-negative sub-Laplacian, $\triangle u=-tr^g(\nabla^2 u)$.
\subsection{The QC Lichnerowicz theorem}
\begin{thrm}[\cite{IPV1,IPV2}] \label{mainpan}
Let $(M,\eta)$ be a compact QC manifold of dimension $4n+3$. Suppose that, with $\alpha_n=\frac {2(2n+3)}{2n+1}$ and $\beta_n=\frac {4(2n-1)(n+2)}{(2n+1)(n-1)}$, we have for any $ X\in H$
$$\mathcal{L}(X,X)\overset{def}{=}2 Sg(X,X)+\alpha_n T^0(X,X) +\beta_n U(X,X)\geq 4g(X,X).$$
If $n=1$, assume in addition the positivity of the $P$-function of any eigenfunction. Then, any eigenvalue $\lambda$ of the sub-Laplacian $\triangle$ satisfies the inequality $\lambda \ge 4n$.
\end{thrm}
The 3-Sasakian sphere achieves equality in the theorem. The eigenspace of the first non-zero eigenvalue of the sub-Laplacian on the unit 3-Sasakian sphere in Euclidean space is given by the restrictions to the sphere of all linear functions, by Theorem \ref{t:first eigenspace Iwasawa}.
\subsubsection{The QC P-function}
We turn to the definition of the QC $P$-function introduced in \cite{IPV2}. For a fixed smooth function $f$ we define a one form $P\equiv P_f \equiv P[f]$ on $M$, which we call the $P-$form of $f$, by the following equation:
\begin{equation*}
P_f(X) =\sum_{b=1}^{4n}\nabla ^{3}f(X,e_{b},e_{b})+\sum_{t=1}^{3}\sum_{b=1}^{4n}\nabla ^{3}f(I_{t}X,e_{b},I_{t}e_{b}) -4nSdf(X)+4nT^{0}(X,\nabla f)-\frac{8n(n-2)}{n-1}U(X,\nabla f).
\end{equation*}
The $P-$function of $f$ is the function $P_f(\nabla f)$.
The $C-$operator is the fourth-order differential operator on $M$, independent of $f$, given by $ f\mapsto Cf =-\nabla^* P_f=\sum_{a=1}^{4n}(\nabla_{e_a} P_f)\,(e_a). $ We say that the $P-$function of $f$ is non-negative if
$$\int_M f\cdot Cf \, Vol_{\eta}= -\int_M P_f(\nabla f)\, Vol_{\eta}\geq 0. $$
If the above holds for any $f\in \mathcal{C}^\infty_0\,(M)$ we say that the $C-$operator is \emph{non-negative}, $C\geq 0$. Several important properties of the $C-$operator were found in \cite{IPV2}. The first notable fact is that the $C-$operator is non-negative, $C\geq 0$, for $n>1$. Furthermore, $Cf=0$ iff $(\nabla^2f)_{[3][0]}(X,Y)=0$, where $[3][0]$ denotes the trace-free part of the $[3]$-part of the Hessian. In this case the $P-$form of $f$ vanishes as well. The key to the last result is the identity $\sum_{a=1}^{4n}(\nabla_{e_a}(\nabla^2f)_{[3][0]})(e_a,X)=\frac{n-1}{4n}P_f(X)$, hence
$$\frac{n-1}{4n}\int_Mf\cdot Cf\, Vol_{\eta}=-\frac{n-1}{4n}\int_MP_f(\nabla f)\, Vol_{\eta}=\int_M|(\nabla^2f)_{[3][0]}|^2\, Vol_{\eta},$$
after using the Ricci identities, the divergence formula and the orthogonality of the components of the horizontal Hessian. In dimension seven, the condition of non-negativity of the $C-$operator is also non-void. For example, \cite{IPV2} showed that on a 7-dimensional compact qc-Einstein manifold with $Scal\geq 0$ the $P-$function of an \emph{eigenfunction} of the sub-Laplacian is non-negative. The proof relies on several results. First, the qc-scalar curvature of a qc-Einstein structure is constant \cite{IMV,IMV3}. In the higher dimensions this follows from the Bianchi identities. However, the result is very non-trivial in dimension seven, where the qc-conformal curvature tensor $W^{qc}$, see Theorem \ref{T:flat}, is invoked in the proof.
Secondly, on a qc-Einstein manifold we have $\nabla ^{3}f(\xi _{s},X,Y)=\nabla^{3}f(X,Y,\xi _{s})$, the vertical space is integrable, and $\nabla ^{2}f(\xi _{k},\xi _{j})-\nabla ^{2}f(\xi _{j},\xi _{k})=-Sdf(\xi _{i})$. Finally, a calculation shows $\int_M|P_f|^2\, Vol_{\eta}=-(\lambda+4S)\int_M P_f(\nabla f) \, Vol_{\eta}$, which implies the claim.

At this point we can give the main steps in the proof of the Lichnerowicz type theorem following the $P-$function approach of \cite{IPV3}, which unified the seven and higher dimensional cases of \cite{IPV1,IPV2}, as we did in the CR case in the proof of Theorem \ref{t:CR Lich}. By the QC Bochner identity established in \cite{IPV1}, letting $R_f=\sum_{s=1}^3\nabla^2f(\xi_s,I_s\nabla f)$, we have
\begin{multline*}
-\frac12\triangle |\nabla f|^2=|\nabla^2f|^2-g\left (\nabla (\triangle f), \nabla f \right )+2(n+2)S|\nabla f|^2+2(n+2)T^0(\nabla f,\nabla f) +2(2n+2)U(\nabla f,\nabla f)\\ + 4R_f.
\end{multline*}
The ``difficult'' term $R_f$ can be computed in two ways. First, with the help of the $P$-function we have
$$\int_M R_f\, Vol_{\eta}=\int_M \Big[-\frac{1}{4n}P_f(\nabla f)-\frac{1}{4n}(\triangle f)^2-S|\nabla f|^2 +\frac{n+1}{n-1}U(\nabla f,\nabla f)\Big]\, Vol_{\eta}.$$
On the other hand, using the Ricci identities, $g(\nabla^2f , \omega_s) \overset{def}{=}\sum_{a=1}^{4n}\nabla^2f(e_a,I_se_a)=-4ndf(\xi_s)$, we have
$$\int_MR_f\, Vol_{\eta}=-\int_M\Big[\frac {1}{4n}\sum_{s=1}^3 g(\nabla^2 f, \omega_s)^2 +T^0(\nabla f,\nabla f)-3U(\nabla f,\nabla f)\Big] \, Vol_{\eta}.$$
A substitution of a linear combination of the last two identities in the QC Bochner identity shows
\begin{multline*}
0=\int_M|\nabla^2f|^2-\frac{1}{4n}\Big[(\triangle f)^2+\sum_{s=1}^{3}[g(\nabla^2f,\omega_s)]^2\Big]-\frac{3}{4n}P_f(\nabla f)\,Vol_{\eta} \\ + \frac{2n+1}{2}\int_M \mathcal{L}(\nabla f,\nabla f)-\frac{\lambda}{n}|\nabla f|^2\,Vol_{\eta}.
\end{multline*}
With the Lichnerowicz type assumption $\mathcal{L}(\nabla f,\nabla f)\geq 4|\nabla f|^2$, it follows that
\begin{equation*} \label{intin}
0\geq\int_M \Big[\left|(\nabla^2f)_{0}\right|^2 -\frac{3}{4n}P_f(\nabla f)\Big]\,Vol_{\eta} + \frac{2n+1}{2n}\int_M\Big(4n -\lambda\Big)|\nabla f|^2\, Vol_{\eta}.
\end{equation*}
For $n=1$, when $U=0$ trivially, the formulas remain correct even after formally removing the torsion tensor $U$ terms, which completes the proof of Theorem \ref{mainpan}.
\subsection{The QC Obata type theorem}
\begin{thrm}[\cite{IPV3}]\label{main2}
Let $(M,\eta)$ be a quaternionic contact manifold of dimension $4n+3>7$ which is complete with respect to the associated Riemannian metric $h=g+(\eta_1)^2+(\eta_2)^2+(\eta_3)^2$. There exists a smooth function $f\not\equiv const$ such that
$ \nabla df(X,Y)=-fg(X,Y)-\sum_{s=1}^{3}df(\xi _{s})\omega _{s}(X,Y) $
if and only if the qc manifold $(M,\eta,g,\mathbb{Q})$ is qc homothetic to the unit 3-Sasakian sphere.
\end{thrm}
It should be noted that in dimension seven the problem is still open. The above theorem suffices to characterize the cases of equality in Theorem \ref{mainpan} for $n>1$.
\begin{thrm}[\cite{IPV3}]
Let $(M,\eta)$ be a compact QC manifold of dimension $4n+3$ which satisfies the Lichnerowicz type bound $\mathcal{L}(X,X)\geq 4g(X,X)$. Then, there is a function $f$ with $\triangle f=4n f$ if and only if
\begin{itemize}
\item when $n>1$, $M$ is qc-homothetic to the 3-Sasakian sphere;
\item when $n=1$ and, in addition, $M$ is qc-Einstein, i.e., $T^0=0$, $M$ is qc-homothetic to the 3-Sasakian sphere.
\end{itemize}
\end{thrm}
Next, we give an outline of the key steps in the proof of Theorem \ref{main2}.

\textbf{Part 1}. The first step is to show that $T^0=0$ and $U=0$, i.e., $M$ is qc-Einstein. This is achieved by the following argument. First, we determine the remaining parts of the Hessian of $f$ with respect to the Biquard connection in terms of the torsion tensors.
A simple argument shows that $T^0(I_s\nabla f,\nabla f)=U(I_s\nabla f,\nabla f)=0$. Using the $[-1]$-component of the curvature tensor, it follows that $T^0(I_s\nabla f,I_t\nabla f)=0$ for $s,\, t\in \{1,2,3\}$, $s\not=t$. Then we determine the torsion tensors $T^0$ and $U$ in terms of $\nabla f$ and the scalar $U(\nabla f,\nabla f)$. For example,
$$ |\nabla f|^{4}T^{0}(X,Y)=-\frac{2n}{n-1}U(\nabla f,\nabla f)\Big[3df(X)df(Y)-\sum_{s=1}^{3}df(I_{s}X)df(I_{s}Y)\Big].$$
Next, we prove formulas of the same type for $\nabla T^0$ and $\nabla U$. In particular, we have
$$ (\nabla _{\nabla f}U)(X,Y)=\frac{2(n-1)}{n+2}fU(X,Y).$$
\begin{rmrk}\label{r:qc obata cpct using riem}
We pause for a moment to remark that the last equation shows in particular $L_{\nabla f}|U|^2=\frac{4(n-1)}{n+2}f|U|^2$, \emph{as in the Riemannian case for $Ric_0$}. Hence, in the compact case we can use an integration as in Proposition \ref{p:Obata Einstein} to see the vanishing of $U$, hence also of $T^0$ by what we have proved.
\end{rmrk}
By what we already proved, the crux of the matter is the proof that $U(\nabla f,\nabla f)=0$ (or $T^0(\nabla f,\nabla f)=0$). This fact is achieved with the help of the Ricci identities, the contracted second Bianchi identity and many properties of the torsion of a qc manifold: $0=\nabla ^{3}f(\xi_i,I_i\nabla f,\nabla f)-\nabla ^{3}f(I_{i}\nabla f,\nabla f,\xi _{i})=\frac{2}{n+2}fU(\nabla f,\nabla f)$. We finish, taking into account that $|\nabla f|\not=0$ a.e., with a unique continuation argument, by showing that on a qc manifold with $n>1$ the ``horizontal Hessian equation'' implies that $f$ satisfies an elliptic partial differential equation,
\begin{equation} \label{llex}
\triangle^h f=(4n+3)f+\frac{n+1}{n(2n+1)}\sum_{a=1}^{4n}(\nabla_{e_a}T^0)(e_a,\nabla f)+\frac{3}{(2n+1)(n-1)}\sum_{a=1}^{4n}(\nabla_{e_a}U)(e_a,\nabla f).
\end{equation}

\textbf{Part 2}: By Part 1 it suffices to consider the case of a qc-Einstein structure, in which case we proceed as follows.
First, we show that $(M,h)$ ($h$ being the Riemannian metric!) is isometric to the unit round sphere by showing that $(\nabla^h)^2f(X,Y)=-fh(X,Y)$ and using Obata's result, see Remark \ref{r:sphere charcat} and the paragraph preceding it. Next, we show qc-conformal flatness. For this we use the form of the curvature of the round sphere, $R^h(A,B,C,D)=h(B,C)h(A,D)-h(B,D)h(A,C)$, the relation between the Riemannian and Biquard curvatures, and then the formula for $W^{qc}(X,Y,Z,V)$, which simplifies considerably in the qc-Einstein case. Finally, we employ a standard monodromy argument showing that $(M,g,\eta,\mathbb Q)$ is qc-conformal to ${S^{4n+3}}$, i.e., we have $\eta=\kappa \Psi F^*\tilde\eta$ for some diffeomorphism $F:M\rightarrow S^{4n+3}$, $0<\kappa\in \mathcal{C}^\infty(M)$, and $\Psi\in \mathcal{C}^\infty(M;SO(3))$. We conclude the proof of the qc-conformality with the 3-Sasakian sphere by invoking the qc-Liouville theorem, Theorem \ref{t:qcLiouville}. Lastly, a comparison of the metrics on $H$ shows the desired homothety.

We should mention that an alternative to the use of the qc-conformal curvature tensor in Part 2 was found in \cite{BauKim14}, where, once the isometry with the round sphere is established, the authors invoke the classification of Riemannian submersions with totally geodesic fibers of the sphere due to Escobales \cite{Esc76}. Since the above proof used the qc-Liouville theorem, and because of its independent interest, we devote a short section to it.
\subsection{The QC Liouville theorem}\label{ss:qc liuouville}
\begin{thrm}\label{t:qcLiouville}
Every qc-conformal transformation between open subsets of the 3-Sasakian unit sphere is the restriction of a global qc-conformal transformation.
\end{thrm}
This result is proved in the more general setting of parabolic geometries in \cite{CS09}.
Here we give a relatively self-contained proof of a version of Liouville's theorem in the case of the quaternionic Heisenberg group and the 3-Sasakian sphere equipped with their standard qc structures. The proof is related to the QC Yamabe problem on the 3-Sasakian sphere, since a key step is provided by the proof of \cite[Theorem 1.1]{IMV}, in which all qc-Einstein structures qc-conformal to the standard qc-structure on the quaternionic Heisenberg group (or sphere) were determined. Thus, our proof of Theorem \ref{t:qcLiouville} establishes the local Liouville type property in the setting of sufficiently smooth qc-conformal maps, relying only on the qc geometry. A very general version of the Liouville theorem was also proved by M. Cowling and A. Ottazzi, see \cite{CO13}.

In the Euclidean case, Liouville \cite{Liu1850}, \cite{Liu1850b} showed that every sufficiently smooth conformal map ($\mathcal{C}^4$, in fact) between two connected open sets of the Euclidean space $\mathbb{R}^3$ is necessarily a restriction of a M\"obius transformation. The M\"obius group is generated by translations, dilations and inversions of the extended Euclidean space obtained by adding an ideal point at infinity. Liouville's result generalizes easily to any dimension $n>3$. Subsequently, Hartman \cite{Har58} gave a proof requiring only $\mathcal{C}^1$ smoothness of the conformal map, see also \cite{Ne60}, \cite{BoIw82}, \cite{Ja91}, \cite{IwMa98} and \cite{Fr03} for other proofs. A CR version of Liouville's result can be found in \cite{Ta62} and \cite{Al74}: a smooth CR diffeomorphism between two connected open subsets of the $2n+1$ dimensional sphere is the restriction of an element of the isometry group $SU(n+1,1)$ of the unit ball equipped with the complex hyperbolic metric. The proof of Alexander \cite{Al74} relies on the extension property of a smooth CR map to a biholomorphism.
In his study of pseudo-conformal equivalence between analytic real hypersurfaces of complex space, Tanaka, see also \cite{Poi07}, \cite{Car} and \cite{ChM}, proved a more general result \cite[Theorem 6]{Ta62}: any pseudo-conformal homeomorphism between connected open sets of the quadric \[ -\sum_{i=1}^r |z_i|^2+\sum_{i=r+1}^n |z_i|^2 =1, \quad (z_1,\dots,z_n)\in \mathbb{C}^n, \] is the restriction of a projective transformation of $P^{n}(\mathbb{C})$. Another line of development began with the introduction of quasiconformal maps \cite{Ge62} and \cite{Re67}, which imposed metric conditions on the maps, and with the works of Mostow \cite{M} and Pansu \cite{P}. In particular, in \cite{P} it was shown that every global 1-quasiconformal map on the sphere at infinity of each of the hyperbolic metrics is an isometry of the corresponding hyperbolic space. The local version of Liouville's property for 1-quasiconformal maps of class $\mathcal{C}^4$ on the Heisenberg group was settled in \cite{KoRe85} by a reduction to the CR result. The optimal regularity question for quasiconformal maps was settled later by Capogna \cite{Cap97} in much greater generality, including the cases of all Iwasawa type groups, see also \cite{Ta96} and \cite{CC06}. A closely related property is the so-called rigidity property of quasiconformal or multicontact maps, also referred to as Liouville's property, where the question is the finite dimensionality of the group of (locally defined) quasiconformal or multicontact maps, see \cite{Ya93}, \cite{Reimann01}, \cite{CMKR05}, \cite{Ot05}, \cite{Ot08}, \cite{Mor09}, \cite{dMO10}, \cite{lDOt11}. Besides the Cayley transform, we shall need the generalization of the Euclidean inversion to the qc setting. We recall that in \cite{Ko1} Kor\'anyi introduced such an inversion and an analogue of the Kelvin transform on the Heisenberg group, which were later generalized in \cite{CK} and \cite{CDKR} to all groups of Heisenberg type.
The inversion and Kelvin transform enjoy useful properties in the case of the four groups of Iwasawa type, of which $\boldsymbol {G\,(\mathbb{H})}$ is a particular case. For our goals it is necessary to show that the inversion on $\boldsymbol {G\,(\mathbb{H})}$ is a qc-conformal map. In order to prove this fact we shall represent the inversion as the composition of two Cayley transforms, see \cite{IMV1,IV2} where the seven dimensional case was used. Let $P_1=(-1,0)$ and $P_2=(1,0)$ be, respectively, the 'south' and 'north' poles of the unit sphere ${S^{4n+3}}\ =\ \{\abs{q}^2+\abs{p}^2=1 \}\subset \mathbb{H}^n\times\mathbb{H}$. Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be the corresponding Cayley transforms defined, respectively, on $S^{4n+3}\setminus\{P_1\}$ and $S^{4n+3}\setminus\{P_2\}$. Note that $\mathcal{C}_1$ was defined in \eqref{e:Cayley transf ctct form}, while $\mathcal{C}_2$ is given by $(q', p')\ =\ \mathcal{C}_2\ \big((q, p)\big)$, \[ q'\ =\ -(1-p)^{-1} \ q,\quad p'\ =\ (1-p)^{-1} \ (1+p), \qquad (q,p)\in S^{4n+3}\setminus\{P_2\}. \] The inversion on the quaternionic Heisenberg group (with center the origin) with respect to the unit gauge sphere is the map \begin{equation}\label{d:inversion} \sigma=\mathcal{C}_2\circ\mathcal{C}_1^{-1}:\boldsymbol {G\,(\mathbb{H})}\setminus \{ (0,0)\} \rightarrow\boldsymbol {G\,(\mathbb{H})}\setminus \{ (0,0)\}. \end{equation} In particular, $\sigma\ =\ \mathcal{C}_2\circ\mathcal{C}_1^{-1} $ is an involution on the group. A small calculation shows that $\sigma$ is given by the formula (in the Siegel model) \[ q^*\ =\ -p'^{-1}\, q',\qquad p^*\ =\ p'^{-1}, \] or, equivalently, in the direct product model of $\boldsymbol{G\,(\mathbb{H})}$ \begin{equation*} q^*\ =\ -(|q'|^2-\omega')^{-1}\, q', \qquad \omega^*\ =\ -\frac {\omega'}{|q'|^4+|\omega'|^2}.
\end{equation*} It follows \begin{equation} \begin{aligned} \sigma ^*\ \Theta\ =\ \frac {1}{|p'|^2} \,\bar\mu\, \Theta\, \mu, \qquad \mu\ =\ \frac {p'}{|p'|},\qquad \text{ (in the Siegel model)}\\ \sigma ^*\ \Theta\ =\ \frac {1}{|q'|^4+|\omega'|^2} \,\bar\mu\, \Theta\, \mu, \qquad \mu\ =\ \frac {|q'|^2+\omega'}{\left ( |q'|^4+|\omega'|^2\right)^{1/2}},\qquad \text{ (in the product model)}, \end{aligned} \end{equation} which shows the following fundamental fact. \begin{lemma}\label{l:inversion qc} The inversion transformation \eqref{d:inversion} is a qc-conformal transformation on the quaternionic Heisenberg group. \end{lemma} As usual, using the dilations and translations on the group, it is a simple matter to define an inversion with respect to any gauge ball. Turning to the proof of Theorem \ref{t:qcLiouville}, let $\Sigma\not={S^{4n+3}}$, noting that in the case $\Sigma={S^{4n+3}}$ there is nothing to prove. We transfer the analysis to the quaternionic Heisenberg group using the Cayley transform, thereby reducing to the case of a qc-conformal transformation $\tilde F:\tilde \Sigma \rightarrow \boldsymbol {G\,(\mathbb{H})}$ between two domains of the quaternionic Heisenberg group such that $\Theta={\tilde F}^*\tilde\Theta= \frac{1}{2\phi} \tilde\Theta$ for some positive smooth function $\phi$ defined on the open set $\tilde\Sigma$. By its definition $\Theta$ is a qc-Einstein structure of vanishing qc-scalar curvature, hence Theorem \ref{t:einstein preserving} shows $\sigma=0$ and $\tilde F$ is a composition of a translation, cf. \eqref{e:H-type Iwasawa groups}, followed by an inversion and a homothety, cf. Lemma \ref{l:inversion qc}. The above analysis implies that $F$ is the restriction of an element of $PSp(n+1,1)$. This completes the proof of Theorem \ref{t:qcLiouville}.
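The involutive character of $\sigma$ can also be confirmed numerically from the Siegel-model formula $q^*=-p'^{-1}\,q'$, $p^*=p'^{-1}$: applying the map twice returns the starting point, since quaternion inversion undoes itself and the left factors cancel. The following sketch is illustrative only; the quaternion helpers are ours, not part of the paper.

```python
import random

# Quaternions as 4-tuples (a, b, c, d) = a + b i + c j + d k.
def qmul(x, y):
    a1, b1, c1, d1 = x; a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(x):
    # Inverse = conjugate / squared norm.
    n2 = sum(t*t for t in x)
    return (x[0]/n2, -x[1]/n2, -x[2]/n2, -x[3]/n2)

def sigma_map(q, p):
    """Inversion in the Siegel model: q* = -p^{-1} q, p* = p^{-1}."""
    pinv = qinv(p)
    qstar = tuple(-t for t in qmul(pinv, q))
    return qstar, pinv

random.seed(0)
q = tuple(random.uniform(-1, 1) for _ in range(4))
p = tuple(random.uniform(-1, 1) for _ in range(4))
q2, p2 = sigma_map(*sigma_map(q, p))
assert all(abs(x - y) < 1e-12 for x, y in zip(q, q2))
assert all(abs(x - y) < 1e-12 for x, y in zip(p, p2))
```

The check succeeds for any nonzero $p'$, reflecting that $q^{**}=-p'\,(-p'^{-1}q')=q'$ and $p^{**}=(p'^{-1})^{-1}=p'$ regardless of the noncommutativity of $\mathbb{H}$.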
Similarly to the Riemannian and CR cases, see \cite{Ku49}, \cite[Theorem VI.1.6]{SchYa} and \cite{BS76}, Theorem \ref{t:qcLiouville} and a standard monodromy type argument show the validity of the following. \begin{thrm}\label{t:conf sphere} If $(M,\eta)$ is a simply connected qc-conformally flat manifold of dimension $4n+3$, then there is a qc-conformal immersion $\Phi:M\rightarrow {S^{4n+3}}$, where ${S^{4n+3}}$ is the 3-Sasakian unit sphere in the $(n+1)$-dimensional quaternion space. \end{thrm} \section{Heterotic string theory relations}\label{s:strings} The seven dimensional quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$ has applications in the construction of non-trivial solutions to the so-called \emph{Strominger system} in supersymmetric heterotic string theory. The bosonic fields of the ten-dimensional supergravity which arises as the low energy effective theory of the heterotic string are the spacetime metric $g$, the NS three-form field strength (flux) $H$, the dilaton $\phi$ and the gauge connection $A$ with curvature 2-form $F^A$. The bosonic geometry is of the form $\mathbb{R}^{1,9-d}\times M^d$, where the bosonic fields are non-trivial only on $M^d$, $d\leq 8$. One considers the two connections $ \nabla^{\pm}=\nabla^g \pm \frac12 H, $ where $\nabla^g$ is the Levi-Civita connection of the Riemannian metric~$g$. Both connections preserve the metric, $\nabla^{\pm}g=0$, and have totally skew-symmetric torsion $\pm H$, respectively. We denote by $R^g,R^{\pm}$ the corresponding curvatures. The Green-Schwarz anomaly cancellation condition, up to first order in the string constant $\alpha^{\prime }$, reads \begin{equation} \label{acgen} dH=\frac{\alpha^{\prime }}{4}8\pi^2\big(p_1(\nabla^-)-p_1(E)\big)=\frac{\alpha^{\prime }}{4}\Big(Tr(R^-\wedge R^-)-Tr(F^A\wedge F^A)\Big), \end{equation} where $p_1(\nabla^-)$ and $p_1(E)$ are the first Pontrjagin forms of the connection $\nabla^-$ with curvature $R^-$ and of a vector bundle $E$ with connection $A$.
A heterotic geometry preserves supersymmetry iff in ten dimensions there exists at least one Majorana-Weyl spinor $\epsilon$ such that the following Killing-spinor equations hold \cite{Str,Berg} \begin{equation} \label{sup1} \nabla^+\epsilon=0, \quad (d\phi-\frac12H)\cdot\epsilon=0, \quad F^A\cdot\epsilon=0, \end{equation} where $\cdot$ denotes the Clifford action of forms on spinors. The system of Killing spinor equations \eqref{sup1} together with the anomaly cancellation condition \eqref{acgen} is known as the \emph{Strominger system} \cite{Str}. The last equation in \eqref{sup1} is the instanton condition, which means that the curvature $F^A$ is contained in the Lie algebra of a Lie group which is the stabilizer of a non-trivial spinor. In dimension 7 the largest such group is the exceptional group $G_2$, the automorphism group of the unit imaginary octonions. Denoting by $\Theta$ the non-degenerate three-form defining the $G_2$ structure, the $G_2$-instanton condition has the form \begin{equation} \label{in2} \sum_{k,l=1}^7(F^A)^i_j(e_k,e_l)\Theta(e_k,e_l,e_m)=0. \end{equation} Geometrically, the existence of a non-trivial real spinor parallel with respect to the metric connection $\nabla^+$ with totally skew-symmetric torsion $T=H$ leads to a restriction of the holonomy group $Hol(\nabla^+)$ of the torsion connection $\nabla^+$. In dimension seven $Hol(\nabla^+)$ has to be contained in the exceptional group $G_2$ \cite{FI1,GKMW,GMW,FI2}. The general existence result \cite{GKMW,FI1,FI2} states that there exists a non-trivial solution to both dilatino and gravitino Killing spinor equations (the first two equations in \eqref{sup1}) in dimension $d=7$ if and only if there exists a globally conformal co-calibrated $G_2$-structure $(\Theta,g)$ of pure type and the Lee form $\theta^7=-\frac{1}{3}*(* d\Theta\wedge\Theta) = \frac{1}{3}*(* d*\Theta\wedge*\Theta)$ has to be exact, i.e.
a $G_2$-structure $(\Theta,g)$ satisfying the equations \begin{equation} \label{sol7} d*\Theta=\theta^7\wedge *\Theta, \quad d\Theta\wedge\Theta=0, \quad \theta^7=-2d\phi. \end{equation} Therefore, the torsion 3-form (the flux $H$) is given by \begin{equation}\label{torstr} H=T= -* d\Theta - 2*(d\phi\wedge\Theta). \end{equation} A geometric model which fits the above structures was proposed in \cite{FIUVdim7-8} as a certain ${\mathbb{T}}^3$-bundle over a Calabi-Yau surface. For this, let $\Gamma_i$, $1\leq i \leq 3$, be three closed anti-self-dual $2$-forms on a Calabi-Yau surface $M^4$ which represent integral cohomology classes. Denote by $\omega_1$ and by $\omega_2+\sqrt{-1}\omega_3$ the (closed) K\"ahler form and the holomorphic volume form on $M^4$, respectively. Then there is a compact 7-dimensional manifold $M^{1,1,1}$ which is the total space of a ${\mathbb{T}}^3$-bundle over $M^4$ and has a $G_2$-structure \[ \Theta=\omega_1\wedge\eta_1+\omega_2\wedge\eta_2-\omega_3\wedge\eta_3+\eta_1\wedge \eta_2\wedge\eta_3, \] solving the first two Killing spinor equations in \eqref{sup1} with constant dilaton in dimension $7$, where $\eta_i$, $1\leq i \leq 3$, is a $1$-form on $M^{1,1,1}$ such that $d\eta_i=\Gamma_i$, $1\leq i \leq 3$. For any smooth function $f$ on $M^4$, the $G_2$-structure on $M^{1,1,1}$ given by \begin{equation*} \Theta_f=e^{2f}\Big[\omega_1\wedge\eta_1+\omega_2\wedge\eta_2- \omega_3\wedge\eta_3\Big]+\eta_1\wedge\eta_2\wedge\eta_3 \end{equation*} solves the first two Killing spinor equations in \eqref{sup1} with non-constant dilaton $\phi=-2f$. To achieve a smooth solution to the Strominger system \emph{we still have to determine} an auxiliary vector bundle with a $G_2$-instanton in order to satisfy the anomaly cancellation condition \eqref{acgen}.
\subsection{The quaternionic Heisenberg group} The Lie algebra $\mathfrak{g(\mathbb{H})}$ of the seven dimensional group $\boldsymbol {G\,(\mathbb{H})}$ has structure equations \begin{equation} \label{ecus-qHg} \begin{aligned} & d\gamma^1=d\gamma^2=d\gamma^3=d\gamma^4=0, \quad d\gamma^5=\gamma^{12}-\gamma^{34},\quad d\gamma^6=\gamma^{13}+\gamma^{24},\quad d\gamma^7=\gamma^{14}-\gamma^{23}, \end{aligned} \end{equation} where $\gamma^1,\dots,\gamma^7$ is a basis of left invariant 1-forms on $\boldsymbol {G\,(\mathbb{H})}$. In particular, the quaternionic Heisenberg group $\boldsymbol {G\,(\mathbb{H})}$ in dimension seven is an $\mathbb{R}^3$-bundle over the flat Calabi-Yau space $\mathbb{R}^4$ and therefore fits the geometric model described above. In order to obtain results in dimensions less than seven through contractions of $\boldsymbol {G\,(\mathbb{H})}$ it will be convenient to consider the orbit of $G(\mathbb{H})$ under the natural action of $GL(3,\mathbb{R})$ on $\mathrm{span}\, \{\gamma^5, \gamma^6, \gamma^7\}$. Accordingly, let $K_A$ be a seven-dimensional real Lie group with Lie bracket $[x,x^{\prime }]_A=A[A^{-1}x,A^{-1}x^{\prime }]$ for $A\in GL(3,\mathbb{R})$, defined by a basis of left-invariant 1-forms $\{e^1,\ldots,e^7\}$ such that $e^i=\gamma^i$ for $1 \leq i \leq 4$ and $(e^5\ e^6\ e^7)^T=A\, (\gamma^5\ \gamma^6\ \gamma^7)^T$. Hence, the structure equations of the Lie algebra $\mathfrak{K}_A$ of the group $K_A$ are \begin{equation} \label{ecus-general} d e^1=d e^2=d e^3=d e^4=0, \qquad d e^{4+i}= \sum_{j=1}^3a_{ij}\,\sigma_j, \quad i=1,2,3, \end{equation} where $\sigma_1=e^{12}-e^{34}$, $\sigma_2=e^{13}+e^{24}$, $\sigma_3=e^{14}-e^{23}$ are the three anti-self-dual 2-forms on $\mathbb{R}^4$ and $A=\{a_{ij}\}$ is a $3\times 3$ matrix. We denote the norm of $A$ by $|A|$, $|A|^2=\sum_{i,j=1}^3 a_{ij}^2$. Since $\mathfrak{K}_A$ is isomorphic to $\mathfrak{g(\mathbb{H})}$, if $K_A$ is connected and simply connected it is isomorphic to $G(\mathbb{H})$.
Furthermore, any lattice $\Gamma_A$ gives rise to a (compact) nilmanifold $M_A=K_A/\Gamma_A$, which is a $\mathbb{T}^3$-bundle over a $\mathbb{T}^4$ with connection 1-forms of anti-self-dual curvature on the four torus. The three closed hyperK\"ahler 2-forms on $\mathbb R^4$ are given by $\omega_1=e^{12}+e^{34},\quad \omega_2=e^{13}-e^{24},\quad \omega_3=e^{14}+e^{23}.$ Following \cite{FIUVdim7-8}, for a smooth function $f$ on $\mathbb R^4$, we consider the $G_2$ structure on $K_A$ defined by the 3-form \begin{equation} \label{g2-general} \bar{\Theta}=e^{2f}\Big[\omega_1\wedge e^7+\omega_2\wedge e^5-\omega_3\wedge e^6\Big] + e^{567}. \end{equation} The corresponding metric $\bar{g}$ on $K_{A}$ has an orthonormal basis of 1-forms given by \begin{equation} \label{conf-general} \bar{e}^{1}=e^{f}\,e^{1},\quad \bar{e}^{2}=e^{f}\,e^{2},\quad \bar{e}^{3}=e^{f}\,e^{3},\quad \bar{e}^{4}=e^{f}\,e^{4},\quad \bar{e}^{5}=e^{5},\quad \bar{e}^{6}=e^{6},\quad \bar{e}^{7}=e^{7} \end{equation} with self-dual 2-forms $\bar\omega_i=e^{2f}\omega_i$ and anti-self-dual 2-forms $\bar\sigma_i=e^{2f}\sigma_i$, $i=1,2,3$. It is easy to check, using \eqref{ecus-general} and the property $\sigma_i\wedge\omega_j=0$ for $1\leq i,j \leq 3$, that \eqref{sol7} is satisfied, i.e., the $G_2$ structure $\bar\Theta$ solves the gravitino and dilatino equations with non-constant dilaton $\phi=-2f$ \cite{FIUVas2}. Furthermore, with $f_{ij}=\frac{\partial ^{2}f}{\partial x_{j}\partial x_{i}}$, $1\leq i,j\leq 4$, we obtain the following formula for the torsion $\bar T$ of $\bar\Theta$, see \cite{FIUVas2} for details, \begin{equation} \label{torsion-general} d\bar{T} =-e^{-4f}\left[ \triangle e^{2f}+2|A|^2\right] \,\bar{e}^{1234} =-\left[ \triangle e^{2f}+2|A|^2 \right] e^{1234}, \end{equation} where $\triangle e^{2f}=(e^{2f})_{11}+(e^{2f})_{22}+(e^{2f})_{33}+(e^{2f})_{44}$ is the Laplacian on $\mathbb{R}^4$.
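The algebraic identities used here, $*\omega_i=\omega_i$, $*\sigma_i=-\sigma_i$ and $\sigma_i\wedge\omega_j=0$, can be checked mechanically by representing 2-forms on flat $\mathbb{R}^4$ as antisymmetric $4\times 4$ coefficient matrices. The Python sketch below is ours and only illustrates the underlying linear algebra:

```python
from itertools import permutations

# Levi-Civita symbol on 4 indices, keyed by permutation tuples.
EPS = {}
for perm in permutations(range(4)):
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                sign = -sign
    EPS[perm] = sign

def two_form(**coeffs):
    """Antisymmetric 4x4 matrix from coefficients such as e12=1, e34=-1."""
    F = [[0.0] * 4 for _ in range(4)]
    for key, val in coeffs.items():
        i, j = int(key[1]) - 1, int(key[2]) - 1
        F[i][j], F[j][i] = val, -val
    return F

def star(F):
    """Hodge star on 2-forms of flat R^4: (*F)_{ij} = 1/2 eps_{ijkl} F_{kl}."""
    return [[0.5 * sum(EPS.get((i, j, k, l), 0) * F[k][l]
                       for k in range(4) for l in range(4))
             for j in range(4)] for i in range(4)]

def wedge_top(F, G):
    """Coefficient of e^{1234} in F wedge G (F = 1/2 F_{ij} e^i ^ e^j)."""
    return 0.25 * sum(EPS[(i, j, k, l)] * F[i][j] * G[k][l]
                      for (i, j, k, l) in EPS)

omega = [two_form(e12=1, e34=1), two_form(e13=1, e24=-1), two_form(e14=1, e23=1)]
sigma_ = [two_form(e12=1, e34=-1), two_form(e13=1, e24=1), two_form(e14=1, e23=-1)]

for w in omega:                       # omega_i are self-dual
    assert star(w) == w
for s in sigma_:                      # sigma_i are anti-self-dual
    assert star(s) == [[-x for x in row] for row in s]
for s in sigma_:                      # ASD wedge SD pairings vanish
    for w in omega:
        assert wedge_top(s, w) == 0.0
```

The last loop reflects the general fact that the wedge of an anti-self-dual and a self-dual 2-form on $\mathbb{R}^4$ vanishes, which is exactly the property invoked in the check of \eqref{sol7}.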
\subsection{The first Pontrjagin form of the $(-)$-connection} \label{pon7-general} The connection 1-forms of a connection $\nabla$ are determined by $\nabla_Xe_j=\sum_{s=1}^7\omega^s_j(X)e_s$. From Koszul's formula, the Levi-Civita connection 1-forms $(\omega ^{\bar{g}})_{\bar{j}}^{\bar{\imath}}$ of the metric $\bar{g}$ are given by \begin{equation}\label{lc-general} \begin{array}{ll} (\omega ^{\bar{g}})_{\bar{j}}^{\bar{\imath}}(\bar{e}_{k}) \!&\! =-\frac{1}{2}\Big(\bar{g}(\bar{e}_{i},[\bar{e}_{j},\bar{e}_{k}])-\bar{g}(\bar{e}_{k},[\bar{e}_{i},\bar{e}_{j}])+\bar{g}(\bar{e}_{j},[\bar{e}_{k},\bar{e}_{i}])\Big) \\[8pt] \!&\! =\frac{1}{2}\Big(d\bar{e}^{i}(\bar{e}_{j},\bar{e}_{k})-d\bar{e}^{k}(\bar{e}_{i},\bar{e}_{j})+d\bar{e}^{j}(\bar{e}_{k},\bar{e}_{i})\Big) \end{array} \end{equation} taking into account $\bar{g}(\bar{e}_{i},[\bar{e}_{j},\bar{e}_{k}])=-d\bar{e}^{i}(\bar{e}_{j},\bar{e}_{k})$. With the help of \eqref{lc-general} we compute the expressions for the connection 1-forms $(\omega ^{-})_{\bar{j}}^{\bar{\imath}}$ of the connection $\nabla ^{-}$, \begin{equation} (\omega ^{-})_{\bar{j}}^{\bar{\imath}}=(\omega ^{\bar{g}})_{\bar{j}}^{\bar{\imath}}-\frac{1}{2}(\bar{T})_{\bar{j}}^{\bar{\imath}},\qquad \text{ where }\qquad (\bar{T})_{\bar{j}}^{\bar{\imath}}(\bar{e}_{k})=\bar{T}(\bar{e}_{i},\bar{e}_{j},\bar{e}_{k}). \label{minus-general} \end{equation} A long but straightforward calculation based on \eqref{lc-general} and \eqref{minus-general} yields that the first Pontrjagin form of $\nabla^{-}$ is a scalar multiple of $e^{1234}$ given by \cite{FIUVas2} \begin{equation} \label{p1-general} \pi ^{2}p_{1}(\nabla ^{-}) =\left[ \mathcal{F}_2[f]+\triangle_4 f -\frac{3}{8} |A|^{2} \triangle e^{-2f} \right] {e}^{1234}, \end{equation} where $\mathcal{F}_2[f]$ is the 2-Hessian of $f$, i.e., the sum of all principal $2\times 2$-minors of the Hessian, and $\triangle_4 f=div(|\nabla f|^2\nabla f)$ is the 4-Laplacian of $f$.
This formula shows, in particular, that even though the curvature 2-forms of $\nabla^-$ are quadratic in the gradient of the dilaton, the Pontrjagin form of $\nabla^-$ is also quadratic in these terms. Furthermore, if $f$ depends on two of the variables then $\mathcal{F}_2[f]=\det (\mathrm{Hess}\, f)$, while if $f$ is a function of one variable then $\mathcal{F}_2[f]$ vanishes. What remains is to solve the anomaly cancellation condition. We use the $G_2$-instanton $\mathrm{D}_{\Lambda }$ defined in \cite{FIUVas2}, which depends on a $3\times 3$ matrix $\Lambda=(\lambda_{ij}) \in {\mathfrak{g}\mathfrak{l}}_3(\mathbb{R})$. It is shown in \cite{FIUVas2} that the connection $\mathrm{D}_{\Lambda}$ is a $G_2$-instanton with respect to the $G_2$ structure defined by \eqref{g2-general} which preserves the metric if and only if $\mathrm{rank}(\Lambda) \leq 1$. In this case, the first Pontrjagin form $p_{1}(\mathrm{D}_{\Lambda})$ of the $G_2$-instanton $\mathrm{D}_{\Lambda}$ is given by \begin{equation} \label{abinst-general} 8\pi ^{2}p_{1}(\mathrm{D}_{\Lambda})= -4\lambda^2\,e^{1234}, \end{equation} where $\lambda=|\Lambda\, A|$ is the norm of the product matrix $\Lambda\, A$. After this preparation, we are left with solving the anomaly cancellation condition $d\bar{T}=\frac{\alpha ^{\prime }}{4}8\pi ^{2}\Big(p_{1}(\nabla ^{-})-p_{1}(D_\Lambda)\Big)$, which in general is a highly overdetermined system for the dilaton function $f$. Remarkably, in our case, taking into account \eqref{torsion-general}, \eqref{p1-general} and \eqref{abinst-general}, the anomaly becomes \emph{the single} non-linear equation \begin{equation} \label{e:anomaly negative alpha} \triangle e^{2f}+2|A|^2 +\frac{\alpha ^{\prime }}{4}\left[ 8\mathcal{F}_2[f]+8\triangle_4 f -3 |A|^{2} \triangle e^{-2f} +4\lambda^2\right]=0. \end{equation} Note that this is an equation on $\mathbb{R}^4$ for the dilaton function $f$.
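The quoted properties of the 2-Hessian, vanishing for functions of one variable and reduction to a $2\times 2$ Hessian determinant for functions of two variables, are easy to confirm symbolically. A minimal sympy check, with arbitrarily chosen test functions:

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x1:5')  # coordinates x1, x2, x3, x4 on R^4

def hessian2(f):
    """2-Hessian F_2[f]: sum of principal 2x2 minors of the Hessian on R^4."""
    H = sp.hessian(f, x)
    return sp.expand(sum(sp.det(H[[i, j], [i, j]])
                         for i, j in combinations(range(4), 2)))

# f of one variable: every principal 2x2 minor contains a zero row.
f1 = sp.exp(x[0]) + sp.sin(x[0])
assert hessian2(f1) == 0

# f of two variables: F_2[f] reduces to det of the 2x2 Hessian in them.
f2 = x[0]**2 * sp.cos(x[1])
H2 = sp.hessian(f2, x[:2])
assert sp.simplify(hessian2(f2) - sp.det(H2)) == 0
```

These are precisely the reductions that make the one-variable ansatz for the dilaton below tractable, since then the $\mathcal{F}_2[f]$ term drops out of \eqref{e:anomaly negative alpha}.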
\begin{rmrk} An important question, interesting for both string theory and nonlinear analysis, is whether the non-linear PDE \eqref{e:anomaly negative alpha} admits a periodic solution. \end{rmrk} A one dimensional (non-smooth) solution was found in \cite{FIUVas2}, which we describe briefly. Assume that the function $f$ depends on one variable, $f=f(x^{1})$, and for a \emph{negative} $\alpha ^{\prime }$ choose $2|A|^2+\alpha ^{\prime }\lambda ^{2}=0$, i.e., let $\alpha ^{\prime }=-\alpha^2$ so that $2|A|^2=\alpha ^{2}\lambda ^{2}$. This simplifies \eqref{e:anomaly negative alpha} to the ordinary differential equation \begin{equation} \label{solv4} \left( e^{2f}\right) ^{\prime }+\frac34\alpha ^{2}|A|^2\left( e^{-2f}\right) ^{\prime }-2\alpha ^{2 }f^{\prime 3}=C_0=const. \end{equation} A solution of the last equation for $C_0=0$ was found in \cite[Section 4.2]{FIUVas}. The substitution $u=\alpha^{-2} e^{2f}$ allows us to write \eqref{solv4} in the form \begin{equation*} \left( e^{2f}\right) ^{\prime }+\frac34\alpha ^{2}|A|^2\left( e^{-2f}\right) ^{\prime }-2\alpha ^{2 }f^{\prime 3}=\frac{\alpha^2 u^{\prime }}{4u^{3}}\left( 4u^{3}-3\frac {|A|^2}{\alpha ^{2 }}u-u^{\prime 2}\right) . \end{equation*} For $C_0=0$ we solve the following ordinary differential equation for the function $u=u(x^1)>0$ \begin{equation} \label{solv5} u^{\prime 2}={4}u^{3}-3\frac {|A|^2}{\alpha ^{2 }}u=4u\left( u-d\right) \left( u+d\right) ,\qquad d=\frac{\sqrt{3}\,|A|}{2\alpha}. \end{equation} Replacing the real derivative with the complex derivative leads to the Weierstrass equation \[ \left (\frac {d\, \mathcal{P}}{dz}\right )^2=4\mathcal{P}\left( \mathcal{P}-d\right) \left( \mathcal{P}+d\right) \] for the doubly periodic Weierstrass $\mathcal{P}$ function with a pole at the origin.
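The displayed identity behind the substitution $u=\alpha^{-2}e^{2f}$ can be verified symbolically. The sympy sketch below is ours; it encodes $|A|$ as the symbol $A$:

```python
import sympy as sp

x = sp.symbols('x')
alpha, A = sp.symbols('alpha A', positive=True)  # alpha' = -alpha^2, A = |A|
u = sp.Function('u')(x)

# u = alpha^{-2} e^{2f}  <=>  f = (1/2) log(alpha^2 u)
f = sp.Rational(1, 2) * sp.log(alpha**2 * u)

lhs = (sp.diff(sp.exp(2 * f), x)
       + sp.Rational(3, 4) * alpha**2 * A**2 * sp.diff(sp.exp(-2 * f), x)
       - 2 * alpha**2 * sp.diff(f, x)**3)
rhs = (alpha**2 * sp.diff(u, x) / (4 * u**3)
       * (4 * u**3 - 3 * (A**2 / alpha**2) * u - sp.diff(u, x)**2))

# The two sides agree identically in u and u'.
assert sp.simplify(lhs - rhs) == 0
```

In particular, for $C_0=0$ and $u>0$, $u'\neq 0$, the vanishing of the left-hand side is equivalent to the cubic relation \eqref{solv5}.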
Letting $\tau_{\pm}$ be the basic half-periods such that $\tau _{+}$ is real and $\tau _{-}$ is purely imaginary, we have that $\mathcal{P}$ is real valued on the lines $\mathfrak{R}\mathfrak{e}\,z=m\tau _{+}$ or $\mathfrak{I}\mathfrak{m}\,z=im\tau _{-}$, $m\in \mathbb{Z}$. Thus, $u(x^1)=\mathcal{P}(x^1)$ defines a non-negative $2\tau_{+}$-periodic function with singularities at the points $2n \tau_{+}$, $n\in \mathbb{Z}$, which solves the real equation \eqref{solv5}. By construction, $f= \frac 12 \ln (\alpha^2 u)$ is a periodic function with singularities on the real line which is a solution to equation \eqref{e:anomaly negative alpha}. Therefore the $G_2$ structure defined by $\bar \Theta$ descends to the $7$-dimensional nilmanifold $M^{7}=\Gamma \backslash K_{A}$ with singularity, determined by the singularity of $u$, where $K_{A}$ is the 2-step nilpotent Lie group with Lie algebra $\mathfrak{K}_{A}$ defined by \eqref{ecus-general}, and $\Gamma $ is a lattice with the same period as $f$, i.e., $2 \tau_{+}$ in all variables. In fact, $M^7$ is the total space of a $\mathbb{T}^3$ bundle over the asymptotically hyperbolic manifold $M^4$, which is a conformally compact 4-torus with conformal boundary at infinity a flat 3-torus. Thus, we obtain the complete solution to the Strominger system in dimension seven with non-constant dilaton, non-trivial instanton and flux, and with a negative $\alpha ^{\prime }$ parameter found in \cite{FIUVas2}. \subsection{Solutions through contractions} A contraction of the quaternionic Heisenberg algebra can be obtained by considering the matrix \begin{equation*} A_\varepsilon\overset{def}{=}\left(\!\!\! \begin{array}{ccc} 0 & b & 0 \\ a & 0 & -b \\ 0 & 0 & \varepsilon \end{array} \!\right). \end{equation*} Letting $\varepsilon\rightarrow 0$ in $A_{\varepsilon}$ we obtain in the limit, using \eqref{ecus-general}, the structure equations of a six dimensional two-step nilpotent Lie algebra known as $\mathfrak{h}_5$.
On the corresponding simply connected two-step nilpotent Lie group $H_5$, non-trivial solutions to the Strominger system in dimension 6 were presented in \cite{FIUVas}. It is a remarkable fact \cite{FIUVas2} that the geometric structures, the partial differential equations and their solutions found in dimension seven starting with the quaternionic Heisenberg group as above converge through contraction to the heterotic solutions on the 6-dimensional non-K\"ahler space $H_5$ found in \cite{FIUVas}. Moreover, using suitable contractions it is possible to obtain non-trivial solutions to the Strominger system in dimension 5 as well (see \cite{FIUVas2} for details). \textbf{Acknowledgments}: S.~Ivanov is visiting the University of Pennsylvania, Philadelphia. S.I. thanks UPenn for providing the support and an excellent research environment during the final stage of the paper. S.I. is partially supported by Contract DFNI I02/4/12.12.2014 and Contract 168/2014 with the Sofia University "St.~Kl.~Ohridski". D.V. was partially supported by Simons Foundation grant \#279381. D.V. would like to thank the organizers of the Workshop on Geometric Analysis on sub-Riemannian manifolds at IHP for the stimulating atmosphere and talks during the workshop, which influenced and inspired the writing of this paper.
\section{Introduction} There is growing evidence for the existence of non-ordinary hadrons that do not follow the quark model, {\it i.e.} the quark-antiquark-meson or three-quark-baryon classification. Meson Regge trajectories relate resonance spins $J$ to the squares of their masses, and for ordinary mesons they are approximately linear. The functional form of a Regge trajectory depends on the underlying dynamics and, for example, the linear trajectory for mesons is consistent with the quark model, as it can be explained in terms of a rotating relativistic flux tube that connects the quark with the antiquark. Regge trajectories associated with non-ordinary mesons do not, however, have to be linear. The non-ordinary nature of the lightest scalar meson, the $f_0(500)$, also referred to as the $\sigma$, together with a few other scalars, was postulated long ago \cite{Jaffe:1976ig}. In the context of the Regge classification, a recent study of the meson spectrum \cite{Anisovich:2000kxa} concluded that the $\sigma$ meson does not belong to the same set of trajectories that many ordinary mesons do. In \cite{Masjuan:2012gc}, it was concluded that the $\sigma$ can be omitted from the fits to linear $(J,M^2)$ trajectories because of its large width. The reason is that its width was taken as a measure of the uncertainty on its mass, and it was found that, when fitting trajectory parameters, its contribution to the overall $\chi^2$ was insignificant. In a recent work \cite{Londergan:2013dza} we developed a formalism based on dispersion relations that, instead of fitting a specific, {\it e.g.} linear, form to the spins and masses of various resonances, enables us to calculate the trajectory using as input the position and the residue of a complex resonance pole in a scattering amplitude.
When the method was applied to the $\rho(770)$ resonance, which appears as a pole in the elastic $P$-wave $\pi\pi$ scattering amplitude, the resulting trajectory was found to be, to a good approximation, linear. The resulting slope and intercept are in good agreement with phenomenological Regge fits. The slope, which is slightly less than 1 GeV$^{-2}$, is expected to be universal for all ordinary trajectories. It is worth noting that in this approach the resonance width is, as it should be, related to the imaginary part of the trajectory and is not a source of uncertainty. The $\sigma$ meson also appears as a pole in the $\pi\pi$ $S$-wave. The position and residue of the pole have recently been accurately determined in \cite{Caprini:2005zr} using rigorous dispersive formalisms. When the same method was applied to the $\sigma$ meson, however, we found quite a different trajectory. It has a significantly larger imaginary part, and the slope parameter, computed at the physical mass as the derivative of the spin with respect to the mass squared, is more than one order of magnitude smaller than the universal slope. The trajectory is far from linear; instead it is qualitatively similar to the trajectory of a Yukawa potential. We also note that deviation from linearity is not necessarily implied by the large width of the $\sigma$, since it was also shown in \cite{Londergan:2013dza} that resonances with large widths may belong to linear trajectories. Our findings give further support for the non-ordinary nature of the $\sigma$. Still, one may wonder if the single case of the $\rho$ meson, where the method agrees with Regge phenomenology, gives sufficient evidence that it can distinguish between ordinary and non-ordinary mesons. In this letter, therefore, we show that other ordinary trajectories can be predicted with the same technique, as long as the underlying resonances are almost elastic.
For this purpose, we have concentrated on resonances that decay nearly $100\%$ of the time to two mesons. In addition to the $\rho$ there are two other well-known examples: the $f_2(1270)$, whose branching ratio to $\pi\pi$ is $84.8^{+2.4}_{-1.2}\%$, and the $f_2'(1525)$, with a branching ratio to $K\bar K$ of $(88.7\pm2.2)\%$. These resonances are well established in the quark model and, as we show below, the Regge trajectories predicted by our method come out almost real and linear, with a slope close to the universal one. There is an additional check on the method that we perform here. Since the formalism used in the case of the $\rho$ was based on a twice-subtracted dispersion relation, the trajectory had a linear term plus a dispersive integral over the imaginary part. Since the imaginary part of the trajectory is closely related to the decay width, one might wonder if the $\rho(770)$, $f_2(1270)$ and $f_2'(1525)$ trajectories come out straight just because their widths are small. In other words, for narrow resonances the straight-line behavior might not be predicted but already built in through the subtractions. For this reason, in this work we also consider three subtractions and show that for the ordinary resonances under study the quadratic term is negligible. The paper is organized as follows. In the next section we briefly review the dispersive method and in Sect.~\ref{sec:numerical results} we present the numerical results. In Sect.~\ref{sec:3subs} we discuss the results of the calculation with three subtractions. Summary and outlook are given in Sect.~\ref{conclusions}.
\section{Dispersive determination of a Regge trajectory from a single pole} The partial wave expansion of the elastic scattering amplitude, $T(s,t)$, of two spinless mesons of mass $m$ is given by \begin{equation} T(s,t)=32 K \pi \sum_l (2l+1) t_l(s) P_l(z_s(t)), \label{fullamp} \end{equation} where $z_s(t)$ is the $s$-channel scattering angle and $K=1,2$ depending on whether the two mesons are distinguishable or not. The partial waves $t_l(s)$ are normalized according to \begin{equation} t_l(s) = e^{i\delta_l(s)}\sin{\delta_l(s)}/\rho(s), \quad \rho(s) = \sqrt{1-4m^2/s}, \end{equation} where $\delta_l(s)$ is the phase shift. The unitarity condition on the real axis in the elastic region, \begin{equation} \mbox{Im}\,t_l(s)=\rho(s)|t_l(s)|^2, \label{pwunit} \end{equation} is automatically satisfied. When $t_l(s)$ is continued from the real axis to the entire complex plane, unitarity determines the amplitude discontinuity across the cut on the real axis above $s=4m^2$. It also determines the continuation in $s$, at fixed $l$, onto the second sheet, where resonance poles are located. It follows from Regge theory that the same resonance poles appear when the amplitude is continued into the complex $l$-plane \cite{Reggeintro}, leading to \begin{equation} t_l(s) = \frac{\,\beta(s)}{l-\alpha(s)\,} + f(l,s), \label{Reggeliket} \end{equation} where $f(l,s)$ is analytic near $l=\alpha(s)$. The Regge trajectory $\alpha(s)$ and residue $\beta(s)$ satisfy $\alpha(s^*)=\alpha^*(s)$, $\beta(s^*)=\beta^*(s)$ in the complex-$s$ plane cut along the real axis for $s > 4m^2$. Thus, as long as the pole dominates in Eq.\eqref{Reggeliket}, partial wave unitarity, Eq.\eqref{pwunit}, analytically continued to complex $l$, implies \begin{equation} \mbox{Im}\,\alpha(s) = \rho(s) \beta(s), \label{unit} \end{equation} and determines the analytic continuation of $\alpha(s)$ to the complex plane \cite{Chu:1969ga}.
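With this normalization, elastic unitarity Eq.\eqref{pwunit} holds identically, since $\mbox{Im}\,t_l=\sin^2\delta_l/\rho=\rho\,|t_l|^2$. A quick numeric confirmation (the mass and phase-shift values below are arbitrary test inputs, not a physical parameterization):

```python
import cmath
import math

m = 0.14  # test meson mass in GeV, arbitrary

def rho(s):
    """Two-body phase-space factor above threshold s = 4 m^2."""
    return math.sqrt(1.0 - 4.0 * m**2 / s)

def t_l(s, delta):
    """Elastic partial wave: t = e^{i delta} sin(delta) / rho(s)."""
    return cmath.exp(1j * delta) * math.sin(delta) / rho(s)

# Im t = rho |t|^2 at several arbitrary (s, delta) points above threshold.
for s, delta in [(0.5, 0.3), (1.2, 1.1), (2.0, 2.5)]:
    t = t_l(s, delta)
    assert abs(t.imag - rho(s) * abs(t)**2) < 1e-12
```

The identity holds for any real phase shift, which is exactly why the elastic parameterization builds unitarity in automatically.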
At threshold, partial waves behave as $t_l(s) \propto q^{2l}$, where $q^2=s/4-m^2$, so that if the Regge pole dominates the amplitude, we must have $\beta(s) \propto q^{2\alpha(s)}$. Moreover, following Eq.\eqref{fullamp}, the Regge pole contribution to the full amplitude is proportional to $(2\alpha + 1) P_\alpha(z_s)$, so that in order to cancel the poles of the Legendre function $P_\alpha(z_s)\propto\Gamma(\alpha + 1/2)$ the residue has to vanish when $\alpha + 3/2$ is a negative integer, {\it i.e.}, \begin{equation} \beta(s) = \gamma(s) \hat s^{\alpha(s)} /\Gamma(\alpha(s) + 3/2). \label{reduced} \end{equation} Here we defined $\hat s = (s-4m^2)/s_0$ and introduced a scale $s_0$ to have the right dimensions. The so-called reduced residue, $\gamma(s)$, is a real analytic function. Hence, on the real axis above threshold, since $\beta(s)$ is real, the phase of $\gamma$ is \begin{equation} \mbox{arg}\,\gamma(s) = - \mbox{Im}\,\alpha(s) \log(\hat s) + \arg \Gamma(\alpha(s) + 3/2). \end{equation} Consequently, we can write for $\gamma(s)$ a dispersion relation: \begin{equation} \gamma(s) = P(s) \exp\left(c_0 + c' s + \frac{s}{\pi} \int_{4m^2}^\infty \!\!\!\!ds' \frac{\mbox{arg}\,\gamma(s')}{s' (s' - s)} \right), \label{g} \end{equation} where $P(s)$ is an entire function. Note that the behavior at large $s$ cannot be determined from first principles but, as we expect linear Regge trajectories for ordinary mesons, we should allow $\alpha$ to behave as a first-order polynomial at large $s$. This implies that $\mbox{Im}\,\alpha(s)$ decreases with growing $s$ and thus it obeys the dispersion relation~\cite{Reggeintro,Collins-PLB}: \begin{equation} \alpha(s) = \alpha_0 + \alpha' s + \frac{s}{\pi} \int_{4m^2}^\infty ds' \frac{ \mbox{Im}\,\alpha(s')}{s' (s' -s)}.
\label{alphadisp} \end{equation} Assuming $\alpha' \ne 0$, from unitarity, Eq.\eqref{unit}, in order to match the asymptotic behavior of $\beta(s)$ and $\mbox{Im}\alpha(s)$ it is required that $c' = \alpha' ( \log(\alpha' s_0) - 1)$ and that $P(s)$ can at most be a constant, $P(s) = \mbox{const}$. Therefore, using Eq.\eqref{Reggeliket}, we arrive at the following three equations, which define the ``constrained Regge-pole'' amplitude~\cite{Chu:1969ga}: \begin{align} \mbox{Re} \,\alpha(s) & = \alpha_0 + \alpha' s + \frac{s}{\pi} PV \int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s' (s' -s)}, \label{iteration1}\\ \mbox{Im}\,\alpha(s)&= \frac{ \rho(s) b_0 \hat s^{\alpha_0 + \alpha' s} }{|\Gamma(\alpha(s) + \frac{3}{2})|} \exp\Bigg( - \alpha' s[1-\log(\alpha' s_0)] + \!\frac{s}{\pi} PV\!\!\!\int_{4m^2}^\infty\!\!\!\!\!\!\!ds' \frac{ \mbox{Im}\alpha(s') \log\frac{\hat s}{\hat s'} + \mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s' (s' - s)} \Bigg), \label{iteration2}\\ \beta(s) &= \frac{ b_0\hat s^{\alpha_0 + \alpha' s}}{\Gamma(\alpha(s) + \frac{3}{2})} \exp\Bigg( -\alpha' s[1-\log(\alpha' s_0)] + \frac{s}{\pi} \int_{4m^2}^\infty \!\!\!\!\!\!\!ds' \frac{ \mbox{Im}\alpha(s') \log\frac{\hat s}{\hat s'} + \mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s' (s' - s)} \Bigg), \label{betafromalpha} \end{align} where $PV$ denotes the principal value. For real $s$, the last two equations reduce to Eq.\eqref{unit}. The three equations are solved numerically with the free parameters fixed by demanding that the pole on the second sheet of the amplitude in Eq.~(\ref{Reggeliket}) is at a given location. Thus we will be able to obtain the two independent trajectories corresponding to the $f_2(1270)$ and $f_2'(1525)$ resonances from their respective pole parameters. Note that we are not imposing, but just allowing, linear trajectories. 
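All three equations involve principal-value integrals over the unitarity cut. A standard way to evaluate such integrals numerically is the subtraction trick, sketched below; the test function and integration limits are illustrative, not the actual kernels of the equations above (for those one would take $f(s') = \mbox{Im}\,\alpha(s')/s'$ and a large upper cutoff):

```python
import numpy as np

def pv_integral(f, a, b, s, n=100001):
    """PV int_a^b dx f(x)/(x - s), a < s < b, via the subtraction trick:
    the regular part (f(x) - f(s))/(x - s) is integrated numerically, and the
    singular part integrates analytically to f(s) * log((b - s)/(s - a))."""
    x = np.linspace(a, b, n)
    reg = np.where(np.abs(x - s) > 1e-12, (f(x) - f(s)) / (x - s), 0.0)
    dx = x[1] - x[0]
    trap = dx * (np.sum(reg) - 0.5 * (reg[0] + reg[-1]))   # trapezoidal rule
    return trap + f(s) * np.log((b - s) / (s - a))

# checks against analytic values:
# PV int_1^4 dx/(x-2) = log 2,   PV int_1^4 x/(x-2) dx = 3 + 2 log 2
assert abs(pv_integral(lambda x: np.ones_like(x), 1, 4, 2) - np.log(2)) < 1e-9
assert abs(pv_integral(lambda x: x, 1, 4, 2) - (3 + 2 * np.log(2))) < 1e-9
```

The subtraction makes the integrand regular at $x=s$, so an ordinary quadrature rule applies; this is the kind of routine one would call repeatedly inside the iterative solution of the trajectory equations.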
\section{Numerical Results} \label{sec:numerical results} In principle, the method described in the previous section is suitable for resonances that appear in the elastic scattering amplitude, {\it i.e.}, resonances that decay into a single two-body channel. For simplicity we also focus on cases where the two mesons in the scattering state have the same mass. We assume that both the $f_2(1270)$ and the $f_2'(1525)$ resonances can be treated as purely elastic, and we will use their decay fractions into channels other than $\pi\pi$ and $K\bar K$, respectively, as an additional systematic uncertainty in their widths and couplings. In our numerical analysis we fit the pole, $s_p$, and residue, $|g|^2$, found in the second Riemann sheet of the Regge amplitude. In this amplitude, $\alpha(s)$ and $\beta(s)$ are constrained to satisfy the dispersion relations in Eqs.\eqref{iteration2} and \eqref{betafromalpha}. Thus, the fit determines the parameters $\alpha_0, \alpha'$ and $b_0$ for the trajectory of each resonance. In practice, we minimize the sum of squared differences between the input and output values for the real and imaginary parts of the pole position and for the absolute value of the squared coupling, each divided by the square of the corresponding uncertainty. At each step in the minimization procedure a set of $\alpha_0, \alpha'$ and $b_0$ parameters is chosen and the system of Eqs.\eqref{iteration1} and \eqref{iteration2} is solved iteratively. The resulting Regge amplitude for each $\alpha_0, \alpha'$ and $b_0$ is then continued to the complex plane, in order to determine the resonance pole in the second Riemann sheet, and the $\chi^2$ is calculated by comparing this pole to the corresponding input. \subsection{$f_2(1270)$ resonance} \label{subsec:f2(1270)} In the case of the $f_2(1270)$ resonance, we use as input the pole obtained from the conformal parameterization of the D0 wave from Ref.~\cite{GarciaMartin:2011cn}.
In that work the authors use different parameterizations in different energy regions and impose a matching condition. Here we will use the parameterization valid in the region where the resonance dominates the scattering amplitude, namely, in the interval $2m_K\le s^{1/2}\le 1420$ MeV. Moreover, we will decrease the width down to $85\%$ of the value found in \cite{GarciaMartin:2011cn} to account for the inelastic channels. The conformal parameterization yields a pole located at $$\sqrt{s_{f_2}}=M-i\Gamma/2=1267.3^{+0.8}_{-0.9}-i(87\pm9)\text{ MeV}$$ and a coupling of $$|g_{f_2\pi\pi}|^2=25\pm3\text{ GeV}^{-2}.$$ With these input parameters, we follow the minimization procedure explained above, until we obtain a Regge pole at $\sqrt{s_{f_2}}=(1267.3\pm0.9)-i(89\pm10)\text{ MeV}$ and coupling $|g_{f_2\pi\pi}|^2=25\pm3\text{ GeV}^{-2}$. In Fig.~\ref{Fig:ampl_f2} we show the corresponding constrained Regge-pole amplitude on the real axis versus the conformal parameterization that was constrained by the data \cite{GarciaMartin:2011cn}. This comparison is a check that our Regge-pole amplitude, which neglects the background $f(l,s)$ term in Eq.\eqref{Reggeliket}, describes well the amplitude in the pole region, namely for $(M-\Gamma/2)^2<s<(M+\Gamma/2)^2$. The grey bands cover the uncertainties arising from the errors of the input and include an additional $15\%$ systematic uncertainty in the width, as explained above. Taking into account that only the parameters of the pole have been fitted, not the whole amplitude on the real axis, and that we have completely neglected the background in Eq.~(\ref{Reggeliket}), the agreement between the two amplitude models is very good, particularly in the resonance region. Of course, the agreement deteriorates as we move away from the peak region, as illustrated by the shadowed energy regions $s<(M-\Gamma/2)^2$ and $s>(M+\Gamma/2)^2$.
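The figure of merit minimized in the fit can be sketched in a few lines; the function `chi2` and its argument layout are our own illustrative names, not code from the analysis. Evaluated at the $f_2(1270)$ input values quoted above, it vanishes by construction:

```python
# Sketch of the objective described in the text: squared input-output
# differences of the pole position (real and imaginary parts) and of the
# squared coupling, each divided by the squared uncertainty.
def chi2(pole_out, g2_out, pole_in, pole_err, g2_in, g2_err):
    return ((pole_out.real - pole_in.real) / pole_err.real) ** 2 \
         + ((pole_out.imag - pole_in.imag) / pole_err.imag) ** 2 \
         + ((g2_out - g2_in) / g2_err) ** 2

# f2(1270) input: sqrt(s_p) = 1267.3 - i 87 MeV, |g|^2 = 25 GeV^-2
pole_in, pole_err = 1267.3 - 87.0j, 0.9 + 9.0j   # MeV (symmetrized errors)
g2_in, g2_err = 25.0, 3.0                        # GeV^-2

assert chi2(pole_in, g2_in, pole_in, pole_err, g2_in, g2_err) == 0.0
```

Each call of this objective requires one iterative solution of the dispersion relations to produce `pole_out` and `g2_out` for the trial parameters.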
\begin{figure} \hspace*{-.6cm} \includegraphics[scale=0.9,angle=-90]{amplitud-f2-both-band-abs.eps} \caption{\rm \label{Fig:ampl_f2} The solid line represents the absolute value of the constrained Regge-pole amplitude for the $f_2(1270)$ resonance. The gray bands cover the uncertainties due to the errors in the input pole parameters. The dashed line corresponds to the absolute value of the data fit obtained in \cite{GarciaMartin:2011cn}. Let us recall that only the parameters of the pole given by this parameterization have been used as input, and not the amplitude itself. The regions covered with a mesh correspond to $s<(M-\Gamma/2)^2$ and $s>(M+\Gamma/2)^2$, where the background might not be negligible anymore. } \end{figure} Since our constrained Regge amplitude provides a good description of the resonance region, we can trust the resulting Regge trajectory. The parameters of the trajectory obtained through our minimization procedure are \begin{equation} \alpha_0=0.9^{+0.2}_{-0.3}\,;\hspace{3mm} \alpha'=0.7^{+0.3}_{-0.2} \text{ GeV}^{-2};\hspace{3mm}b_0=1.3^{+1.4}_{-0.8}\,.\label{eq:paramf2} \end{equation} In Fig.~\ref{Fig:alpha_f2} we show the real and imaginary parts of $\alpha(s)$, with solid and dashed lines, respectively. Again, the gray bands cover the uncertainties coming from the errors in the input pole parameters. We find that the real part of the trajectory is almost linear and much larger than the imaginary part, as expected for Regge trajectories of ordinary mesons. For comparison, we also show, with a dotted line, the Regge trajectory obtained in~\cite{Anisovich:2000kxa} by fitting a linear Regge trajectory to the meson states associated with the $f_2(1270)$, which is traditionally referred to as the $P'$ trajectory. We see that the two trajectories are in good agreement. Indeed, our parameters are compatible, within errors, with those in~\cite{Anisovich:2000kxa}: $\alpha_{P'}\approx0.71$ and $\alpha'_{P'}\approx0.83\text{ GeV}^{-2}$.
We also include in Fig.~\ref{Fig:alpha_f2} the resonances from the PDG listing~\cite{PDG} that could be associated with this trajectory. In Fig.~\ref{Fig:alpha_f2} the trajectory has been extrapolated to high energies, where the elastic approximation does not hold any more and we cannot hope to give a precise prediction for its behavior. The only reason to do this is to show the position of the candidate states connected to the $f_2(1270)$. In the figure, this region is covered with a mesh, to the right of the line at the value of $s$ corresponding to the resonance mass plus three half-widths. Of course, we cannot confirm which of these resonances belongs to the $f_2(1270)$ trajectory, but we observe that the $J=4$ resonance could be the $f_4(2050)$, as proposed in~\cite{Anisovich:2000kxa}, or the $f_J(2220)$ \footnote{This resonance still ``needs confirmation'' and it is not yet known whether its spin is 2 or 4 \cite{PDG}.} or even the $f_4(2300)$. All these resonances appear in the PDG, but are omitted from the summary tables. \begin{figure} \hspace*{-.6cm} \includegraphics[scale=0.9,angle=-90]{alpha-f2.eps} \caption{\rm \label{Fig:alpha_f2} Real (solid) and imaginary (dashed) parts of the $f_2(1270)$ Regge trajectory. The gray bands cover the uncertainties due to the errors in the input pole parameters. The area covered with a mesh is the mass region starting three half-widths above the resonance mass, where our elastic approach should be considered only as a mere extrapolation. For comparison, we show with a dotted line the $f_2(1270)$ Regge trajectory obtained in~\cite{Anisovich:2000kxa}, traditionally called the $P'$ trajectory. We also show the resonances listed in the PDG that are candidates for this trajectory. Note that their average mass does not always coincide with the nominal one, as is the case for the $f_2(1270)$.} \end{figure} \subsection{$f_2'(1525)$ resonance} \label{subsec:f2'(1525)} As commented above, the $f_2'(1525)$ decays mainly to two kaons.
Although there is no scattering data on the $l=2$ elastic $\bar KK$ phase shift in this mass region, the mass and width of the $f_2'(1525)$ are given in the PDG~\cite{PDG}. Thus we use $M_{f_2'}=1525\pm5$ MeV and $\Gamma^{KK}_{f_2'}=69^{+10}_{-9}$ MeV, where the central value of this width corresponds to the decay into $\bar KK$ only. Now, we infer the scattering pole parameters assuming the $f_2'(1525)$ is well described by an elastic Breit-Wigner shape, so that we take the pole to be at $s_{f_2'}=(M_{f_2'}-i \Gamma_{f_2'}/2)^2$ and the residue to be ${\rm Res}=-M_{f_2'}^2\Gamma_{f_2'}^{KK}/2p$, where $p$ is the CM momentum of the kaons. Since $\vert g \vert^2=-16 \pi (2l+1)\,{\rm Res}/(2p)^{2l}$, we find $|g_{f_2'KK}|^2=19\pm 3 \text{ GeV}^{-2}$. With these input parameters we solve the dispersion relations using the same minimization method and obtain the following Regge pole parameters: $\sqrt{s_{f_2'}}=(1525\pm5 )-i(34^{+4}_{-5})\text{ MeV}$ and $|g_{f_2'KK}|^2=19\pm3\text{ GeV}^{-2}$. Since we lack experimental data with which to compare the amplitudes, we proceed to examine the trajectory. The parameters that we obtain are \begin{equation} \alpha_0=0.53^{+0.10}_{-0.44}\,;\hspace{3mm} \alpha'=0.63^{+0.20}_{-0.05} \text{ GeV}^{-2};\hspace{3mm}b_0=1.33^{+0.63}_{-0.09}\,,\label{eq:paramsf2p} \end{equation} which give the Regge trajectory shown in Fig.~\ref{Fig:alpha_ff2}. Again, we find the real part nearly linear and much larger than the imaginary part. As in the case of the $f_2(1270)$, the slope is compatible with that found for the $P'$ trajectory in~\cite{Anisovich:2000kxa}, $\alpha'_{P'}\approx0.83\text{ GeV}^{-2}$, and the intercepts also agree. As we did for the $f_2(1270)$, we include in Fig.~\ref{Fig:alpha_ff2} the $J=4$ candidates for the $f_2'(1525)$ trajectory. These are the $f_J(2220)$ and the $f_4(2300)$.
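The Breit-Wigner estimate of the coupling can be checked in a few lines; the kaon mass value below is our own choice (the text only specifies the $\bar KK$ threshold $2m_K$), and the computation simply chains the two formulas just quoted:

```python
import math

# |g|^2 = 16 pi (2l+1) |Res| / (2p)^{2l}, with Res = -M^2 Gamma_KK / (2p), l = 2
M, Gamma, mK, l = 1.525, 0.069, 0.4957, 2      # GeV units; mK is our assumption
p = math.sqrt(M**2 / 4 - mK**2)                # CM kaon momentum
res = M**2 * Gamma / (2 * p)                   # |Res|
g2 = 16 * math.pi * (2 * l + 1) * res / (2 * p) ** (2 * l)
# g2 comes out close to the 19 GeV^-2 quoted in the text
assert abs(g2 - 19.0) < 1.0
```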
We remark that there is no experimental evidence of the $f_4(2150)$ that was predicted in~\cite{Anisovich:2000kxa} from their analysis of the $f_2'(1525)$ trajectory. As commented before, these resonances lie in a region, covered with a mesh in Fig.~\ref{Fig:alpha_ff2}, beyond the strict applicability limit of our approach, where our results must be considered qualitative at best. Finally, we remark that the PDG list includes another $f_2$ resonance, albeit one requiring confirmation. It has a mass between that of the $f_2(1270)$ and the $f_2'(1525)$ and it could also have either the $f_J(2220)$ or the $f_4(2300)$ as its higher-mass partner. \begin{figure} \hspace*{-.6cm} \includegraphics[scale=0.9,angle=-90]{alpha-ff2.eps} \caption{\rm \label{Fig:alpha_ff2} Real (solid) and imaginary (dashed) parts of the $f_2'(1525)$ Regge trajectory. The gray bands cover the uncertainties due to the errors in the input pole parameters and the area covered with a mesh is the mass region starting three half-widths above the resonance mass, where our elastic approach must be considered just as an extrapolation. For comparison, we show with a dotted line the Regge trajectory obtained in~\cite{Anisovich:2000kxa} and the resonances listed in the PDG that could belong to this trajectory.} \end{figure} \section{Dispersion relation with three subtractions} \label{sec:3subs} As already mentioned in the introduction, one may wonder whether the linearity of the trajectories we obtain for the two D-wave resonances, as well as for the $\rho(770)$ in~\cite{Londergan:2013dza}, is related to the use of two subtractions in the dispersion relation for $\alpha(s)$. In particular, since the resonances are rather narrow one could expect the imaginary part of their trajectories to be small, so that if the last term in Eq.~\eqref{alphadisp} were dropped the trajectory would reduce to a straight line.
Thus, in order to show that the linearity of the trajectory is not forced by the particular parameterization, we repeated the calculations using three subtractions in the dispersion relations, \begin{align} \mbox{Re} \,\alpha(s) & = \alpha_0 + \alpha' s + \alpha'' s^2 + \frac{s^2}{\pi} PV \int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s'^2 (s' -s)}, \label{iteration1-3sub}\\ \mbox{Im}\,\alpha(s)&= \frac{ \rho(s) b_0 \hat s^{\alpha_0 + \alpha' s+\alpha'' s^2} }{|\sqrt{\Gamma(\alpha(s) + \frac{3}{2})}|} \exp\Bigg( - \frac{1}{2}[1-\log(\alpha'' s_0^2)] s (R+\alpha'' s)-Q s\nonumber\\ & \hspace{4.4cm} + \!\frac{s^2}{\pi} PV\!\!\!\int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s') \log\frac{\hat s}{\hat s'} + \frac{1}{2}\mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s'^2 (s' - s)} \Bigg), \label{iteration2-3sub} \end{align} with \begin{equation} R=B-\frac1{\pi}\int_{4m^2}^\infty ds' \frac{ \mbox{Im}\alpha(s')}{s'^2}, \end{equation} and \begin{equation} Q=-\frac1{\pi}\int_{4m^2}^\infty ds' \frac{ -\mbox{Im}\alpha(s')\log{\hat s'}+\frac{1}{2}\mbox{arg }\Gamma\left(\alpha(s')+\frac{3}{2}\right)}{s'^2}. \end{equation} The reason why the constants $R$ and $Q$ and the square root of $\Gamma$ have been introduced is to ensure that, at large $s$, $\mbox{Im}\,\alpha(s)$ behaves as $1/s$. The parameters that we obtain for the trajectories with these dispersion relations are shown in Table~\ref{Tab:3sub}. 
\renewcommand{\arraystretch}{1.3} \begin{table} \centering \caption{Parameters of the $f_2(1270)$, $f_2'(1525)$ and $\rho(770)$ Regge trajectories using three-time-subtracted dispersion relations.} \label{Tab:3sub} \begin{tabular*}{0.7\textwidth}{@{\extracolsep{\fill} }ccccc}\hline & $\alpha_0$ & $\alpha'$ (GeV$^{-2}$) & $\alpha''$ (GeV$^{-4}$) & \hspace{5mm}$b_0$\hspace{5mm} \\\hline $f_2(1270)$ & 1.01 & 0.97 & 0.04 & 2.13\\ $f_2'(1525)$ & 0.42 & 0.65 & 0.02 & 4.58 \\ $\rho(770)$ & 0.56 & 1.11 & 0.03 & 0.88 \\ \hline \end{tabular*} \end{table} With the above parameterization we obtain for the fitted pole parameters $\sqrt{s_{f_2}}=1267.3-i 90 \text{ MeV}$, $|g_{f_2\pi\pi}|^2=25\text{ GeV}^{-2}$, $\sqrt{s_{f_2'}}=1525-i35\text{ MeV}$, $|g_{f_2'KK}|^2=19\text{ GeV}^{-2}$, $\sqrt{s_{\rho}}=763-i 74\text{ MeV}$ and $|g_{\rho\pi\pi}|^2=35\text{ GeV}^{-2}$. Therefore, despite having four parameters to fit three numbers, we find no real improvement in the description of the poles. In the case of three subtractions, neglecting the imaginary part of the trajectories would result in a quadratic trajectory. Therefore, in Fig.~\ref{Fig:3sub} we compare the trajectories using three (solid line) and two (dashed line) subtractions in the dispersion relations. We observe that in both cases there is a curvature, but that in the elastic region the trajectories are almost linear. The difference between the two methods only becomes apparent for masses well above the range of applicability. Moreover, the difference between the results obtained using two and three subtractions can be used as an indicator of the stability of our results and therefore confirms that the applicability range for our method is well estimated and ends as soon as the inelasticity in the wave becomes sizable.
\begin{figure} \begin{tabular}{ccc} \includegraphics[scale=0.6,angle=-90]{alpha-f2-3sub.eps}& \hspace{-3mm}\includegraphics[scale=0.6,angle=-90]{alpha-ff2-3sub.eps}& \hspace{-3mm}\includegraphics[scale=0.6,angle=-90]{alpha-rho-3sub.eps} \end{tabular} \caption{\rm \label{Fig:3sub} Regge trajectories obtained using three-time-subtracted dispersion relations (solid lines) compared to the ones obtained with twice-subtracted dispersion relations (dashed lines with gray error bands).} \end{figure} \section{Discussion, conclusions and outlook} \label{conclusions} In~\cite{Londergan:2013dza} a dispersive method was developed to calculate Regge trajectories of resonances that appear in the elastic scattering of two mesons. We showed how, using the scattering pole associated with the resonance, it is possible to determine whether its trajectory is of a standard type, {\it i.e.}, real and linear, as followed by ``ordinary'' $\bar qq$-mesons, or not. This method thus provides a possible benchmark for identifying non-ordinary mesons. In particular, the ordinary Regge trajectory of the $\rho(770)$, which is a well-established $\bar qq$ state, was successfully predicted, whereas the $\sigma$ meson, a long-standing candidate for a non-ordinary meson, was found to follow a completely different trajectory. In the first part of this work we have successfully predicted the trajectories of two other well-established ordinary resonances, the $f_2(1270)$ and $f_2'(1525)$. In particular, from the parameters of the associated poles in the complex energy plane we have calculated their trajectories and have shown that they are almost real and very close to a straight line, as expected. In the second part of this work we have addressed the question of whether choosing two subtractions in the dispersion relations of \cite{Londergan:2013dza} was actually imposing that the real part of the trajectory is a straight line for relatively narrow resonances.
To address this question we analyzed the same resonances using a dispersion relation with an additional subtraction. We have shown that within the range of applicability of our approach, which basically coincides with the elastic regime, the resulting trajectories are once again very close to a straight line. In the future it will be interesting to use such dispersive methods to determine trajectories of other mesons, {\it e.g.} the $K^*(892)$, as well as its controversial ``partner'', the scalar $K^*(800)$, another long-standing candidate for a non-ordinary meson. Heavy mesons in the charm and beauty sectors can also be examined. We also plan to extend the method to meson-baryon scattering, where, for example, the $\Delta(1232)$ is another candidate for an ordinary resonance. We are also extending the approach to coupled channels. {\bf Acknowledgments} We would like to thank M.R. Pennington for several discussions. JRP and JN are supported by the Spanish project FPA2011-27853-C02-02. JN acknowledges funding by the Fundaci\'on Ram\'on Areces. The work of APS is supported in part by the U.S.\ Department of Energy, Office of Science, Office of Nuclear Physics, under contracts DE-AC05-06OR23177 and DE-FG0287ER40365. \vspace*{-.2cm}
\section{Introduction} Averaging data is a common procedure for noise reduction in all quantitative sciences. One measures the noisy quantity $N$ times, and then calculates the mean value of the $N$ samples. Assuming that the useful signal part is the same for each run of the experiment, the random noise part averages out and leads to an improvement by a factor $\sqrt{N}$ of the signal-to-noise ratio (SNR). Instead of measuring the same sample $N$ times, one may of course also measure $N$ identically prepared samples in parallel, in which case we will think of them as ``probes''. A lot of excitement has been generated by the realization that in principle one may improve upon the $\sqrt{N}$ factor by using probes that are not independent but prepared in an entangled state. It was shown \cite{Giovannetti04} that with such ``quantum-enhanced measurements'' the SNR can be improved by up to a factor $N$. Unfortunately, on the experimental side, decoherence issues have limited the quantum enhancement to very small values of $N$ \cite{Leibfried05,Nagata07,Higgins07}. For practical purposes it is therefore often more advantageous to stay with a classical protocol and increase $N$ \cite{Pinel12}. Since the decoherence problem is very difficult to solve, one should think about alternative ways of increasing the SNR through the use of quantum effects. One such idea is ``coherent averaging''. The original scheme, first introduced in \cite{Braun10.2,Braun11} and named as such in \cite{braun_coherently_2014}, works in the following way: instead of measuring the $N$ probes individually, one lets them interact coherently with an $(N+1)$st system (a ``quantum bus'') and then reads out the latter. In this way, quantum mechanical phase information from the $N$ probes can accumulate in the quantum bus, and this too can improve the SNR by a factor $N$, even when using an initial product state (see Fig.\ref{fig:cohav}).
A physical example considered in detail was the coupling of $N$ atoms to a single leaky cavity mode, which allowed the length of the cavity to be measured with a precision scaling as $1/N$, corresponding to the above SNR $\propto N$. This scaling is the long-sought Heisenberg limit (HL), contrasting with the $1/\sqrt{N}$ scaling characteristic of the standard, classical averaging regime, also called the standard quantum limit (SQL). \\ \begin{figure} \centering\includegraphics[width=4cm]{ClassicalAverSchema.pdf} \centering\includegraphics[width=4cm]{CohAverSchema.pdf} \caption{Classical averaging (left) versus coherent averaging (right). In coherent averaging, the $N$ probes are not read out individually and the results averaged, but one lets the probes coherently interact with a quantum bus, and then either measures the latter, or a global observable of the entire system. The parameter to be estimated can parametrize the probes, the quantum bus, or the interaction.\label{fig:cohav} } \end{figure} So far, however, the method has been limited to estimating a parameter linked to the interaction of the $N$ probes with the quantum bus. This makes comparison of the performance with and without the coupling to the quantum bus impossible, as in the latter case the parameter to be estimated does not even exist. In the present work we go several steps further. Firstly, we extend the scheme to estimating a parameter that characterizes the probes themselves, or the quantum bus itself. Secondly, we analyze in detail the conditions for the observation of HL scaling by systematically studying the strong, intermediate and weak coupling regimes. Numerical simulations are used in order to verify and extend results from analytical perturbation-theoretical calculations. Thirdly, we investigate the question of which part of the system should be measured.
\\ Note that achieving HL scaling of the sensitivity in coherent averaging with an initial product state is not in contradiction with the well-known no-go theorem \cite{Giovannetti04}, which is at the root of the widely held belief that entanglement is necessary for surpassing the SQL. The reason is that in \cite{Giovannetti04} the Hamiltonian is assumed to be simply a sum of Hamiltonians of independent subsystems with no interactions, which is a natural assumption when coming from classical averaging. Meanwhile, however, several other ways have been found to bypass the requirements of the theorem and thus avoid the use of entanglement for HL sensitivity, notably the use of interactions (also known as the non-linear scheme) \cite{luis_quantum_2007,napolitano_interaction-based_2011}, multi-pass schemes \cite{Higgins07}, or the encoding of a parameter other than through unitary evolution (e.g.~thermodynamic parameters such as the chemical potential) \cite{marzolino_precision_2013}. From the perspective of complex quantum systems, the models that we study are typical decoherence models: the quantum bus may be considered an environment for the $N$ probes, or vice versa. However, in general we will assume that we can control both probes and quantum bus, and in particular prepare them in well-defined initial states, which we take to be pure product states or thermal states. \section{Models and methodology} \subsection{Models} The systems we are interested in have the general structure depicted in Fig.\ref{fig:cohav}. The corresponding Hamiltonian can be written as \begin{eqnarray} \label{eq:Hgen} H&=&\delta H_0 +\varepsilon H_\text{int} \\ &=& \delta \left(\sum_{i=1}^N H_i(\omega_1) +H_R(\omega_0) \right)+ \varepsilon \left(\sum_{i,\nu} S_{i,\nu}(x)\otimes R_\nu \right) \,, \label{eq:HgenSR} \end{eqnarray} where $H_0$ contains the ``free'' part (probes and quantum bus), and $H_\text{int}$ the interaction between the probes and the quantum bus.
We have introduced two dimensionless parameters $\delta$ and $\varepsilon$ which we will use to reach the different regimes of strong, intermediate, and weak interaction. In the second line we specify the Hamiltonians $H_i$ for $N$ non-interacting probes, which we assume to depend on the parameter $\omega_1$, and the Hamiltonian $H_R$ of the quantum bus (or ``reservoir'' in the language of decoherence theory), which depends on the parameter $\omega_0$. The interaction has the most general form of a sum of tensor products of probe operators and quantum-bus operators, and we assume that it depends on a single parameter $x$. \\ As specific examples of systems of this type we consider spin systems, where both the probes and the quantum bus are spins-1/2 (or qubits) and thus described by Pauli matrices $X,Y,Z$. Without loss of generality, we can take $H_i=\frac{\omega_1}{2}Z^{(i)}$ for the $i$-th probe, and $H_R=\frac{\omega_0}{2}Z^{(0)}$ ($\hbar=1$ throughout the paper), where the bracketed superscripts denote the subsystem, and the zeroth subsystem is the quantum bus. For the interaction we focus on two different cases: an exactly solvable pure dephasing model with \begin{equation} \label{eq:zzzz} H_\text{int}=\frac{x}{2} \sum_{i} Z^{(i)} \otimes Z^{(0)}\,, \end{equation} and a model that allows exchange of energy through an $XX$-interaction, given by \begin{equation} \label{eq:zzxx} H_\text{int}=\frac{x}{2} \sum_{i} X^{(i)} \otimes X^{(0)}\,. \end{equation} We refer to these two models as the $ZZZZ$ and $ZZXX$ models. \subsection{Initial state} Given the difficulty of producing entangled states and maintaining them entangled, we consider here pure initial product states with all the probes in the same state, which may be different from the state of the quantum bus.
For the spin-systems we parametrize these states as \begin{eqnarray} \label{eq:state} \ket{\psi_0}&=& \left( \bigotimes_i^N\ket{\varphi}_i \right) \otimes \ket{\xi} \\ &=& \left( \cos(\alpha)\ket{0} +\sin(\alpha)\e{\ensuremath{ \mathrm{i} } \phi}\ket{1} \right)^{\otimes N} \nonumber\\ &&\otimes \left( \cos(\beta)\ket{0} +\sin(\beta)\e{\ensuremath{ \mathrm{i} } \varphi}\ket{1} \right)\,, \end{eqnarray} where $\ket{0},\ket{1}$ denote ``computational basis states'', i.e. $Z\ket{0}=\ket{0}$ and $Z\ket{1}=-\ket{1}$ for any spin. Eq.(\ref{eq:state}) implies that in the subspace of the probes, the initial state is a $SU(2)$ angular momentum coherent state of spin $j=N/2$. Since both initial state and the considered Hamiltonians are symmetric under exchange of the $N$ probes, this symmetry is conserved at all times, and allows for a tremendous reduction of the dimension of the relevant Hilbert space: from $2^{N+1}$ to only $2(N+1)=2(2j+1)$ dimensions. The corresponding basis in the probe-Hilbert space is the usual joint-eigenbasis $|j,m\rangle$ of total spin and its $z$-component. We will omit the label $j$ and have thus the representation of $\ket{\psi_0}$ in the symmetric sector of Hilbert space \begin{eqnarray} \label{eq:psijm} \ket{\psi_0} &=&\sum_{m=-N/2}^{N/2} \sqrt{\binom {N} {m+N/2}} \cos(\alpha)^{N/2+m}(\sin(\alpha)\e{\ensuremath{ \mathrm{i} } \phi})^{N/2-m}\nonumber\\ &&\left(\cos(\beta) \ket{m,0}+\sin(\beta)\e{\ensuremath{ \mathrm{i} }\varphi} \ket{m,1}\right)\;. \end{eqnarray} For the ZZZZ model we also consider thermal states of the probes, see eq.(\ref{eq:th}) below. In other contexts, the above models have been called spin-star models, and analyzed with respect to degradation of channel capacities and entanglement dynamics \cite{PhysRevA.81.062353,ferraro_entanglement_2009,hamdouni_exactly_2009}. 
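The reduction to the symmetric sector is straightforward to exploit numerically. A minimal sketch (the function name and argument order are our own) builds the state of Eq.(\ref{eq:psijm}) directly in the $2(N+1)$-dimensional space and checks its normalization:

```python
import numpy as np
from math import comb

def symmetric_state(N, alpha, phi, beta, varphi):
    """Initial product state in the basis |m> (probes, m = -N/2..N/2)
    tensored with {|0>, |1>} (quantum bus); N assumed even here."""
    probe = np.array([
        np.sqrt(comb(N, int(N / 2 + m)))
        * np.cos(alpha) ** (N / 2 + m)
        * (np.sin(alpha) * np.exp(1j * phi)) ** (N / 2 - m)
        for m in np.arange(-N / 2, N / 2 + 1)])
    bus = np.array([np.cos(beta), np.sin(beta) * np.exp(1j * varphi)])
    return np.kron(probe, bus)        # dimension 2(N+1) instead of 2^(N+1)

psi = symmetric_state(10, 0.3, 0.7, 1.1, 0.2)
assert psi.size == 2 * (10 + 1)
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```

Normalization follows from the binomial theorem, $\sum_k \binom{N}{k}\cos^{2k}\!\alpha\,\sin^{2(N-k)}\!\alpha = 1$, so the check passes for any angles.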
\subsection{Quantum parameter estimation theory} The question of how precisely one can measure the parameters $\omega_1,\omega_0$ and $x$ is addressed most suitably in the framework of quantum parameter estimation theory (q-pet). Q-pet builds on classical parameter estimation theory, which was developed in statistical analysis almost a century ago \cite{Rao1945,Cramer46}. There one considers a parameter-dependent probability distribution $p(A,\theta)$ of some random variable $A$. The form of $p(A,\theta)$ is known, and the task is to provide the best possible estimate of the parameter $\theta$ from a sample of $n$ values $A_i$ drawn from the distribution. For this purpose, one compares different estimators, i.e.~functions $\theta_{\rm est}(A_1,\ldots,A_n)$ that depend on the measured values $A_i$ (and nothing else), and give as output an estimate $\theta_{\rm est}$ of the true value of $\theta$. Since the $A_i$ are random, so is the estimate. By ``best estimate'' one commonly means an estimate that fluctuates as little as possible while remaining unbiased. \\ In quantum mechanics (QM), the task is to estimate a parameter $\theta$ that is encoded quite generally in a density matrix, $\rho(\theta)$. One then has the additional freedom of choosing which observable to measure (or, more generally, which positive-operator valued measure (POVM) \cite{Peres93}). The so-called quantum Cram\'{e}r-Rao bound is optimized over all possible POVM measurements and data analysis schemes in the sense of unbiased estimators. It gives the smallest possible uncertainty of $\theta_{\rm est}$ no matter what one measures (as long as one uses a POVM measurement --- in particular, post-selection is not covered, see \cite{braun_precision_2014} for an example), and no matter how one analyzes the data (as long as one uses an unbiased estimator). At the same time it can be reached, at least in principle, in the limit of a large number of measurements.
The quantum Cram\'er-Rao bound (QCR) has therefore become the standard tool in the field of precision measurement. It is given by \begin{equation}\label{qfi_inegality} \text{Var}(\theta_\text{est}) \geq \frac{1}{M I_\theta}\quad \;, \end{equation} where Var($\theta_\text{est}$) is the variance of the estimator, $I_\theta$ the Quantum Fisher Information (QFI), and $M$ the number of independent measurements. A basis-independent form of $I_\theta$ reads \cite{Paris09} \begin{equation}\label{QFI_ind} I_\theta =2 \int_0^\infty ds\, {\rm tr}\left[\partial_\theta \rho_\theta \e{-\rho_\theta s} \partial_\theta \rho_\theta \e{-\rho_\theta s}\right]. \end{equation} In the eigenbasis of $\rho_\theta$, i.e.~for $\rho_\theta=\sum_r p_r \dens{\psi_r}{\psi_r}$ we obtain \begin{equation}\label{QFI_diag} I_\theta =\sum_r \frac{(\partial_\theta p_r)^2}{p_r} +2 \displaystyle\sum_{\substack{n,m }}\frac{(p_n-p_m)^2}{p_n+p_m} \left| \pscal{\psi_n}{\partial_\theta \,\psi_m} \right|^2 \;, \end{equation} where the sums are over all $r$ and $n,m$ such that the denominators do not vanish. It is possible to give a geometrical interpretation to the QFI, namely in terms of statistical distance. To this end one defines the Bures distance between two states $\rho$ and $\sigma$ as \begin{equation} d_B(\rho,\sigma)=\sqrt{2}\sqrt{1- \text{tr}[(\rho^{1/2} \sigma \rho^{1/2})^{1/2}]} \;. \end{equation} In the case of two pure states $\phi$, $\psi$, we have \begin{equation} d_B(\ket{\phi},\ket{\psi})=\sqrt{2}\sqrt{1-\vert \pscal{\phi}{\psi} \vert} \;. \end{equation} The Bures distance was shown to be related to the QFI by \cite{Braunstein94} \begin{equation} I_\theta= 4 d_B^2 (\rho(\theta),\rho(\theta+d\theta))/d\theta^2 \;.
\end{equation} This provides an intuitive interpretation of the best achievable sensitivity: what matters is how strongly two states that differ by an infinitesimal change of the parameter $\theta$ can be distinguished, where the difference is measured by their Bures distance. In the case of a pure state, the QFI is equal to \begin{equation} I_\theta=4(\pscal{\partial_\theta \,\psi(\theta)}{\partial_\theta \,\psi(\theta)}-\vert\pscal{\psi(\theta)}{\partial_\theta \,\psi(\theta)} \vert^2 )\;. \label{eq:buresD} \end{equation} \subsection{Perturbation theory} It is clear that the model (\ref{eq:Hgen}) cannot be solved in all generality. One way of making progress is to use perturbation theory. This can be done in two ways: In the standard use of perturbation theory one solves the Schr\"odinger equation for the free Hamiltonian $H_0$ and then treats the interaction $H_{\rm int}$ perturbatively, provided that the interaction is small enough. In the regime of strong interaction, one can do the opposite: solve the pure interaction problem first, and then calculate the additional effect of the free Hamiltonian as a perturbation. Formally this does not make a big difference. More important, already at the level of the expression for the QFI, is the question of whether the parameter to be estimated enters through the perturbation or through the dominant part of the Hamiltonian. We call the perturbation theory relevant for these two cases PT1 and PT2, respectively.\\ To better understand the difference between PT1 and PT2, consider a Hamiltonian containing two parts, one of which depends on the parameter that we want to estimate, \begin{equation} H(\theta)=H_1(\theta)+H_2 \;,\label{Hlt} \end{equation} and the state \begin{equation} \ket{\psi(\theta)}=\exp(- \ensuremath{ \mathrm{i} } t H(\theta))\ket{\psi_0}\;.
\end{equation} In PT1 we switch to the interaction picture with respect to $H_2$, \begin{equation} H_{1,I}(\theta,t)=\e{ \ensuremath{ \mathrm{i} } t H_2}H_1(\theta)\e{ - \ensuremath{ \mathrm{i} } t H_2}\;. \end{equation} Under the conditions that $\vert H_{1,I} t\vert , \vert {H_{1,I}}'t \vert \ll 1 $ and $||H_1(\theta)||\ll ||H_2||$ we can use second order perturbation theory in order to calculate the QFI \cite{Braun11}, \begin{equation}\label{pertQFI} I_\theta= 4 \int_0^t \int_0^t dt_1 dt_2 K_{\ket{\psi_0}}({H_{1,I}}'(\theta,t_1),{H_{1,I}}'(\theta,t_2))\;, \end{equation} with $K_{\ket{\psi}}(A,B)=\moyvec{AB}{\psi}-\moyvec{A}{\psi}\moyvec{B}{\psi}$. If the free Hamiltonian commutes with the interaction Hamiltonian (as is the case in the ZZZZ model), and under the additional assumption that $ [H_1(\theta),{H_1(\theta)}']=0$, one can calculate the QFI exactly, \begin{equation} I_\theta =4 t^2 K_{\ket{\psi_0}}({H_1}'(\theta),{H_{1}}'(\theta))\;. \end{equation} The correlator on the right-hand side is the variance of ${H_{1}}'(\theta)$ in the initial state, and we thus recover a well-known result in q-pet \cite{braunstein_generalized_1996}. At the same time, eq.(\ref{pertQFI}) tells us that the QFI is of second order in the perturbation.\\ In PT2, we do the opposite: we estimate the parameter linked to the Hamiltonian that dominates. With the notation of eq.(\ref{Hlt}) this means that we try to estimate the parameter $\theta$, assuming that $\vert H_{2,I}t \vert , \vert {H_{2,I}}'t \vert \ll 1 $ and $||H_1(\theta)||\gg ||H_2||$, with \begin{equation} H_{2,I}(\theta,t)=\e{ \ensuremath{ \mathrm{i} } t H_1(\theta)}H_2\e{ - \ensuremath{ \mathrm{i} } t H_1(\theta)}\;. \end{equation} In this case, the result (\ref{pertQFI}) does not apply anymore. Indeed, to obtain (\ref{pertQFI}) we calculated the overlap $\pscal{\psi(\theta,t)}{\psi(\theta + d\theta,t)}$, which equals $\pscal{\psi_{1,I}(\theta,t)}{\psi_{1,I}(\theta+d\theta,t)}$.
Now we need \begin{equation}\label{over_2} \bra{\psi_{2,I}(\theta,t)}\e{\ensuremath{ \mathrm{i} } t H_1(\theta)}\e{- \ensuremath{ \mathrm{i} } t H_1(\theta+d\theta)}\ket{\psi_{2,I}(\theta+d\theta,t)} \,. \end{equation} Here we have defined $$\ket{\psi_{j,I}(\theta,t)}=T\left[\exp(-\ensuremath{ \mathrm{i} }\int_0^tH_{j,I}(t')dt')\right]\ket{\psi_0}$$ for $j=1,2$, where $T$ is the time-ordering operator. In this case, the lowest order term that appears in the expansion of the QFI is the {\em unperturbed} term, \begin{equation} I_\theta = 4 t^2 \left( \moyvec{{H_1}'(\theta)^2}{\psi_0}-\moyvec{{H_1}'(\theta)}{\psi_0}^2 \right).\label{PT20} \end{equation} The formal range of validity is now given by $\vert H_{2,I} t\vert , \vert {H_{2,I}}'t \vert \ll 1 $ and $||H_1(\theta)||\gg ||H_2||$. The first and second order terms are too cumbersome to be reported here. But since in the formal range of validity of the perturbation theory they have to remain small compared to the zeroth order term, the scaling of the QFI with $N$ is in any case given by the zeroth order term (\ref{PT20}). For practical applications, and in the spirit of the original ``coherent averaging'' scheme, we are also interested in the sensitivity that can be achieved by measuring only the quantum bus. To this end, we calculate the reduced density matrix by tracing out the probes, and then the QFI for the corresponding mixed state, which we call the ``local QFI'' $I_\theta^{(0)}$. Since tracing out a subsystem corresponds to a quantum channel, under which the Bures distance is contractive \cite{Bengtsson06}, we have $I_\theta\ge I_\theta^{(0)}$. In general, the calculation of the QFI for a mixed state is rather difficult, as one has to diagonalize the density matrix twice (either for two slightly different values of the parameter when calculating the derivatives in (\ref{QFI_diag}), or for calculating $\rho^{1/2}$ and $(\rho^{1/2}\sigma\rho^{1/2})^{1/2}$).
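For a single qubit, however, no diagonalization is needed: the fidelity entering the Bures distance has the closed form $({\rm tr}[(\rho^{1/2}\sigma\rho^{1/2})^{1/2}])^2={\rm tr}(\rho\sigma)+2\sqrt{\det\rho\,\det\sigma}$, so the QFI follows from a finite difference of $d_B$. A minimal sketch (our own illustration, not code from this work), checked on a Bloch vector of fixed length $r$ rotating in the equatorial plane, whose QFI with respect to the rotation angle is known to be $r^2$:

```python
import numpy as np

# Mixed-state qubit QFI from the Bures distance: for 2x2 states the fidelity
# obeys (tr[(rho^{1/2} sigma rho^{1/2})^{1/2}])^2 = tr(rho sigma)
#   + 2 sqrt(det rho det sigma), so I = 4 d_B^2/dtheta^2 = 8(1 - sqrt(F))/dtheta^2.

def sqrt_fidelity(rho, sigma):
    t = np.trace(rho @ sigma).real
    t += 2.0 * np.sqrt(abs(np.linalg.det(rho) * np.linalg.det(sigma)))
    return np.sqrt(t)

def qubit_qfi(rho_of_theta, theta, dtheta=1e-4):
    f = sqrt_fidelity(rho_of_theta(theta), rho_of_theta(theta + dtheta))
    return 8.0 * (1.0 - f) / dtheta**2

# Check: rho(theta) = (1 + r cos(theta) X + r sin(theta) Y)/2 has QFI r^2.
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
def rho_rot(theta, r=0.6):
    return 0.5 * (np.eye(2) + r * (np.cos(theta) * X + np.sin(theta) * Y))
```

The finite-difference step trades discretization error against round-off; $d\theta\sim 10^{-4}$ is a safe middle ground here.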
Techniques for bounding the QFI for mixed states have been developed in the literature \cite{escher_general_2011,Kolodynski10}. Here, we only calculate the QFI for the mixed state of a single qubit, which is easily done numerically. Another important practical question is which measurement achieves the optimal sensitivity. In principle, the answer can be found from the QFI formalism \cite{Braunstein94,Paris09}. Here we follow the strategy of considering local von Neumann measurements of the quantum bus and comparing the achievable sensitivity to the optimal one. If a von Neumann measurement of a Hermitian operator $B$ is performed, one can show that an estimator $\theta_{\rm est}(B_1,\ldots,B_M)=f^{-1}(\sum_{i=1}^MB_i/M)$ with $f(\theta)=\langle\psi(\theta)| B|\psi(\theta)\rangle$ leads, to first order in the expansion of $f^{-1}$, to an uncertainty (standard deviation) of $\theta_{\rm est}$ given by \begin{equation} \label{eq_deltaE} \delta_\theta^B= \frac{\sqrt{{\rm Var}(B)}}{\sqrt{M}\vert \partial_\theta \moy{B} \vert}\; \end{equation} with ${\rm Var}(B)=\moy{B^2}-\moy{B}^2$ (see eq.(9.16) in \cite{kay_fundamentals_1993}). This ``method of the first moment'' can always be rendered locally unbiased by adding a shift to $\theta_{\rm est}$. The uncertainty $\delta_\theta^B$ corresponds to the minimal change of $\theta$ that shifts the distribution of the $B_i$ by one standard deviation, assuming that the shift is linear in $d \theta$ for small $d\theta$. Since $\delta_\theta^B$ is based on a particular estimator, we have $(\delta_\theta^B)^{-2}\le M I_\theta$. We call the local observable of the quantum bus $A^{(0)}=\indentit^{\otimes N}\otimes A$, and denote by $\delta_{x}^{A^{(0)}}$ and $\delta_{\omega_1}^{A^{(0)}}$ the corresponding uncertainties for the estimation of $x$ and $\omega_1$. The QFI of the reduced density matrix for the quantum bus alone will be denoted $I_{x}^{(0)}$ and $I_{\omega_1}^{(0)}$.
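In simple cases the method of the first moment saturates the QCR bound. A toy sketch (our own illustration, with hypothetical parameter values, not one of the models studied here): a qubit prepared in $\ket{+}$ and evolved under $\omega Z/2$ for a time $t$ has $\langle X\rangle=\cos(\omega t)$ and pure-state QFI $I_\omega=t^2$, and eq.(\ref{eq_deltaE}) gives $\delta_\omega^X=1/(\sqrt{M}\,t)$, independent of $\omega$, i.e.~the QCR bound is reached:

```python
import numpy as np

# First-moment estimation of omega from M measurements of X on
# exp(-i w t Z/2)|+>:  P(X=+1) = (1 + cos(w t))/2, and the estimator is
# w_est = arccos(mean X)/t. Eq. (eq_deltaE) predicts delta = 1/(sqrt(M) t),
# which equals the quantum Cramer-Rao bound 1/sqrt(M I_w) with I_w = t^2.
rng = np.random.default_rng(1)
w, t, M, trials = 1.0, 1.0, 1000, 4000
p_plus = (1 + np.cos(w * t)) / 2
xs = np.where(rng.random((trials, M)) < p_plus, 1.0, -1.0)
w_est = np.arccos(np.clip(xs.mean(axis=1), -1, 1)) / t
predicted = 1 / (np.sqrt(M) * t)     # delta_omega^X from eq. (eq_deltaE)
empirical = w_est.std()
```

The cancellation of $\sqrt{{\rm Var}(X)}=\vert\sin(\omega t)\vert$ against $\vert\partial_\omega\langle X\rangle\vert=t\vert\sin(\omega t)\vert$ is what makes the uncertainty parameter-independent (away from $\sin(\omega t)=0$).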
We have $MI_\theta\ge MI_\theta^{(0)}\ge (\delta_\theta^{A})^{-2}=(\delta_\theta^{A^{(0)}})^{-2}$. The last step follows from $\langle A^{(0)}\rangle=\langle A\rangle $ for any quantum state. \subsection{Numerics} When $\varepsilon H_{\rm int}$ and $\delta H_0$ are of the same order, both forms of perturbation theory typically break down. Unless one has an exactly solvable model (such as the ZZZZ model), one then has to rely on numerics. In addition, we use numerics to test all our analytical results. The perturbative results are, in general, limited to a finite range of $N$, such that when one wants to make a statement about the scaling of the sensitivity of a measurement with $N$ for large $N$, one has to rely once more on numerics. All numerical calculations use one of the spin-Hamiltonians, eq.(\ref{eq:zzzz}) or (\ref{eq:zzxx}). \\ The numerical results are obtained by calculating the time evolution operator $U(t)=\exp(-\ensuremath{ \mathrm{i} } H t)$ for the full Hamiltonian in the Schr\"odinger picture, propagating the initial state (\ref{eq:psijm}) for two slightly different values of the parameter we are interested in ($x$, $\omega_1$, or $\omega_0$), obtaining from this a numerical approximation of the derivative of $|\psi(t)\rangle$ with respect to the parameter, and then calculating the overlaps in eq.(\ref{eq:buresD}). In this way, we obtain the ``global QFI'', which is relevant if one has access to the entire system (i.e.~probes and quantum bus). To check the stability of the numerical derivative, we calculate numerical approximations of the derivative for two different parameter increments, $10^{-8}$ and $10^{-6}$. For the spin-Hamiltonians considered, the reduced density matrix of the quantum bus is that of a single spin-1/2, which simplifies the calculation of the QFI. For numerical calculations we use the basis--independent form (\ref{QFI_ind}) of the QFI and perform the integral analytically.
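The procedure just described can be condensed into a few lines. The following sketch is our own illustration, not the code used for the figures; the Hamiltonian in the sanity check, $H(\omega)=\omega Z/2$ on a single qubit prepared in $\ket{+}$ (exact QFI $t^2$), is a stand-in for eqs.(\ref{eq:zzzz}) and (\ref{eq:zzxx}):

```python
import numpy as np

# Global QFI by brute force: propagate |psi0> under H(theta) and
# H(theta + dtheta), form the finite-difference derivative of |psi(t)>,
# and evaluate I = 4(<dpsi|dpsi> - |<psi|dpsi>|^2), eq. (eq:buresD).

def evolve(H, psi0, t):
    # U(t) = exp(-i H t) via the eigendecomposition of the Hermitian H
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

def global_qfi(H_of_theta, psi0, t, theta, dtheta=1e-6):
    psi = evolve(H_of_theta(theta), psi0, t)
    dpsi = (evolve(H_of_theta(theta + dtheta), psi0, t) - psi) / dtheta
    return 4.0 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi))**2)

# Sanity check: H(w) = w Z/2, |psi0> = |+>, exact QFI = t^2 (= 1 for t = 1).
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
I_w = global_qfi(lambda w: 0.5 * w * Z, plus, t=1.0, theta=1.0)
```

Repeating the calculation with two different increments (as done in the text with $10^{-8}$ and $10^{-6}$) checks the stability of the numerical derivative.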
We also calculated $\delta_{\omega_1}^{A^{(0)}}$ and $\delta_{x}^{A^{(0)}}$ numerically ``exactly'' by directly evaluating (\ref{eq_deltaE}).\\ In order to check the validity of the perturbative results, we verified that in the range of validity of the perturbation theory the difference between the exact QFI and the perturbative result scales as $\delta^3$ or $\varepsilon^3$ as a function of the perturbative parameter $\delta$ or $\varepsilon$. \section{Results} We now present our results for the estimation of $x$, $\omega_0$, and $\omega_1$ in the different regimes, focusing first on the global QFI. All figures shown have the parameters $\omega_0=1$, $\omega_1=1$, $t=1$, $x=1$. The initial pure state (\ref{eq:state}) is always taken with $\alpha=\pi/3$, $\beta=\pi/6$, $\phi=3\pi/8$, $\varphi=5\pi/8$ unless otherwise indicated. \subsection{Global QFI} \subsubsection{Estimation of $x$} The perturbation theory for estimating $x$ for small interactions was developed in \cite{Braun11}. Inserting the form (\ref{eq:HgenSR}) in eq.(\ref{pertQFI}), one finds that for identical and identically prepared systems $\mathcal{S}_i$ ($S_{i,\nu}=S_{\nu}$ and $\ket{\varphi}_i =\ket{\varphi}$) the correlation function in eq.(\ref{pertQFI}) is given to lowest order in $\varepsilon$ by \begin{multline} K_{\ket{\psi_0}}({H_{{\rm int},I}}'(x,t_1),{H_{{\rm int},I}}'(x,t_2))=\\ \varepsilon^2 \sum_{\mu,\nu}\lbrace N K_{\ket{\varphi}}\left({S_\nu}'(x,t_1),{S_\mu}'(x,t_2) \right) \moyvec{R_\nu(t_1)R_\mu(t_2)}{\xi} \\ + \left. N^2 \moyvec{{S_\nu}'(x,t_1)}{\varphi} \moyvec{{S_\mu}'(x,t_2)}{\varphi}K_{\ket{\xi}}\left( R_\nu(t_1), R_\mu(t_2) \right) \right\rbrace \;.\label{PT1x} \end{multline} We have defined $H_{{\rm int},I}=U_0(t)\varepsilon H_{\rm int}U_0(t)^\dagger $ with $U_0(t)=\exp(\ensuremath{ \mathrm{i} } \delta H_0 t)$, and $S_\mu(x,t)=U_0(t)S_\mu U_0(t)^\dagger $, $R_\mu(t)=U_0(t)R_\mu U_0(t)^\dagger $.
This implies a structure $I_x =\varepsilon^2( n_{1,x} N+ n_{2,x} N^2) +{\cal O}(\varepsilon^3)$ of the QFI, where $n_{1,x}$ and $n_{2,x}$ can be expressed in terms of time integrals of correlation functions. However, higher orders in $\varepsilon$ limit the formal validity of the perturbation theory to sufficiently small values of $N$. Indeed, the next higher order may contain terms of order $\varepsilon^3 N^3$, which are much smaller than the second order only for $N\ll 1/\varepsilon$. Eq.(\ref{PT1x}) allows one to establish the condition for HL scaling, namely that \cite{Braun11} \begin{equation*} \begin{aligned} \int_0^t \int_0^t dt_1 dt_2 \sum_{\mu,\nu} \moyvec{{S_\nu}'(x,t_1)}{\varphi} \moyvec{{S_\mu}'(x,t_2)}{\varphi}\\ \times K_{\ket{\xi}}\left( R_\nu(t_1), R_\mu(t_2) \right)\ne 0. \end{aligned} \end{equation*} Numerics for the ZZXX model confirms the perturbative result in its expected range of validity. Moreover, it also indicates that the HL scaling persists beyond the formal range of validity of the PT. This is shown in Fig.\ref{fig:QFIglobal_x}, where we compare the global QFI for measuring $x$ for weak, medium, and strong interactions. \begin{figure*} \centering\includegraphics[width=5cm]{est_x_fct_N_qfi_globale_e_001} \centering\includegraphics[width=5cm]{est_x_fct_N_qfi_globale_e_1} \centering\includegraphics[width=5cm]{est_x_fct_N_qfi_globale_e_100_PT2} \caption{From left to right: Global QFI of the ZZXX model for $x$ for weak, medium, and strong interactions ($\varepsilon=0.001$, $1$, and $100$; $\delta=1$). Blue X-symbols: exact numerical results. Purple circles: perturbative result (PT1). Red diamonds: zeroth order (unperturbed) term in PT2. The dashed orange (resp.~green continuous) lines represent $f(N) \propto N^2$ (resp.~$N$). \label{fig:QFIglobal_x} } \end{figure*} We see that PT1 works correctly for $\varepsilon N\ll 1$.
For medium and strong interactions, PT1 still predicts a scaling of the global QFI proportional to $N^2$. While this is confirmed by the exact numerical results, the prefactors differ outside the formal range of validity of perturbation theory. The $N^2$ scaling is more easily observed for strong interactions than for weak ones, but Fig.\ref{fig:QFIglobal_x} shows that even for weak and medium interactions an $N^2$ component is already present. The onset of this behavior can clearly be identified in Fig.\ref{fig:QFIglobal_x} for $\varepsilon=0.001$ and $\varepsilon=1$. \\ For strong interactions, PT2 is appropriate for obtaining the QFI for $x$. The zeroth order term (\ref{PT20}) dominates in the range of validity of the perturbation theory, and leads to \begin{equation} I_{x}=\delta^0\varepsilon^2 t^2 \left( N^2 \sin^2(2 \alpha) \cos^2(2 \beta) +N \cos^2(2 \alpha) \right) +{\cal O}(\delta).\label{IxPT2} \end{equation} This implies HL scaling in the formal range of validity ($N \delta \ll 1$ and $\delta\ll \varepsilon$). Figure \ref{fig:QFIglobal_x} shows that eq.(\ref{IxPT2}) works well even beyond this regime. A more precise assessment of the range of validity has to consider the matrix norm of $H_{0}t$: if the largest absolute eigenvalue of $H_i$ is $\lambda_{\rm max}$, then PT2 is expected to work for $N \lambda_{\rm max}t\delta \ll 1$ and $\delta\ll \varepsilon$. \subsubsection{Estimation of $\omega_1$} The situation is similar for estimating $\omega_1$.
PT1 (i.e.~treating $\delta H_0$ as the perturbation, such that in the interaction picture $ H_{0,I} = U_{\rm int}(t)\delta H_0 U_{\rm int}(t)^\dagger $ with $U_{\rm int}(t)=\e{ \ensuremath{ \mathrm{i} } \varepsilon t H_\text{int}} $), together with the assumption that $\com{R_\nu}{R_\mu}=0, \forall \nu,\mu$, leads to a correlation function to lowest order in $\delta$ given by \begin{equation}\label{Iw1PT1} \begin{aligned} K_{\ket{\psi_0}}&({H_{0,I}}'(\omega_1,t_1),{H_{0,I}}'(\omega_1,t_2))= \\ & \delta^2\Big\{N \bra{\xi}\left(K_{\ket{\varphi}}\left(H_{i,I}^{(0)}(\omega_1,t_1) ,H_{i,I}^{(0)}(\omega_1,t_2) \right) \right)\ket{\xi} \\ +& N^2 K_{\ket{\xi}}\left( \moyvec{H_{i,I}^{(0)}(\omega_1,t_1) }{\varphi}, \moyvec{H_{i,I}^{(0)}(\omega_1,t_2) }{\varphi} \right)\Big\} \;; \end{aligned} \end{equation} with $ H_{i,I}^{(0)}(\omega_1,t_1) =U_{\rm int}(t_1) {H_i}'(\omega_1)U_{\rm int}^\dagger(t_1)$. Note that $H_{i,I}^{(0)}(\omega_1,t_1)$ is still an operator on the quantum bus after sandwiching it between probe states $\ket{\varphi}$. Eq.(\ref{Iw1PT1}) together with (\ref{pertQFI}) shows that $I_{\omega_1} $ obeys HL scaling for $\delta\, N \ll 1$. It is easier to observe HL scaling for $\delta \ll 1$, i.e.~in the regime of a small free Hamiltonian or, equivalently, strong interactions, see Fig.\ref{fig:QFIglobal_w1}. For medium interactions ($\delta=1$), HL scaling is still observed, whereas for weak interactions ($\delta=100$) SQL scaling prevails at least up to $N=2000$. Formally, the range of validity of PT1 is limited here to $N\ll 1/\delta$, but numerics indicates HL scaling up to much larger $N$.
\begin{figure*} \centering\includegraphics[width=5cm]{est_w1_fct_N_qfi_globale_d_100_N2000_PT2} \centering\includegraphics[width=5cm]{est_w1_fct_N_qfi_globale_d_1} \centering\includegraphics[width=5cm]{est_w1_fct_N_qfi_globale_d_001} \caption{From left to right: Global QFI of the ZZXX model for $\omega_1$ for weak, medium, and strong interactions ($\delta=100$, $1$, and $0.001$; $\varepsilon=1$). Blue X-symbols: exact numerical results. Purple circles: perturbative result (PT1). Red diamonds: zeroth (unperturbed) term in PT2. The dashed orange (resp.~green continuous) lines represent $f(N) \propto N^2$ (resp.~$N$). \label{fig:QFIglobal_w1} } \end{figure*} In the formal range of validity, PT1 gives the necessary condition $\com{S_{i,\nu}}{H_i}\ne 0$ for observing HL scaling, as otherwise $H_{i,I}^{(0)}$ becomes proportional to the identity operator in the quantum bus Hilbert space. Numerically it can be checked that a violation of this condition indeed leads only to SQL scaling. We verified this for the ZZZX model, defined as the ZZXX model but with $S_i=\frac{x}{2}Z^{(i)}$ instead of $S_i=\frac{x}{2}X^{(i)}$, for $\delta=0.001$, $0.1$, $1$, and $100$ and several random initial states. For weak interactions, using PT2, one finds a QFI with the structure $I_{\omega_1} = a_0\varepsilon^0 N + a_1\varepsilon^1 N^2 + a_2\varepsilon^2 N^3 +{\cal O}(\varepsilon^3)$ with some coefficients $a_i$. In the formal range of validity, the dominant term is $a_0\varepsilon^0 N$. This once more implies that SQL scaling dominates the estimation of $\omega_1$ for weak interactions, in agreement with the left plot in Fig.\ref{fig:QFIglobal_w1}. \subsubsection{Estimation of $\omega_0$} In Fig.\ref{Fig.ZZXXIw0} we show numerical results for $I_{\omega_0}$ for the ZZXX model.
We see that for $\omega_0$, coupling additional qubits to the central one not only fails to improve the best possible sensitivity once $N$ exceeds a number of order one, but in general even degrades it. A perturbative analysis in the framework of PT1 is not helpful here, as $H_{R,I}(\omega_0)$ (interaction picture with respect to $\varepsilon H_{\rm int}$) is an operator that acts non-trivially in the full Hilbert space. \begin{figure*} \centering\includegraphics[width=5cm]{est_w0_fct_N_qfi_globale_d_100} \centering\includegraphics[width=5cm]{est_w0_fct_N_qfi_globale_d_1} \centering\includegraphics[width=5cm]{est_w0_fct_N_qfi_globale_d_001} \caption{From left to right: Global QFI of the ZZXX model for $\omega_0$ for weak, medium, and strong interactions ($\delta=100$, $1$, and $0.001$; $\varepsilon=1$). Blue X-symbols: exact global QFI for $ \omega_0$. The dashed orange (resp.~green continuous) lines represent $f(N) \propto N^2$ (resp.~$N$). Same state as in Fig.\ref{fig:QFIglobal_w1}. }\label{Fig.ZZXXIw0} \end{figure*} \subsection{Local QFI and local quantum bus observable} As we have seen in the last section, the global QFI indicates that HL scaling can be observed with a Hamiltonian of the form (\ref{eq:HgenSR}) and an initial product state. We now investigate whether measuring only the quantum bus is enough to achieve the HL. To this end, we calculate the QFI of the reduced density matrix of the quantum bus, as well as the uncertainties of the parameter estimates based on a specific observable $A$ of the quantum bus. We do not investigate the estimation of $\omega_0$ any further, since already the global QFI shows that the sensitivity cannot be improved by coupling to additional qubits. \subsubsection{Estimation of $x$} The behavior of $\delta_{x}^{A^{(0)}}$ was analyzed in second order perturbation theory in \cite{Braun11}.
Within its range of validity, HL scaling was found under the condition of a noiseless observable of the quantum bus that remains noiseless in the absence of interaction with the probes. Here we relax these conditions and give a more general form in the appendix, eqs.(\ref{eq:varA},\ref{eq:dA}), together with (\ref{eq_deltaE}). Fig.\ref{loc_est_Ix_petit} shows HL scaling of the sensitivity for weak interactions ($\varepsilon=0.001$) for $N\lesssim 500$ and a measurement of the quantum bus observable $A^{(0)}=(X^{(0)}+Z^{(0)})/2$. The perturbative result for $(\delta_{x}^{A^{(0)}})^{-2}$ agrees very well with the exact numerical result in this regime. We also see that the local QFI provides an upper bound to $(\delta_{x}^{A^{(0)}})^{-2}$. However, the local QFI is rather small in the range of $N$ accessible to exact numerical evaluation, such that at least for these values of $N$ the observed HL scaling is not of much use. For larger $\varepsilon$, PT1 quickly breaks down, as is shown in Fig.\ref{loc_est_Ix_petit} for medium and strong interactions: For $\varepsilon=0.1$ the breakdown occurs at $N\simeq 10$, compatible with $N\simeq 1/\varepsilon=10$. For $\varepsilon=100$, PT1 is already invalid in the sense that the QFI becomes negative at $N=1$, and we therefore do not plot it. Moreover, the exact numerical values of both $1/\delta_{x}^{A^{(0)}}$ and $I_{x}^{(0)}$ show that for strong and medium interactions the sensitivity achievable through a measurement of the quantum bus alone {\em deteriorates} with increasing $N$ for large enough $N$.
\begin{figure*} \centering\includegraphics[width=5cm]{est_x_fct_N_qfi_locale_e_001} \centering\includegraphics[width=5cm]{est_x_fct_N_qfi_locale_e_01} \centering\includegraphics[width=5cm]{est_x_fct_N_qfi_locale_e_100} \caption{Local QFI and inverse squared uncertainties of $x$ based on the local observable $A^{(0)}=(X^{(0)}+ Z^{(0)})/2$ for the ZZXX model for weak, medium, and strong interactions, $\varepsilon=0.001$, $0.1$, and $100$, with $\delta=1$, from left to right. Blue X-symbols: exact numerical result for $I_x^{(0)}$. Purple circles: perturbative solution for $(\delta_x^{A^{(0)}})^{-2}$. Red crosses: exact solution for $(\delta_x^{A^{(0)}})^{-2}$. The dashed orange (resp.~green continuous) lines represent $f(N) \propto N^2$ (resp.~$N$). Same state as in Fig.\ref{fig:QFIglobal_w1}. }\label{loc_est_Ix_petit} \end{figure*} \subsubsection{Estimation of $\omega_1$} A perturbative result for $\delta_{\omega_1}^{A^{(0)}}$ could be obtained in the case $\com{A^{(0)}}{R_\nu}=0$, $\com{H_R}{R_\nu}=0, \forall \nu$, and $\com{R_\nu}{R_\mu}=0, \forall \nu,\mu$. The first condition prevents $A^{(0)}$ in the interaction picture from acting non-trivially in the full Hilbert space. The second and third conditions do the same for $H_{0}$ in the interaction picture. However, all three assumptions together lead to a diverging $\delta_{\omega_1}^{A^{(0)}}$, as they imply $\partial_{\omega_1}\langle A^{(0)}\rangle=0$. Numerically we can explore the more general case where these conditions are relaxed. The results are shown in Fig. \ref{loc_est_Iw_petit}. For strong and medium interactions ($\delta=0.001$ and $\delta=1$), while the global QFI shows HL scaling, the local QFI goes to zero, making it impossible to estimate $\omega_1$ by a local measurement. For weak interactions ($\delta=100$), the local QFI shows neither clear HL nor SQL scaling.
Nevertheless, the SQL behavior of the global QFI sets an upper bound on the local QFI. \begin{figure*} \centering\includegraphics[width=5cm]{est_w1_fct_N_qfi_locale_d_100} \centering\includegraphics[width=5cm]{est_w1_fct_N_qfi_locale_d_1} \centering\includegraphics[width=5cm]{est_w1_fct_N_qfi_locale_d_001} \caption{Local QFI and inverse squared uncertainties of $\omega_1$ based on the local observable $A^{(0)}=(X^{(0)}+Z^{(0)})/2$ for the ZZXX model for weak, medium, and strong interactions, $\delta=100$, $1$, and $0.001$, with $\varepsilon=1$, from left to right. Blue X-symbols: exact numerical result for $I_{\omega_1}^{(0)}$. Purple circles: perturbative result for $(\delta_{\omega_1}^{A^{(0)}})^{-2}$. Red crosses: exact result for $(\delta_{\omega_1}^{A^{(0)}})^{-2}$. The dashed orange (resp.~green continuous) lines represent $f(N) \propto N^2$ (resp.~$N$). The saturation at $\sim 10^{-16}$ reached in the right plot corresponds to the numerical precision. Same state as in Fig.\ref{fig:QFIglobal_w1}. }\label{loc_est_Iw_petit} \end{figure*} \subsection{Exact results for the ZZZZ model} \subsubsection{Pure product state} In order to corroborate the above results, we calculated the QFI for the different parameters as well as $\delta_{x}^{A^{(0)}}$, $I_x^{(0)}$, and $I_{\omega_1}^{(0)}$ exactly for the ZZZZ model. The expressions of the global QFI for the initial state (\ref{eq:psijm}) are given by \begin{eqnarray} I_x&=&N^2 t^2 \varepsilon^2 \cos^2(2\alpha) \sin^2(2\beta)+N t^2 \varepsilon^2 \sin^2(2\alpha) \;\label{eq:QFIxgl}\\ I_{\omega_1}&=&N\delta^2t^2\sin^2(2\alpha)\label{eq:QFIw1gl}\\ I_{\omega_0}&=&\delta^2t^2\sin^2(2\beta)\,. \end{eqnarray} $I_x$ clearly shows HL scaling as long as $\cos(2\alpha)\sin(2\beta)\ne 0$, while $\omega_1$ can only be measured with a sensitivity that scales as the SQL. The best possible estimation of $\omega_0$ does not profit from the quantum probes at all, as $I_{\omega_0}$ is independent of $N$; as for the ZZXX model, we do not investigate it any further.
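The three expressions above are easy to check numerically. Since all terms of the ZZZZ Hamiltonian commute, the pure-state QFI reduces to $I_\theta=4t^2\,{\rm Var}(\partial_\theta H)$ in the initial state. The sketch below is our own check; the reconstructed Hamiltonian $H=\frac{\delta}{2}(\omega_0 Z^{(0)}+\omega_1\sum_i Z^{(i)})+\frac{\varepsilon x}{2}\sum_i Z^{(i)}Z^{(0)}$, and in particular its prefactors, is an assumption on our part (it reproduces all three formulas):

```python
import numpy as np
from itertools import product

# Exact global QFIs of the (reconstructed) diagonal ZZZZ model,
#   H = (delta/2)(w0 Z0 + w1 sum_i Zi) + (eps x/2) sum_i Zi Z0,
# via I_theta = 4 t^2 Var(dH/dtheta) in the product state with probe
# angle alpha and bus angle beta (the phases drop out for diagonal H).
def qfi_zzzz(N, alpha, beta, t=1.0, eps=1.0, delta=1.0):
    p_probe = np.array([np.cos(alpha)**2, np.sin(alpha)**2])
    p_bus = np.array([np.cos(beta)**2, np.sin(beta)**2])
    qfis = {}
    for param in ('x', 'w1', 'w0'):
        mean = mean2 = 0.0
        for cfg in product((0, 1), repeat=N + 1):
            s = [1 - 2 * b for b in cfg]           # Z eigenvalues; cfg[0] = bus
            prob = p_bus[cfg[0]]
            for b in cfg[1:]:
                prob *= p_probe[b]
            if param == 'x':
                g = 0.5 * eps * sum(s[1:]) * s[0]  # dH/dx
            elif param == 'w1':
                g = 0.5 * delta * sum(s[1:])       # dH/dw1
            else:
                g = 0.5 * delta * s[0]             # dH/dw0
            mean += prob * g
            mean2 += prob * g * g
        qfis[param] = 4.0 * t**2 * (mean2 - mean**2)
    return qfis
```

For $N=3$, $\alpha=\pi/3$, $\beta=\pi/6$ this reproduces eqs.(\ref{eq:QFIxgl})--(\ref{eq:QFIw1gl}) and the $\omega_0$ expression to machine precision.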
All global QFIs show a scaling $\propto t^2$, demonstrating that the sensitivity per square root of Hertz can still be improved by measuring for longer times, in contrast to the typical time dependence of classical averaging. The general results for the local quantities are cumbersome, with the exception of $I_{\omega_1}^{(0)}$, which vanishes for all initial states (\ref{eq:state}), as the reduced density matrix of the quantum bus does not depend on $\omega_1$, see eq.(\ref{reducedrho}) in the Appendix. For the estimation of $x$ we give the reduced density matrix and the uncertainty obtained via a measurement of $X^{(0)}$ in the appendix. Here, we provide results for two specific initial states. The most favorable case for the estimation of $x$, $\alpha=0$, $\beta=\pi/4$, i.e. \begin{equation} \ket{\psi_0}=\ket{0}^{\otimes N}\otimes \left(\ket{0}+\ket{1} \right)/ \sqrt{2} \;, \label{favori} \end{equation} leads to the global QFI $I_x = N^2 \varepsilon^2 t^2$. For the local QFI we have $I_x^{(0)} = N^2 \varepsilon^2 t^2$. We notice that $I_x=I_x^{(0)}$, i.e.~restricting ourselves to a measurement of the quantum bus does not affect the best possible sensitivity for the estimation of $x$, and that the precision follows HL scaling. Moreover, one can easily show that the corresponding QCR bound can be reached by measuring $X^{(0)}$. Now consider the initial state with $\alpha=\pi /4$, $\beta=\pi /4$, and $\phi=\varphi =0$, i.e. \begin{equation} \ket{\psi_0}=\left(\ket{0}+\ket{1}\right)^{\otimes N}\otimes \left(\ket{0}+\ket{1}\right)/2^{(N+1)/2}\;.\label{statebad} \end{equation} This is the worst pure state for measuring $x$. We obtain \begin{equation} I_x= N t^2 \varepsilon^2 \;, \end{equation} for the global QFI, i.e.~SQL scaling, and \begin{equation} I_x^{(0)} =\frac{N^2 t^2 \varepsilon^2 \tan^2(\varepsilon t x )}{ \cos(\varepsilon t x)^{-2N} - 1}\label{eq:qfibus} \end{equation} for the QFI of the quantum bus.
For $\varepsilon t x=\pi/2$ the local QFI vanishes, which can be understood from the fact that the reduced density matrix then does not depend on $x$. For the uncertainty of $x$ based on a measurement of $X^{(0)}$, we find the exact result \begin{equation} \left( \delta_{x, \text{exact}}^{X^{(0)}} \right)^{-2} = \frac{N^2 t^2 \varepsilon^2 \tan^2(\varepsilon t x)}{\cos(\varepsilon t x)^{-2N}\cos(\delta \omega_0 t)^{-2}-1 } \,.\label{eq:dxX0exact} \end{equation} This shows that for the initial state (\ref{statebad}), both the local QFI (\ref{eq:qfibus}) and $(\delta_{x, \text{exact}}^{X^{(0)}})^{-2}$ decay exponentially with $N$ for sufficiently large $N$, i.e.~this state is not suited for coherent averaging if we can only measure the quantum bus. It is instructive to calculate this uncertainty with PT1, which leads to \begin{equation} (\delta_{x, \text{pert}}^{X^{(0)}})^{-2}= \frac{N^2 t^4 \varepsilon^4 x^2}{N t^2 x^2 \varepsilon^2 +\tan^2(\delta \omega_0 t)} \;. \end{equation} If we expand $I_x^{(0)}$ in powers of $\varepsilon$, we find $I_x^{(0)} = N t^2 \varepsilon^2 +\mathcal{O}( \varepsilon^4)$. The exact result for $(\delta_{x, \text{exact}}^{X^{(0)}})^{-2}$ reflects the behavior of $I_x^{(0)}$, whereas the perturbative version, $(\delta_{x, \text{pert}}^{X^{(0)}})^{-2}$, predicts a completely different result, namely a scaling $\propto N$ for large $N$. If one stays in the range of validity of PT1, one does not notice that the uncertainty diverges. \\ Therefore, for the initial state (\ref{statebad}), the validity of the perturbative expressions for the uncertainty of an observable of the quantum bus and for the local QFI {\em does} break down outside the range of validity of PT1, in contrast to the global QFI, where the perturbative expression still predicts the correct scaling behavior and only differs from the exact result in the prefactor.
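Eq.(\ref{eq:qfibus}) can be reproduced by an explicit partial trace for small $N$. The sketch below is our own check; it uses the reconstructed diagonal ZZZZ Hamiltonian $H=\frac{\delta}{2}(\omega_0 Z^{(0)}+\omega_1\sum_i Z^{(i)})+\frac{\varepsilon x}{2}\sum_i Z^{(i)}Z^{(0)}$ (prefactors assumed), propagates the state (\ref{statebad}), traces out the probes, and extracts the local QFI from a finite difference of the Bures distance with the closed-form qubit fidelity:

```python
import numpy as np

# Local QFI for the state "statebad": exact diagonal evolution of |+>^{N+1}
# under the reconstructed ZZZZ Hamiltonian, partial trace over the probes,
# then I = 4 d_B^2/dx^2 using the closed-form 2x2 fidelity.

def rho_bus(N, x, eps=1.0, delta=1.0, w0=1.0, w1=1.0, t=1.0):
    bits = (np.arange(2**N)[:, None] >> np.arange(N)) & 1
    m = (1 - 2 * bits).sum(axis=1)               # sum of probe Z eigenvalues
    psi = np.empty((2, 2**N), dtype=complex)
    for k, s0 in enumerate((1, -1)):             # bus Z eigenvalue
        phase = 0.5 * t * (delta * w0 * s0 + delta * w1 * m + eps * x * m * s0)
        psi[k] = np.exp(-1j * phase) / np.sqrt(2.0**(N + 1))
    return psi @ psi.conj().T                    # 2x2 reduced density matrix

def local_qfi(N, x, dx=1e-4):
    rho, sig = rho_bus(N, x), rho_bus(N, x + dx)
    f = np.sqrt(np.trace(rho @ sig).real
                + 2.0 * np.sqrt(abs(np.linalg.det(rho) * np.linalg.det(sig))))
    return 8.0 * (1.0 - f) / dx**2
```

For, e.g., $N=4$ and $\varepsilon t x=0.3$ the numerical value agrees with eq.(\ref{eq:qfibus}) to the accuracy of the finite difference.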
The decaying local QFI shows that for this initial state the coherent averaging scheme does not allow one to reach HL scaling for the estimation of $x$ through a measurement of the quantum bus only. \\ In order to find out how generic the decaying local QFI is, we investigated how the scaling with $N$ depends on the angle $\alpha$ that defines the state of the probes, eq.(\ref{eq:psijm}). We keep $\beta=\pi/4$, $\phi=\varphi=0$. Figure \ref{Ix_loc_fct_a} shows the local QFI for $x$ as a function of $\alpha$ and $N$. We see that when increasing $\alpha$ from zero, the QFI starts to decrease with $N$ for $N$ larger than some bound $N_0(\alpha)$, as in the case just studied ($\alpha=\pi/4$). The figure also indicates that with increasing $N$ the range of $\alpha$ leading to a non--decreasing QFI shrinks more and more. This shows that over an ensemble of initial states, a local QFI for the estimation of $x$ that decreases with $N$ is the norm, and HL scaling for the optimal state the exception. \begin{figure} \centering\includegraphics[width=5cm]{contour_a_0_04N_1000_handcontour_contourlabel_v2} \caption{Local QFI for the ZZZZ model for $x$ as a function of $\alpha$ and $N$, with $\varepsilon=\delta=1$, $t=1$, $x=1$, $\omega_0=1$, $\omega_1 =1$, $\beta=\pi/4$, $\phi=0$, $\varphi=0$. The contours are at $I_x^{(0)}=\{10^4,10^2,1,10^{-5},10^{-10},10^{-30},10^{-60},10^{-100},10^{-150}\}$. }\label{Ix_loc_fct_a} \end{figure} \subsubsection{Thermal state for the probes} In order to answer the question of how a lack of purity of the initial state affects our results, we take the $N$ probes in a thermal state \begin{equation} \rho_{\text{th}} = \frac{1}{Z}\begin{pmatrix} \e{- \beta_\text{th} \omega_1} & 0 \\ 0& \e{\beta_\text{th} \omega_1} \end{pmatrix}\label{eq:pmatrix} \end{equation} with $ Z=\e{-\beta_\text{th} \omega_1}+\e{+\beta_\text{th} \omega_1}$ and $\beta_{\text{th}}=1/(k_B T)$, where $T$ is the temperature and $k_B$ the Boltzmann constant.
The quantum bus is in a pure state $\ket{\psi_\text{bus}}= \cos(\beta)\ket{0} +\sin(\beta)\e{\ensuremath{ \mathrm{i} } \varphi}\ket{1}$, and the new initial state is the mixed product state \begin{equation}\label{eq:th} \rho_0 = \rho_\text{th}^{\otimes N}\otimes \dens{\psi_\text{bus}}{\psi_\text{bus}}. \end{equation} This resembles the DQC1 protocol in quantum information processing, which starts with all qubits but one in a fully mixed state but still allows one to solve a certain task more efficiently than with a classical computer (the ``power of one qubit'') \cite{knill_power_1998,lanyon_experimental_2008}. The exact results for the global QFI read \begin{equation} \begin{aligned} I_x=\sin^2(2 \beta) \varepsilon^2 t^2 (N^2 \tanh^2( \beta_\text{th} \omega_1)+N(1-\tanh^2( \beta_\text{th} \omega_1)))\\ I_{\omega_1}=N \beta_\text{th}^2( 1-\tanh^2( \beta_\text{th} \omega_1)), \text{ and }\\ I_{\omega_0}=\delta^2 t^2 \sin^2(2 \beta). \end{aligned} \end{equation} This shows that it is possible to reach HL scaling for the estimation of $x$ using thermal states of the probes, even though the prefactor of the $N^2$ term becomes small for large temperatures ($\beta_\text{th}\omega_1\ll 1$). The level spacing of the probes can only be estimated with a sensitivity scaling as the SQL, and the thermal probes are entirely useless for improving the estimation of the level spacing of the quantum bus. Remarkably, the reduced density matrix has the same form as the one for the pure product state (\ref{eq:state}) when setting \begin{equation} \cos^2(\alpha) =\e{-\beta_\text{th} \omega_1}/Z \;\; \text{ and } \; \sin^2(\alpha)=\e{\beta_\text{th} \omega_1}/Z. \end{equation} This implies that for any pure product state (\ref{eq:state}) there exists a thermal state of the probes with the same $I_x^{(0)}$, and hence the same best possible sensitivity for estimating $x$ by measuring the quantum bus.
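The thermal-state expression for $I_x$ can likewise be checked by brute force for small $N$. Since the reconstructed ZZZZ Hamiltonian (our assumption, as above) is diagonal, $\rho(x)=\e{-\ensuremath{ \mathrm{i} } H t}\rho_0\e{\ensuremath{ \mathrm{i} } H t}$ is a unitary family with effective generator $G=t\,\partial_x H$, so eq.(\ref{QFI_diag}) reduces to $I_x=2\sum_{n,m}\frac{(p_n-p_m)^2}{p_n+p_m}\vert\langle n\vert G\vert m\rangle\vert^2$ in the eigenbasis of $\rho_0$:

```python
import numpy as np

# Mixed-state QFI for x in the (reconstructed) ZZZZ model with thermal probes,
# via the eigenbasis form of the QFI for a unitary family, G = t dH/dx.
def thermal_qfi_x(N, beta, btw1, eps=1.0, t=1.0):
    probs = np.array([np.exp(-btw1), np.exp(btw1)])
    probs /= probs.sum()                           # thermal probe populations
    bus = np.array([np.cos(beta), np.sin(beta)], dtype=complex)
    rho = np.diag(probs)
    for _ in range(N - 1):                         # rho_th^{(x)N}
        rho = np.kron(rho, np.diag(probs))
    rho = np.kron(rho, np.outer(bus, bus.conj()))  # ... (x) pure bus
    dim = 2**(N + 1)
    s = 1 - 2 * ((np.arange(dim)[:, None] >> np.arange(N, -1, -1)) & 1)
    G = np.diag(t * 0.5 * eps * s[:, :N].sum(axis=1) * s[:, N]).astype(complex)
    p, V = np.linalg.eigh(rho)
    M = V.conj().T @ G @ V
    I = 0.0
    for n in range(dim):
        for m in range(dim):
            if p[n] + p[m] > 1e-12:                # skip vanishing denominators
                I += 2.0 * (p[n] - p[m])**2 / (p[n] + p[m]) * abs(M[n, m])**2
    return I
```

For, e.g., $N=3$, $\beta=\pi/6$, $\beta_{\rm th}\omega_1=0.7$ this reproduces the expression $\sin^2(2\beta)\varepsilon^2t^2(N^2\tanh^2(\beta_{\rm th}\omega_1)+N(1-\tanh^2(\beta_{\rm th}\omega_1)))$ given above.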
For locally estimating $\omega_1$, a thermal state of the probes is advantageous compared to the pure state (\ref{favori}), for which the corresponding local QFI vanishes. The thermal state introduces a dependence on $\omega_1$ through the initial state that is absent for the pure states considered. If the quantum bus is also initially in a thermal state, the interaction strength cannot be measured in the ZZZZ model. \section{Summary} In summary, we have examined in detail a coherent averaging scheme for its usefulness for Heisenberg-limited precision measurements. In the scheme, $N$ probes, initially in a product state, interact with a quantum bus, and one measures either the latter or the entire system. Combining analytical results from perturbation theory and an exactly solvable dephasing model with numerical results, we have shown that this setup allows one to measure the interaction strength and the level spacing of the probes with HL sensitivity if one has access to the entire system. Strong interactions favor better sensitivities in this case. If one has access only to the quantum bus, the results depend on the initial state, and HL sensitivity is achievable only for the interaction strength and only for a small set of initial states. Remarkably, for measuring the interaction strength in the exactly solvable ZZZZ model, there is a mapping of the local quantum Fisher information for thermal states of the probes to the one for pure states. Globally, HL sensitivity for estimating the interaction strength can be achieved with thermal probes at any finite temperature, as long as the quantum bus can be brought into an initially pure state. The sensitivity of measurements of the level spacing of the quantum bus cannot be improved by coupling it to many probes, even with access to the entire system, and in fact deteriorates with an increasing number of probes.
Altogether, our investigations have led to a broader and more detailed view of the usefulness of the coherent averaging scheme and may open the path to experimental implementation. \section{Appendices} \subsection{Uncertainty of a local observable} If one relaxes the condition used in \cite{Braun11}, namely that $A \ket{\xi}=a_\xi \ket{\xi}$ and $[A^{(0)},H_R]=0$, one finds for the variance of the observable $A^{(0)}$ \begin{eqnarray} &&\text{Var}(A^{(0)})=\moy{A^2}-\moy{A}^2 + \ensuremath{ \mathrm{i} }\varepsilon \int_0^t dt_1 N\moy{S_\nu (t_1)} \moy{[R_\nu(t_1),B ]}\nonumber\\ &&+ \varepsilon^2\int_0^t \int_0^{t_1} dt_1 dt_2\left\{ (N(N-1) \moy{S_\nu(t_1)} \moy{S_\mu(t_2)} \right.\nonumber\\ &&+N \moy{S_\nu(t_1) S_\mu(t_2)})\moy{[R_\nu(t_1),B]R_\mu(t_2)} \nonumber\\ && +(N(N-1) \moy{S_\nu(t_1)} \moy{S_\mu(t_2)}+N \moy{S_\mu(t_2) S_\nu(t_1)})\nonumber\\ &&\left. \moy{R_\mu(t_2)[B,R_\nu(t_1)]}\right\} +\varepsilon^2\int_0^t \int_0^t dt_1 dt_2 N^2 \moy{S_\nu(t_1)} \nonumber\\ &&\moy{S_\mu(t_2)}\moy{[R_\nu(t_1),A]}\moy{[R_\mu(t_2),A]} \label{eq:varA} \end{eqnarray} where $B=A^2-2\moy{A}A$, the expectation values for $A$, $B$, and $R_\mu(t)$ are taken with respect to $\ket{\xi}$, and the expectation value for $S_\mu(t)\equiv S_\mu(x,t) $ is with respect to $\ket{\varphi}$. The derivative of the mean value of $A^{(0)}$ is given by \begin{equation}\label{eq:dA} \begin{aligned} &\frac{\partial}{\partial \theta}\langle A^{(0)}\rangle = \frac{\partial}{\partial \theta} \left( \ensuremath{ \mathrm{i} } \varepsilon\int_0^t dt_1 N\moy{S_\nu (t_1)} \moy{[R_\nu(t_1),A ]}\right. \\&+ \varepsilon^2\int_0^t \int_0^{t_1} dt_1 dt_2\left\{ (N(N-1) \moy{S_\nu(t_1)} \moy{S_\mu(t_2)} \right.\\ &+N \moy{S_\nu(t_1) S_\mu(t_2)})\moy{[R_\nu(t_1),A]R_\mu(t_2)} \\ & +(N(N-1) \moy{S_\nu(t_1)} \moy{S_\mu(t_2)}+N \moy{S_\mu(t_2) S_\nu(t_1)})\\ &\left. \moy{R_\mu(t_2)[A,R_\nu(t_1)]}\right\} \Big) \;. 
\end{aligned} \end{equation} From these two quantities we obtain $\delta_x^{A^{(0)}}$ according to eq.(\ref{eq_deltaE}). \subsection{Local analysis of ZZZZ} The reduced density matrix $\rho^{(0)}$ for the ZZZZ model starting in a pure product state (\ref{eq:psijm}) has the matrix elements \begin{eqnarray} \rho^{(0)}_{00}&=&\cos^2(\beta)\\ \rho^{(0)}_{11}&=&\sin^2(\beta)\\ \rho^{(0)}_{01}&=&\frac{1}{2}\sin(2 \beta)\e{- \ensuremath{ \mathrm{i} } (\varphi+\delta \omega_0 t) }(\cos^2(\alpha) \e{-\ensuremath{ \mathrm{i} } \varepsilon x t}+\sin^2(\alpha) \e{\ensuremath{ \mathrm{i} } \varepsilon x t})^N\,,\label{reducedrho} \end{eqnarray} from which one can easily compute the local QFI. The relative uncertainty for $x$ using a measurement $X^{(0)} $ is: \begin{widetext} \begin{equation} (\delta_{x, \text{pert}}^{X^{(0)}})^{2}=\frac{1-\left( \sin(2 \beta)\sum_{m=-N/2}^{N/2} \binom{N}{m+N/2}\cos(\alpha)^{N+2m} \sin(\alpha)^{N-2m} \cos( \delta \omega_0 t+\varphi+2 \varepsilon x t m) \right)^2}{\left| 2 \varepsilon t \sin(2 \beta)\sum_{m=-N/2}^{N/2} \binom{N}{m+N/2} m \cos(\alpha)^{N+2m} \sin(\alpha)^{N-2m} \sin( \delta \omega_0 t+\varphi+2 \varepsilon x t m) \right|^2} \;. \end{equation} \end{widetext}
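For finite $N$ the local QFI can also be obtained numerically from the $2\times 2$ reduced density matrix (\ref{reducedrho}); a minimal sketch (the helper functions and the central finite-difference derivative below are ours):

```python
import numpy as np

def local_qfi(rho, drho, tol=1e-12):
    """QFI from rho and drho = d(rho)/d(theta), via the eigenbasis formula
    I = sum_{i,j: lam_i + lam_j > 0} 2 |<i|drho|j>|^2 / (lam_i + lam_j)."""
    lam, vec = np.linalg.eigh(rho)
    d = vec.conj().T @ drho @ vec  # drho in the eigenbasis of rho
    qfi = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > tol:
                qfi += 2 * abs(d[i, j]) ** 2 / s
    return qfi

def rho0(x, N, alpha, beta, eps=1.0, t=1.0, phi=0.0, dw0=0.0):
    """Reduced density matrix of the quantum bus, eq. (reducedrho)."""
    c = (np.cos(alpha) ** 2 * np.exp(-1j * eps * x * t)
         + np.sin(alpha) ** 2 * np.exp(1j * eps * x * t)) ** N
    r01 = 0.5 * np.sin(2 * beta) * np.exp(-1j * (phi + dw0 * t)) * c
    return np.array([[np.cos(beta) ** 2 + 0j, r01],
                     [np.conj(r01), np.sin(beta) ** 2 + 0j]])

# Local QFI for x at x = 1 with N = 4 probes and alpha = beta = pi/4
x, h, args = 1.0, 1e-6, (4, np.pi / 4, np.pi / 4)
drho = (rho0(x + h, *args) - rho0(x - h, *args)) / (2 * h)
print(local_qfi(rho0(x, *args), drho))
```

Such a numerical evaluation is a useful cross-check of the closed-form results, since the eigenbasis formula makes no assumption about the purity of the reduced state.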
\section{Introduction} \label{intr} No two flares are identical; nevertheless, it is useful to classify flares according to some scheme. The most commonly accepted classification was introduced by \inlinecite{pal77} on the basis of soft X-ray images obtained by the S-054 experiment on board {\em Skylab}. The authors proposed two separate classes of events, namely compact flares (class 1) and flares occurring in large and diffuse systems of loops (class 2). They found that the separation is supported by the different values of several physical parameters such as height, volume, energy density, and characteristic times of rise, decay, and duration. They also noticed that flares of class 1 are located very low in active regions and, unlike flares of class 2, do not appear to be associated with coronal mass ejections (CMEs) or prominence eruptions and activations. A division into two mutually exclusive classes is called a dichotomy; therefore, we can briefly refer to the classification of \inlinecite{pal77} as the flare dichotomy. The flare dichotomy has been supported by several classifications, {\it e.g.} impulsive {\it vs.} long-duration flares, single-loop {\it vs.} arcade flares, confined {\it vs.} eruptive flares, or two-point {\it vs.} two-ribbon flares. Beyond any doubt the division into two classes is very rough; therefore, observed flares may share features of both class 1 and class 2. \inlinecite{sve89} introduced for them the term {\em flare hybrids}. What does a flare hybrid look like? Its evolution can be divided into two phases: during phase 1 it looks like a flare of class 1, and during phase 2 it looks like a flare of class 2. \inlinecite{sve89}\hspace*{-1.5mm}, recalling a private communication of Cornelius de Jager, suggested that a flare of class 1 may serve as a trigger of a flare of class 2. He also asked about the process which causes the magnetic field to open and thus start a flare of class 2.
Further observations made with many instruments at different wavelengths have provided a more complete picture of flare hybrids. In Section~\ref{obs} we will present the characteristic features of flare hybrids in the soft X-ray (SXR), hard X-ray (HXR), and extreme-ultraviolet (EUV) ranges, respectively. In Section~\ref{freq} some rules concerning their frequency of occurrence will be given. The magnetic configuration suggested for flare hybrids will be described in Section~\ref{conf}. In Section~\ref{conc} the most likely scenario for a flare hybrid will be proposed. \section{Observations} \label{obs} \subsection{Soft X-rays} \label{sxr} In Figure~\ref{nov5s} we present an example of a flare hybrid, SOL1992-11-05T06:22 (M2.0), observed by the {\it Soft X-ray Telescope} (SXT: \opencite{tsu91}) on board the {\sl Yohkoh} satellite. Three SXR images made with the AlMg filter are given: during phase 1 (Figure~\ref{nov5s}a), at an intermediate time (Figure~\ref{nov5s}b), and during phase 2 (Figure~\ref{nov5s}c). As we can see, during phase 1 the SXR emission of the flare is dominated by a small ($h \approx 10^4$~km) system of bright loops. Later on, a higher magnetic arcade ($h \approx 5 \times 10^4$~km) is seen, which shines in SXRs during phase 2. In each image the borders of the two areas, 1 and 2, within which the SXR signal appeared are marked. Light curves for these areas, as well as the total signal from the full images, are presented in Figure~\ref{nov5s}d. Time gaps in the light curves are caused mainly by the satellite night. As we can see, the light curves for areas 1 and 2 are different, but together they compose a double-peak shape. The same double-peak light curve was recorded by the GOES satellites (Figure~\ref{nov5s}e), where the time interval of the satellite night is also marked. \begin{figure} \centerline{\includegraphics[width=1.1\textwidth]{hybrids_fig1.eps}} \hfill \caption{a)-c) The SXT/AlMg images of the flare hybrid SOL1992-11-05T06:22 (M2.0).
The intensity scale is reversed. The solid line shows the solar limb, the double solid lines show the borders of areas 1 and 2, shining in phases 1 and 2, respectively. d) The SXT/AlMg light curves for areas 1 (diamonds), 2 (boxes), and the total signal (crosses). e) The GOES light curves (upper curve -- 1\,--\,8 \AA\ range, lower curve -- 0.5\,--\,4 \AA\ range). The hatched areas show the {\sl Yohkoh} satellite nights. } \label{nov5s} \end{figure} However, a double-peak GOES light curve cannot be considered a typical signature of a flare hybrid. Another example of a flare hybrid, SOL1992-09-09T18:03 (M1.9), observed by the SXT is given in Figure~\ref{sep9s}. The panels in the figure are organized in the same way as in Figure~\ref{nov5s}. The difference is the choice of another SXT filter, Al12. The evolution of the flare presented in Figure~\ref{sep9s} is very similar to that presented in Figure~\ref{nov5s}. During phase 1 the SXR emission of the flare is dominated by the smaller area 1 around a system of lower loops, and during phase 2 the emission from the larger area 2 around a higher magnetic arcade dominates. Light curves for areas 1 and 2 (Figure~\ref{sep9s}d) have their maxima shifted in time as in Figure~\ref{nov5s}d, but this time together they compose only a single-peak light curve, also seen in the GOES record (Figure~\ref{sep9s}e). \begin{figure} \centerline{\includegraphics[width=1.1\textwidth]{hybrids_fig2.eps}} \hfill \caption{a)-c) The SXT/Al12 images of the flare hybrid SOL1992-09-09T18:03 (M1.9). d) The SXT/Al12 light curves for areas 1, 2, and the total signal. e) The GOES light curves. For more explanations, see caption to Figure~\ref{nov5s}.} \label{sep9s} \end{figure} We investigated nine flare hybrids well-observed by {\sl Yohkoh} (Table~\ref{list}).
In each case the evolution seen in SXT images looked very similar, namely during phase 1 the emission was concentrated in a system of rather small loops, while during phase 2 the emission came from a larger arcade of loops. However, only the flare hybrid from Figure~\ref{nov5s} had a double-peak GOES light curve, while for the other events the GOES recorded single-peak light curves. Thus, we conclude that an intrinsic feature of a flare hybrid GOES light-curve is a rather strong asymmetry formed by a short rise, typical for flares of class 1, followed by a slow decay, typical for flares of class 2. For events No. 1 to 6, 8, and 9 from Table~\ref{list} we found that the rise phase was 5 to 20 times shorter than the decay phase. \begin{table} \caption{List of investigated flare hybrids} \label{list} \begin{tabular}{cccccc} \hline No. & Date & Max. time & GOES & Coordinates & NOAA AR \\ & & [UT] & class & & \\ \hline 1 & 30-Jan-92 & 17:15 & M1.6 & S12\,E84 & 7042 \\ 2 & 8-Jul-92 & 09:50 & X1.2 & S11\,E46 & 7220 \\ 3 & 11-Aug-92 & 22:28 & M1.4 & N16\,E90+ & 7260 \\ 4 & 21-Aug-92 & 11:10 & M1.0 & N14\,W40 & 7260 \\ 5 & 6-Sep-92 & 09:07 & M3.3 & S11\,W38 & 7270 \\ 6 & 9-Sep-92 & 18:03 & M1.9 & S11\,W78 & 7270 \\ 7 & 5-Nov-92 & 06:22 & M2.0 & S18\,W90+ & 7323 \\ 8 & 9-Oct-93 & 08:11 & M1.1 & N11\,W78 & 7590 \\ 9 & 22-Sep-97 & 14:16 & C4.7 & S28\,E43 & 8088 \\ \hline \end{tabular} \end{table} We used SXT data to calculate values of some parameters averaged over the two systems of loops forming the investigated flare hybrids. For this aim we used the filter ratio method \cite{har92} employing the Be119 and Al12 image pairs as the first choice, or the Al12 and Al.1 image pairs when the Be119 images were not available. In this way we obtained a set of values characterizing the evolution of the temperature, $T$, and emission measure, $EM$. We estimated the total volume, $V$, of both components, hence values of the electron number density, $N_{\rm e}=\sqrt{EM/V}$, became available.
Next, the total thermal energy, $E_{\rm th} = 3\,N_{\rm e}\,k_{\rm B}\,T\,V$, was calculated, where $k_{\rm B}$ is the Boltzmann constant. Finally, the heating rate {\it per} unit volume, $E_{\rm H} = (dE_{\rm th}/dt) + E_{\rm C} + E_{\rm R}$, was calculated, where $E_{\rm C}$ are the conductive losses and $E_{\rm R}$ are the radiative losses. In Table~\ref{maxval} the maximum values for the parameters mentioned above are presented. Moreover, the Be119 light-curve parameters are summarized in columns 2 to 4, where the time of maximum, the maximum value of the signal, and its full width at half maximum (FWHM) are given, respectively. We estimate the typical relative errors to be smaller than 2 to 3\% for the intensity, temperature, and emission measure. For the volume and the related parameters the relative errors are definitely larger, about 15 to 25\%. As we can see, in each case large amounts of energy were released in both components of the flare hybrids. Relations between parameters characterizing phase 1 and phase 2 of particular flare hybrids can differ; however, some trends are seen. Phase 1 always occurs in a smaller magnetic structure than phase 2 and lasts a shorter time. The smaller volume involved in phase 1 accounts for its larger electron number density, higher heating rate, and smaller total thermal energy compared with phase 2. Compare the mean values and their standard deviations given in the bottom rows of Table~\ref{maxval}. \begin{table} \caption{Maximum values of some parameters obtained for events from Table~1} \label{maxval} \begin{tabular}{clrccrcrrc} \hline No./ & Max. & $I_{max}$ & ${\Delta}t_{1/2}$ & $V$ & $\hspace*{-3mm}T$ & $EM$ & $N_{\rm e}$ & $E_{\rm th}$ & $E_{\rm H}$ \\ phase & time & [10$^6$ & [min.]
& [$10^{28}$ & [MK] & [$10^{49}$ & [$10^{10}$ & [$10^{30}$ & [ergs \\ & [UT] & DN & & cm$^3$] & & cm$^{-3}$] & cm$^{-3}$] & ergs] & cm$^{-3}$ \\ & & s$^{-1}$] & & & & & & & s$^{-1}$] \\ \hline 1/1 & 17:14 & 1.2 & 15.5 & \hspace*{1mm}0.4 & 10.1 & 1.6 & 6.0 & 1.0 & 1.1 \\ 1/2 & 17:20 & 1.1 & 25.2 & \hspace*{-0.5mm}10.4 & 10.2 & N/A & 1.1 & 4.6 & 0.3 \\ 2/1 & 09:49.5 & 14.3 & \hspace*{1.5mm}5.3 & \hspace*{1mm}0.3 & 13.5 & \hspace*{-1.5mm}13.0 & 20.5 & 3.2 & \hspace*{-1.5mm}14.0 \\ 2/2 & 10:04 & 2.0 & 19.0 & \hspace*{-0.5mm}10.9 & 16.4 & 3.2 & 1.8 & 10.8 & 1.2 \\ 3/1 & 22:27.5 & 1.4 & \hspace*{1.5mm}3.5 & \hspace*{2mm}0.15 & 12.5 & 1.4 & 9.6 & 0.6 & 5.8 \\ 3/2 & 22:34.3 & 1.4 & 17.7 & \hspace*{1mm}5.5 & 13.4 & 1.8 & 1.8 & 4.2 & 1.1 \\ 4/1 & 11:06 & 0.5 & 11.5 & \hspace*{1mm}0.3 & 9.5 & 0.6 & 4.9 & 0.6 & 0.8 \\ 4/2 & 11:14.5 & 1.0 & 32.3 & \hspace*{1mm}5.3 & 12.3 & 1.1 & 1.6 & 3.5 & 0.2 \\ 5/1 & 09:06.5 & 3.8 & \hspace*{1.5mm}5.3 & \hspace*{1mm}0.3 & 13.3 & 4.5 & 13.0 & 1.5 & \hspace*{-1.5mm}10.2 \\ 5/2 & 09:14 & 1.0 & 16.3 & \hspace*{1mm}3.6 & 13.2 & 1.4 & 2.2 & 3.5 & 0.9 \\ 6/1 & 18:02.3 & 2.7 & \hspace*{1.5mm}8.2 & \hspace*{1mm}0.7 & 11.0 & 3.2 & 6.8 & 2.0 & 1.9 \\ 6/2 & 18:21 & 2.6 & 56.5 & \hspace*{1mm}5.3 & 9.5 & 1.8 & 0.6 & 9.6 & \hspace*{2mm}0.05 \\ 7/1 & 06:21.5 & 2.8 & \hspace*{1.5mm}4.5 & \hspace*{2mm}0.45 & 12.3 & 3.3 & 8.5 & 1.6 & 4.8 \\ 7/2 & 06:41 & 1.4 & 91.0 & \hspace*{1mm}5.0 & 9.2 & N/A & 0.5 & 7.4 & \hspace*{2mm}0.07 \\ 8/1 & 08:11.7 & 1.4 & \hspace*{1.5mm}5.8 & \hspace*{2mm}0.45 & 12.3 & 1.7 & 6.2 & 1.2 & 3.6 \\ 8/2 & 08:19 & 0.6 & 19.8 & \hspace*{-0.5mm}12.8 & 11.6 & 0.8 & 0.8 & 4.1 & \hspace*{2mm}0.25 \\ 9/1 & 14:16.3 & 0.5 & \hspace*{1.5mm}5.7 & \hspace*{2mm}0.15 & 10.8 & 0.6 & 6.4 & 0.4 & 2.8 \\ 9/2 & 14:20 & 0.4 & 20.5 & \hspace*{1mm}3.8 & 11.5 & 0.5 & 1.2 & 2.0 & \hspace*{2mm}0.55 \\ \hline \multicolumn{2}{l}{phase 1 (mean)} & 3.2 & 7.3 & \hspace*{2mm}0.36 & 11.7 & 3.3 & 9.1 & 1.3 & 5.0 \\ \multicolumn{2}{l}{phase 1 (st. 
dev.)} & 4.3 & 3.9 & \hspace*{2mm}0.17 & 1.4 & 3.9 & 4.9 & 0.9 & 4.4 \\ \multicolumn{2}{l}{phase 2 (mean)} & 1.3 & \hspace*{-3mm}33 & \hspace*{1mm}7.0 & 11.9 & 1.5 & 1.3 & 5.5 & 0.5 \\ \multicolumn{2}{l}{phase 2 (st. dev.)} & 0.7 & \hspace*{-3mm}25 & \hspace*{1mm}3.4 & 2.3 & 0.9 & 0.6 & 3.0 & 0.4 \\ \hline \end{tabular} \end{table} Figures~\ref{nov5s}a-c and \ref{sep9s}a-c show that the areas labeled as 1 are situated near the footpoints of the arcades 2. Moreover, the light curves in Figures~\ref{nov5s}d and \ref{sep9s}d show that the signal started to rise in the arcades 2 at the beginning of phase 1, which is not seen in Figures~\ref{nov5s}a and \ref{sep9s}a because they are scaled to the brightest pixel. These facts strongly support a scenario in which the reported flares were not accidental coincidences of two events (the first of class 1 and the second of class 2), but rather a consequence of an interaction between loop systems 1 and 2. We have observed similar behavior for the other investigated flare hybrids. A fundamental question arises: is phase 2 caused by additional energy release in the arcade, or is it an effect of the long plasma cooling time within the larger structure? There is no doubt that magnetic reconnection between loop systems 1 and 2 can provide heating to both systems. The presence of reconnection is supported by intense HXR emission occurring during phase 1, when SXR emission strongly rises in both systems. There is also no doubt that conductive and radiative losses are higher in system 1 than in system 2 due to smaller sizes and higher electron number densities, respectively (see Table~\ref{maxval}). The higher energy losses should explain the shorter evolution timescales for system 1 in comparison with system 2, ${\tau}_1 \ll {\tau}_2$. However, the real evolution of flare loops is a complex interplay between heating and cooling processes.
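The diagnostics used above can be sketched in a few lines (a minimal illustration; the constants are in CGS units, and the input values are of the order of those in Table~\ref{maxval} and serve only as an example):

```python
import math

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def thermal_diagnostics(em, volume, temperature_mk):
    """Electron number density and total thermal energy from the emission
    measure EM [cm^-3], volume V [cm^3] and temperature [MK], using
    N_e = sqrt(EM/V) and E_th = 3 N_e k_B T V as in the text."""
    n_e = math.sqrt(em / volume)
    e_th = 3.0 * n_e * K_B * temperature_mk * 1e6 * volume
    return n_e, e_th

# Values of the order of phase 1 of event 1:
# EM = 1.6e49 cm^-3, V = 0.4e28 cm^3, T = 10.1 MK
n_e, e_th = thermal_diagnostics(1.6e49, 0.4e28, 10.1)
print(f"N_e = {n_e:.1e} cm^-3, E_th = {e_th:.1e} erg")
```

With these inputs the sketch reproduces the order of magnitude quoted in the table ($N_{\rm e}\sim 6\times 10^{10}$~cm$^{-3}$, $E_{\rm th}\sim 10^{30}$~erg).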
\inlinecite{jak92} introduced the density-temperature ($N_{\rm e}-T$) diagram as a very useful diagnostic tool of the heating process in a single flaring loop based on SXR observations. They showed that flare evolutionary paths on this diagram during the decay phase strongly depend on the duration of energy release. When the heating is switched off abruptly, cooling due to conductive and radiative losses quickly decreases the temperature, producing a steep path slope of ${\sim}2$. When the decay of the heating rate is rather slow, the cooling is much slower and the slope of the path is ${\sim}0.5$. We investigated the evolutionary paths of the analyzed flare hybrids on the $\sqrt{EM}-T$ diagram. When we built the paths with the data from the whole flare area, the paths seemed more complicated than those obtained for simple hydrodynamic flare models \cite{jak92}. \inlinecite{syl93} interpreted the complicated evolutionary paths as a consequence of involving a set of distinct loops within the same flaring structure. The evolutionary paths on the $\sqrt{EM}-T$ diagram composed for both components of the flare hybrids separately resemble a path modeled hydrodynamically for a single loop. Moreover, the decay slopes for components 1 and 2 usually look very similar, suggesting a slow decay of the heating rate. It should be stressed that the same phases in the evolutionary paths are shifted in time; for example, when we observe signatures of a prolonged energy release in system 2 (a path slope of ${\sim}0.5$), the evolutionary path for system 1 is already finished. Unfortunately, phase 2 in the investigated flare hybrids lasted long enough to be interrupted by the satellite night. Further observations made during the next orbit, in the late decay phase of the flares, suffered from a lack of Be119 images, which prevented good temperature and emission measure diagnostics.
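The decay slope on the diagram can be estimated with a straightforward least-squares fit in log-log space; a minimal sketch with synthetic data, assuming the slope is measured as $d\log T/d\log N_{\rm e}$ (the function name and the synthetic decay path are ours):

```python
import math

def decay_slope(n_e, temp):
    """Least-squares slope of log(T) vs log(N_e) along the decay branch
    of an evolutionary path on the density-temperature diagram."""
    xs = [math.log10(n) for n in n_e]
    ys = [math.log10(t) for t in temp]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic decay path with T ~ N_e^0.5 (slow decay of the heating rate)
n_e = [10 ** (10 - 0.1 * k) for k in range(6)]
temp = [12.0 * (n / n_e[0]) ** 0.5 for n in n_e]
print(round(decay_slope(n_e, temp), 2))  # -> 0.5
```

A path built from $T \propto N_{\rm e}^2$ data would return a slope of ${\sim}2$, the abrupt-switch-off case discussed above.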
For these reasons we cannot be sure that phase 2 of the investigated flare hybrids is caused by additional energy release in system 2. Moreover, the available images do not allow us to identify the site of an additional reconnection. \subsection{Hard X-rays} \label{hxr} The HXR light curves in four energy channels for the flare in Figure~\ref{nov5s} (5 November 1992) are given in Figure~\ref{nov5h}. They were recorded by the {\it Hard X-ray Telescope} (HXT: \opencite{kos91}) on board the {\sl Yohkoh} satellite. In all the channels a sequence of almost equally spaced pulses, $P{\approx}13$~s, is seen. At lower energies (below 33 keV) this sequence lasts for about three minutes, between 06:19 and 06:22 UT, while at higher energies (above 33 keV) the pulses can be detected above the background only between 06:19 and 06:20 UT, due to lower count statistics. Similar pulsations, called quasi-periodic pulsations (QPP), are observed in many solar flares and it is commonly accepted that they are caused by MHD oscillations excited in flaring magnetic structures (\opencite{n+m09}, and references therein). \begin{figure} \centerline{\includegraphics[width=0.9\textwidth]{92nov05hxtr.eps}} \hfill \caption{The {\sl Yohkoh} hard X-ray (HXR) light curves in four energy ranges. The number of counts is averaged {\it per} second and {\it per} subcollimator.} \label{nov5h} \end{figure} The pulses are modulated by an additional gradual component, which is well observed below 33 keV and absent above 53 keV. It consists of two broad enhancements lasting at least two minutes each, the maxima of which are separated in time by about 90~s (06:19:50 and 06:21:20 UT, respectively). The hardness ratio of the net signal above the background for two successive channels shows that the photon energy spectra of the pulses are harder than those of the gradual component.
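The dominant pulse period can be extracted from such a light curve with a simple power-spectrum peak search; a minimal sketch with a synthetic 13-s QPP signal (the Gaussian gradual component, the sampling, and all parameter values below are ours):

```python
import numpy as np

def dominant_period(signal, dt, max_period=60.0):
    """Period of the strongest Fourier component with period shorter
    than max_period (longer periods are excluded so that the slow
    gradual component does not dominate the spectrum)."""
    sig = np.asarray(signal, dtype=float)
    power = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=dt)
    mask = freqs > 1.0 / max_period
    k = np.argmax(np.where(mask, power, 0.0))
    return 1.0 / freqs[k]

# Synthetic HXR light curve: 13-s pulses on a slowly varying
# gradual component, sampled every 0.5 s for three minutes
t = np.arange(0, 180, 0.5)
lc = 100 * np.exp(-((t - 90) / 60) ** 2) + 20 * np.sin(2 * np.pi * t / 13.0)
print(round(dominant_period(lc, 0.5), 1))
```

Real QPP analyses use more careful detrending and significance testing, but the recovered period here lands close to the injected 13~s despite the strong gradual component.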
Recently, \inlinecite{t+s14} analyzed a similar solar flare of 2 October 2001 (SOL2001-10-02T17:31, C4.7), in which more energetic pulses with a period $P_1 = 26-31$\,s were modulated by three more gradual enhancements with a period $P_2 = 110$\,s. They found that these periods were excited simultaneously in a flare hybrid due to an interaction between a system of small loops and a high arcade of loops. They also proved that the shorter period was excited in the small loops and the longer period in the arcade. The observations available for the flare of 5 November 1992 do not allow us to separate both periods spatially, but the close similarity of the overall magnetic configuration to that of the flare of 2 October 2001 suggests an interaction between small loops and a high arcade, simultaneously exciting MHD oscillations in both magnetic structures. We would like to stress that in the articles reporting the presence of QPP with two distinctly different periods excited simultaneously in HXRs, an interaction between two magnetic structures of different sizes is always mentioned, {\it e.g.} \inlinecite{asa01}\hspace*{-1.5mm}, \inlinecite{nak06}\hspace*{-1mm}. We investigated the HXR light curves of all the flare hybrids from Table~\ref{list} looking for similarities with the flare of 5 November 1992, {\it i.e.} two distinctly different periods excited simultaneously. Unfortunately, some light curves were not complete due to the passage of the satellite through the South Atlantic Anomaly; therefore, we incorporated the available light curves recorded by the {\it Burst and Transient Source Experiment} (BATSE: \opencite{fis92}) on board the {\sl Compton Gamma Ray Observatory}. In summary, we confirm the presence of QPPs for seven flare hybrids, but only for four events were two distinctly different periods excited simultaneously. Apart from the flare of 5 November 1992, these were the flares of 8 July 1992, 21 August 1992, and 9 September 1992.
Four events out of nine are certainly too few to claim that QPPs with two distinctly different periods excited simultaneously are an intrinsic feature of flare hybrids in HXRs. Nevertheless, this characteristic HXR pattern suggests an influence of the magnetic configuration of flare hybrids, in which two interacting magnetic structures have distinctly different sizes. \subsection{Extreme-ultraviolet} \label{euv} \inlinecite{woo11}\hspace*{-1.5mm}, taking advantage of the new instruments, the {\it Atmospheric Imaging Assembly} (AIA: \opencite{lem12}) and the {\it Extreme-ultraviolet Variability Experiment} (EVE: \opencite{woo12}) onboard the NASA {\sl Solar Dynamics Observatory} (SDO), introduced a new class of flares, called EUV late phase (ELP) flares. Their crucial feature is an additional peak of the coronal emission seen in the spectral line of Fe~{\sc xvi} 335\AA\ occurring half an hour to two hours after the GOES SXR peak. This line is an indicator of plasma with a temperature of about 3~MK. For other EUV spectral lines detecting the warm plasma, {\it e.g.} Fe~{\sc xviii} 94\AA\ ($\sim$6 MK) or Fe~{\sc xv} 284\AA\ ($\sim$2 MK), the late peaks also occur \cite{sun13}. \inlinecite{woo14} specified other features of ELP flares, namely: (1) no significant counterpart of the late peak in the hot plasma (the GOES SXR or Fe~{\sc xx}/Fe~{\sc xxiii} 133\AA\ ), (2) an eruption followed by a coronal dimming seen in the cooler plasma, {\it e.g.} Fe~{\sc ix} 171\AA\ , preceding the late peak, and (3) a different system of longer loops visible above the place where the peak of the hot plasma was emitted. The features of ELP flares mentioned above strongly resemble those presented in Section~\ref{sxr} for flare hybrids observed in SXRs.
The evolution of ELP flares consists of two phases: the first, when the emission is concentrated in a small system of loops (system 1), and the second, when an additional system of longer loops (system 2) occurs and its emission dominates. Moreover, an interaction between both systems undoubtedly exists. Therefore we consider the ELP flares simply as a new face of flare hybrids that we can investigate thanks to the new EUV instruments onboard the SDO. A broad wavelength range and high temporal, spatial, and spectral resolutions allow us to measure changes in many EUV spectral lines, which are indicators of plasma in a wide range of temperatures. This gives us the opportunity to decide whether additional energy release or a long timescale of plasma cooling is responsible for the prolonged evolution. In several case studies regarding ELP flares \cite{hoc12,liu13,dai13,sun13} the authors investigated sequences of AIA light curves ordered with decreasing temperature of the filters, from Fe~{\sc xx}/Fe~{\sc xxiii} 133\AA\ ($\sim$10 MK) to Fe~{\sc ix} 171\AA\ ($\sim$0.7 MK). The signal was extracted from selected fragments of the ELP flares. The authors agree that during phase 1, in system 1 of the investigated flares, the maxima are shifted in time, occurring first for the hot plasma, later for the warm plasma, and finally for the cold plasma. This behavior is interpreted as a consequence of plasma cooling due to radiative and conductive losses. \inlinecite{liu13} reported a similar delay during phase 2 in system 2. They obtained timescales definitely longer than for phase 1 due to smaller conductive (larger loops) and radiative (lower electron number density) losses. \inlinecite{sun13} reported similar results, but only for the hot and warm plasma. The evolution of the cold plasma was more complex. On the other hand, \inlinecite{dai13} obtained a more complicated picture, in which each light curve for system 2 during phase 2 shows several maxima.
They interpreted these observations as proof of several episodes of energy release in this system. \inlinecite{hoc12} noticed a lack of hot plasma in system 2 during phases 1 and 2 and interpreted this fact as proof of additional energy release in system 2 causing only a modest increase of the temperature. The same interpretation can be used to explain the double maxima visible in the light curves of cold filters in the flare analyzed by \inlinecite{sun13}\hspace*{-1.5mm}. Recently, \inlinecite{li14} have published additional arguments supporting a twofold explanation of phase 2 in ELP flares. They modeled the EUV emission from sets of loops having different lengths and different heating rates using the enthalpy-based thermal evolution of loops (EBTEL) model \cite{kli08}. They found that two separate maxima in Fe~{\sc xvi} 335\AA\ can be modeled by simultaneous heating during phase 1 in two distinct loops of different lengths and by repeated heating during phase 2 in system 2. \inlinecite{li14} pointed out the importance of the AIA UV 1600\AA\ channel for distinguishing between cooling and heating in ELP flares. Some contribution to the emission in this channel comes from the C~{\sc iv} line formed in the upper chromosphere. Therefore the AIA UV 1600\AA\ channel is very sensitive to flare energy release. Indeed, the light curves presented by \inlinecite{li14} support the interpretation of additional energy release for the flares investigated by \inlinecite{dai13} and \inlinecite{sun13}, as well as a lack of energy release in phase 2 of the flares investigated by \inlinecite{liu13}\hspace*{-1.5mm}. \section{How Common are Flare Hybrids?} \label{freq} We have investigated the occurrence of flare hybrids between September 1991 and April 1999.
For this aim we analyzed SXR GOES light curves and qualified as flare hybrids (1) those showing a double maximum, under the condition that both maxima occurred in the same active region, or (2) those having a strongly asymmetric light curve, {\it i.e.} a fast rise and a slow decay. If possible, questionable events were verified on the basis of SXT images. In summary, we identified 577 flare hybrids. Altogether, 15178 flares occurred in the investigated time interval. This gives a 3.8\% contribution of flare hybrids. Figure~\ref{stat} presents the number of flare hybrids for each four-month bin in the investigated time interval. For comparison, the total number of all flares is also given. As we can see, the number of flare hybrids roughly follows the solar cycle phase. However, the contribution of this class to the full population was the highest, $\approx$9-13\%, around the minimum of activity (1995-1997), whereas during enhanced activity (1991, 1993, 1998-99) it dropped to $\approx$2-4\%. \begin{figure} \centerline{\includegraphics[width=1.1\textwidth]{hybrids_fig3.eps}} \hfill \caption{a) Number of flare hybrids (red bins) and all flares (black bins) that occurred between September 1991 and April 1999. The size of a bin is four months. b) Ratio of flare hybrids for each four-month bin.} \label{stat} \end{figure} Recently, more comprehensive statistical research concerning ELP flares has been done by \inlinecite{woo14}\hspace*{-1.5mm}. He investigated SXR GOES light curves from 1974 to 2013 looking for a dual-decay behavior, {\it i.e.} a steep slope followed by a moderate slope. He argued that the first slope represents the cooling of shorter loops (system 1), whereas the second one represents the cooling of longer loops (system 2). \inlinecite{woo14} found that the contribution of flares showing the dual-decay is 7.9\%; in particular, from 2010 to 2013 it was 10.5\%. He found that the contribution of ELP flares is the highest (20-30\%) around a solar-cycle minimum.
He also found that the higher the flare class, the higher the contribution of the dual-decay flares. An important part of the work of \inlinecite{woo14} aimed to validate the results with SDO data. He reported that 36\% and 57\% of the flares showing the dual-decay from 2010 to 2011 and from 2011 to 2013, respectively, do not show features allowing one to classify them as ELP flares. In spite of fundamental differences between our methodology and that of \inlinecite{woo14}\hspace*{-1.5mm}, there are also some similarities. The most striking is that in both studies the highest frequency occurs during low solar activity. The reason for this seems to be rather trivial: during low activity flares are less frequent, thus their GOES light curves do not overlap and every selection criterion works well. Keeping in mind that during higher activity the probability of occurrence of active regions having a complicated magnetic structure with loops of different lengths is even higher, the contribution of 9 to 13\% can be treated as a lower limit for flare hybrids. It is interesting that the values of 20 to 30\% given by \inlinecite{woo14} for the low-activity phase, after adopting his 43\% validation rate obtained for events from 2011 to 2013, decrease to 9 to 13\%. \inlinecite{woo11} noticed that ELP flares show a tendency to occur within the same active regions. For example, their Table~2 shows that among the 22 ELP flares, six events occurred in NOAA AR 11069 and another six in NOAA AR 11121. We did not supplement the list of flare hybrids with active-region identifications, but during the investigation of some active regions within which a flare hybrid occurred, we found that the same magnetic configuration was flaring several times. For example, the flare hybrid described by \inlinecite{t+s14} was preceded by three other flare hybrids that occurred in the same active region, NOAA AR 9628.
\section{Magnetic Configuration}\label{conf} All the available observations of flare hybrids strongly suggest the existence of two sets of magnetically related loop systems. This means a multipolar magnetic configuration, in which magnetic reconnection plays a crucial role in energy release and in shaping a new configuration, which becomes closer to a potential one. In previous studies the following particular configurations have been proposed: a classical quadrupolar topology based on breakout reconnection \cite{hoc12}, an asymmetric quadrupolar topology with a sigmoidal core \cite{liu13}, a multi-step reconnection in a multipolar topology \cite{dai13}, and a fan-spine topology \cite{sun13}. Usually we infer the reconnection indirectly from the emission of non-thermal electrons in HXRs and the emission of the multithermal plasma in SXRs and EUV. Sometimes eruptions can be treated as a signature of the reconnection, and expanding loops can even initiate a subsequent reconnection with a higher magnetic system \cite{su12,dai13}. Occasionally, it is possible to identify the loops that are the product of reconnection, {\it e.g.} see Figure~4 in \inlinecite{sun13}\hspace*{-1.5mm}. \inlinecite{tom13} reported a unique flare hybrid (SOL2001-10-02T17:31, C4.7), in which the reconnection occurred between a newly emerging flux and an overlying coronal field. In SXR images recorded by the SXT the whole process is seen very well, starting from a fast expansion of emerging loops and their evident deformation in the vicinity of the reconnection site, followed by vigorous plasma motions inside the reconfiguring loops and the formation of a new system of loops. The emerging-flux model \cite{hey77}, in which subphotospheric magnetic fields emerge due to buoyancy within an already existing active region and meet overlying coronal magnetic fields, easily explains the main characteristics of flare hybrids. 
Continuous emergence of a new flux under a stable magnetic environment can produce homologous flares occurring in the same location with similar morphologies. The examples mentioned in Section~\ref{freq} are likely an illustration of this process, explaining the tendency of flare hybrids to occur in the same active region. \section{Conclusions and Future Prospects} \label{conc} Our intention is to recall the forgotten term introduced by the late Professor Zden\v{e}k \v{S}vestka many years ago. His experience and intuition suggested the importance of flare hybrids, which show a complex evolution in which their appearance changes completely. Events like these warn us against general conclusions formulated on the basis of a limited set of observations covering, for example, only an instant of time. Such a conclusion can be wrong even if it agrees with the available data. A closer insight sometimes requires a detailed study of the evolution of the active region in which the investigated event occurred. For the last 25 years plenty of new data has been obtained by successive solar satellites. On this basis, one can easily recognize the following typical observational features of flare hybrids: (1) separate systems of loops seen in EUV and SXR images, (2) double-peaked or strongly asymmetrical light curves in these wavelengths, (3) multiperiodicity of pulses recorded in HXRs. Now it is possible to give comprehensive answers concerning the general questions asked by \inlinecite{sve89}\hspace*{-1.5mm}. The decisive condition for the occurrence of a flare hybrid seems to be the reconnection between two systems of magnetic loops, a system 1 and a system 2, having smaller and larger lengths, respectively. This means that the necessary condition for the occurrence is a multipolar magnetic configuration. The process can be initiated, for example, by a new magnetic flux (system 1) emerging within an already existing active region (system 2). 
The reconnection triggers energy release and the chromospheric evaporation which fills both systems with plasma, initiating intense SXR and EUV emissions. The system 1 cools quickly, due to large radiative and conductive losses, completing phase 1. Further evolution of flare hybrids (phase 2) is connected with the evolution of the system 2. The prolonged SXR and EUV emissions in this system might be the effect of the long timescale of plasma cooling due to smaller radiative and conductive losses. However, in some events evidence of additional energy release in the system 2 has also been reported. A new reconnection site can be somehow connected with eruptions observed in phase 1. The amount of energy released in the system 2 during phase 2 establishes the maximum temperature of the plasma and in this way its visibility in different SXR and EUV filters. It is easier to recognize typical features of flare hybrids when the differences between the lengths of the interacting systems of loops are larger. However, even very different systems do not always produce the clear features of these events. Therefore, the estimates of the frequency of flare hybrids presented in Section~\ref{freq} should be treated as a lower limit of the actual value. Flares are important for space weather due to the production of photons that are energetic enough to quickly enhance the ionization in Earth's upper atmosphere. Integration over wavelengths shows the energetic importance of the EUV flux. Therefore, the prolonged EUV emission makes flare hybrids extremely geoeffective due to the prolonged impact on the Earth's atmosphere. For this reason, new methods providing early warning of the occurrence of these events are welcome. Further complex investigation including X-ray and EUV observations should verify, for example, the usefulness of the multiperiodicity in HXRs for the prediction of the EUV late maximum. 
\begin{acks} The {\sl Yohkoh} satellite is a project of the Institute of Space and Astronautical Science of Japan. We are very thankful to the referee for important comments which helped to improve this paper. We acknowledge financial support from the Polish National Science Centre grant 2011/03/B/ST9/00104. \end{acks}
\section{Introduction} The pattern of relative abundances of nuclei in the cosmic radiation (CR) is roughly similar to that of the solar system material, with some notable exceptions: fragile nuclei (with low binding energies) such as ${}^2$H or Li-Be-B are over-represented in CR. This CR component is usually interpreted as the result of production by spallation of heavier species during the propagation of primary cosmic rays---whose injected abundance is assumed to closely trace that of the solar system---in the interstellar medium. The ratios of these secondary to primary fluxes have long been recognised as a tool for constraining CR propagation parameters; for reviews see for instance~\citet{Maurin:2002ua} and~\citet{Strong:2007nh}. The boron-to-carbon ratio, or B/C, represents the most notable example among them. The key constraints on the diffusion parameters are inferred from its measurement, with the corresponding confidence levels (see for instance~\citet{Maurin:2001sj}) widely used as benchmarks. It has also been recognised that datasets available one decade ago were still insufficient for a satisfactory measurement of these parameters~\citep{Maurin:2001sj}. The current decade is undergoing a major shift, however, with experiments such as PAMELA~\citepads{2014PhR...544..323A} and most notably AMS-02 (\url{http://www.ams02.org}), which are characterised by significantly increased precision and better control of systematics. The current and forthcoming availability of high-quality data prompts the question of how best to exploit them to extract meaningful (astro)physical information. This new situation demands reassessing theoretical uncertainties, which will probably be the limiting factor in the parameter extraction accuracy. As a preliminary work, preceding the actual data analysis, we revisit this issue to determine the relative importance of various effects: some have already been considered in the past, some were apparently never quantified. 
We also found that the main theoretical biases or errors are related to phenomena that can be described in a very simple 1D diffusive model. We thus adopt it as a benchmark for our description, reporting the key formulae, which also have pedagogical value. In fact, we focus on determining the diffusion coefficient, which we parameterise as is conventional in the literature (see for example \citetads{1997A&A...321..434P}): \begin{equation} D \left({\cal R} \right) = D_0 \, \beta \left(\frac{{\cal R}}{{\cal R}_0=1\,\text{GV}} \right)^\delta , \label{DofE} \end{equation} where $D_0$ and $\delta$ are determined by the level and power-spectrum of hydromagnetic turbulence, ${\cal R}$ is the rigidity, and the velocity $\beta=v/c\simeq 1$ in the high-energy regime of interest here (kinetic energy/nucleon $\gtrsim 10\,$GeV/nuc). In fact, at lower energies numerous effects, in principle of similar magnitude, are present, such as convective winds, reacceleration, and collisional losses. At high energy, there is a common consensus that only diffusion and source-related effects are important. We focus on the high-energy region since it is the cleanest from which to extract the diffusion parameters, that is, the one least subject to parameter degeneracies. While adding lower-energy data can lead to better constraints from a statistical point of view, the model dependence cannot but grow. Since our purpose is to compare theoretical with statistical uncertainties from observations, our choice is thus conservative: in a global analysis, the weight of the former with respect to the latter is probably larger. To deal with a realistic level of statistical errors of the data that will be available for the forthcoming analyses, we base our analyses on preliminary AMS-02 data of the B/C ratio~\citep{2013..ICRC}. This paper is organised as follows. In Sect.~\ref{1dmodel1} we recall a simple 1D diffusion model providing our benchmark for the following analyses. 
This model certainly has pedagogical value, since it allows encoding the main dependences of the B/C ratio on input as well as astrophysical parameters in simple analytical formulae. At the same time, it provides a realistic description of the data, at least if one limits the analysis to sufficiently high energies. Relevant formulae are introduced in Sect.~\ref{formulae1}, while in subsection~\ref{fitting} we recall the main statistical tools used for the analysis. In Sect.~\ref{PrimaryBoron} we describe the main degeneracy affecting the analysis: the one with possible injection of boron nuclei at the sources. The next most important source of error is associated with cross-section uncertainties, to which we devote Sect.~\ref{sigmas}. In Sect.~\ref{propmodeling} we discuss relatively minor effects linked to the modelling of the geometry of the diffusion volume, the source distribution, or the presence of convective winds. In Sect.~\ref{conclusions} we report our conclusions. \section{B/C fit with a 1D model} \label{1dmodel1} \subsection{1D diffusion model}\label{formulae1} \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{Fig1.png} \caption{Sketch of the 1D slab model of the Galaxy, with matter homogeneously distributed inside an infinite plane of thickness $2h$ sandwiched between two thick diffusive layers of thickness $2H$.} \label{fig:1} \end{figure} The simplest approach to model the transport of cosmic-ray nuclei inside the Galaxy is to assume that their production is confined inside an infinite plane of thickness $2h$, which is sandwiched inside an infinite diffusion volume of thickness $2H$, symmetric above and below the plane. The former region stands for the Galactic disk, which comprises the gas and the massive stars of the Milky Way, whereas the latter domain represents its magnetic halo. A sketch of this model is given in Fig.~\ref{fig:1}. 
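The rigidity scaling of the diffusion coefficient defined in Eq.~(\ref{DofE}) is easy to explore numerically; a minimal sketch, using for illustration the benchmark values $D_0=5.8\times 10^{-2}$\,kpc$^2$/Myr and $\delta=0.44$ of Table~\ref{tab:benchmark}, with $\beta\simeq 1$ as appropriate at high energy:

```python
def D(R_GV, D0=5.8e-2, delta=0.44, beta=1.0):
    """Diffusion coefficient D(R) = D0 * beta * (R / 1 GV)**delta, in kpc^2/Myr."""
    return D0 * beta * R_GV ** delta

# With delta = 0.44, D roughly doubles for every five-fold increase in rigidity
for R in (10.0, 100.0, 1000.0):
    print(f"R = {R:6.0f} GV   D = {D(R):.3f} kpc^2/Myr")
```

This power-law growth of $D$ is what drives the decrease of the secondary-to-primary ratio with energy in the purely diffusive regime.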
The boundary conditions fix the density of cosmic rays at the halo edges $z = \pm H$ to zero, while the condition $h \ll H$ (in practice, $h$ is almost two orders of magnitude smaller than $H$) allows us to model the Galactic matter distribution as an infinitely thin disk whose vertical distribution is accounted for by the Dirac delta $2 h \delta(z)$. Our focus on energies above 10\,GeV/nuc allows us to neglect continuous (ionisation and Coulomb) energy losses, electron capture, and reacceleration. These subleading effects cannot be truly considered as theoretical uncertainties, since they can be introduced by a suitable upgrade of the model. However, taking them into account at this stage would imply a significant loss in simplicity and transparency. The well-known propagation equation for the (isotropic part of the gyrophase-averaged) phase space density $\psi_a$ of a stable nucleus $a$, with charge (atomic number) $Z_a$, expressed in units of {particles cm$^{-3}$ (GeV/nuc)$^{-1}$}, takes the form \begin{align} \frac{\partial \psi_a}{\partial t} - \frac{\partial}{\partial z} \left( D \frac{\partial \psi_a}{\partial z} \right) = & \,2 h \delta(z) \cdot q_{a} + \delta(z) \sum_{Z_b \geqslant Z_a}^{Z_{max}} \sigma_{b \to a} \cdot v \frac{\mu}{m_{\rm ISM}} \psi_b \nonumber \\&- \delta(z) \cdot \sigma_{a} \cdot v \frac{\mu}{m_{\rm ISM}} \psi_a , \label{eqprop2} \end{align} where the spatial diffusion coefficient $D$ has been defined in Eq.~(\ref{DofE}). The cross-section for the production of the species $a$ from the species $b$ through its interactions with the interstellar medium (ISM) is denoted by $\sigma_{b \to a}$, whereas $\sigma_{a}$ is the total inelastic interaction (destruction) cross-section of the species $a$ with the ISM. The fragmentation of the nucleus $b$ takes place at constant energy per nucleon; $v$ hence stands for the velocities of both parent ($b$) and child ($a$) nuclei. 
The surface density of the Galactic disk is denoted by $\mu$, while $m_\text{ISM}$ is the average mass of the atomic gas that it contains. The values of the production cross-sections $\sigma_{b \to a}$ were calculated with the most recent formulae from \citetads{2003ApJS..144..153W}. The destruction cross-sections $\sigma_{a}$ were computed with the semi-empirical formulae of \citetads{1997lrc..reptQ....T, 1999NIMPB.155..349T}. The high-energy shapes of both cross-sections exhibit a plateau that allows one to approximate them as constants in this energy range. \vskip 0.1cm Solving the propagation Eq.~(\ref{eqprop2}) in the steady-state regime allows expressing the flux ${\cal J}_a \equiv ({v}/{4 \pi}) \, \psi_a$ of the stable nucleus~$a$ inside the Galactic disk ($z=0$) as \begin{align} {\cal J}_a (E_k) &= \left\{ {Q_a + \sum_{Z_b \geqslant Z_a}^{Z_{\text{max}}}} \sigma_{b \to a} \cdot {\cal J}_b \right\} / \left\{ {\sigma^{\text{diff}} + \sigma_{a}} \right\}, \label{eq:flux} \\ \nonumber \text{where} \quad \sigma^{\text{diff}} &= \frac{2 D \, m_{\text{ISM}}}{\mu v H}. \end{align} The fluxes ${\cal J}_b$ of the parent species are also taken at $z=0$. Here $Q_a$, which stands for the source term, has the dimensions of a flux times a surface and is expressed in units of {particles (GeV/nuc)$^{-1}$ s$^{-1}$ sr$^{-1}$}. It is related to $q_{a}$ through \begin{equation} Q_a = \frac{1}{4 \pi} \cdot \frac{q_a}{n_{\text{ISM}}} \equiv N_a \, \left( \frac{\cal R}{1 \, \text{GV}} \right)^{\alpha} , \end{equation} where $N_a$ is a normalisation constant that depends on the isotope $a$. We assumed an injection spectrum with the same spectral index $\alpha$ for all nuclei. The value of $N_a$ should be adjusted by fitting the corresponding flux ${\cal J}_a$ to the measurements performed at Earth. However, such measurements scarcely contain information on the isotopic composition of cosmic rays. Nuclei with the same charge $Z$ are in general collected together, irrespective of their mass. 
More isotopic observations would be necessary to set the values of the coefficients $N_a$ for the various isotopes $a$ of the same element. \vskip 0.1cm In our analysis, we assumed solar system values~\citepads{2003ApJ...591.1220L} for the isotopic fractions $f_a$ of the stable species $a$ that were injected at the sources. We then proceeded by computing the flux ${\cal J}_Z$ of each element $Z$ at Earth. We fixed the normalisation $N_Z$ for the total injection of all stable isotopes of the same charge $Z$ by fitting the measured flux of that element. The normalisation entering in the calculation of $Q_a$ is given by $N_a = f_a \cdot N_Z$, the sum of the fractions $f_a$ corresponding to the same element $Z$ amounting to 1. The actual isotopic composition of the material accelerated at the source might be different from the solar system one, as is the case for neon~\citepads{2008NewAR..52..427B}. Our method might introduce a theoretical bias in CR element flux calculations. However, our main focus here is to extract the propagation parameters thanks to the different sensitivities between primary carbon and (a priori) secondary boron. The only isotopes that come into play in the B/C ratio are the stable nuclei $^{12}\text{C}$, $^{13}\text{C}$, $^{10}\text{B}$, and $^{11}\text{B}$; unstable $^{14}\text{C}$ plays a very minor role. The isotopes of either carbon or boron have similar rigidities and destruction cross-sections. Varying the isotopic composition of carbon (and of boron, should it be partially primary) does not affect the ratio calculation. Furthermore, secondary boron is mainly produced by the fragmentation of one particular isotope of each heavier element. For example, the primary component of $^{12}\text{C}$ is two orders of magnitude larger than that of $^{13}\text{C}$. This reduces the differences arising from the boron production cross-sections. 
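To get a feeling for the escape term $\sigma^{\rm diff} = 2D\,m_{\rm ISM}/(\mu v H)$ entering Eq.~(\ref{eq:flux}), here is a minimal numerical sketch; the values adopted below for $\mu$ and $m_{\rm ISM}$ are illustrative round numbers, not quantities fitted in this work:

```python
KPC = 3.086e21   # cm
MYR = 3.156e13   # s
MB = 1e-27       # 1 millibarn in cm^2

def sigma_diff(R_GV, D0=5.8e-2, delta=0.44, H_kpc=4.0,
               mu=2.3e-3, m_ism=2.17e-24, v=3.0e10):
    """Effective escape 'cross-section' 2 * D * m_ISM / (mu * v * H).

    mu (disk surface density, g/cm^2) and m_ism (mean ISM atom mass, g)
    are illustrative round numbers, not quantities fitted in the text."""
    D = D0 * R_GV ** delta * KPC ** 2 / MYR   # kpc^2/Myr -> cm^2/s
    return 2.0 * D * m_ism / (mu * v * H_kpc * KPC)

# sigma_diff grows as R^delta: escape dominates over spallation at high
# rigidity, so a purely secondary B/C falls off roughly as 1 / sigma_diff.
for R in (10.0, 100.0):
    print(f"R = {R:5.0f} GV   sigma_diff ~ {sigma_diff(R) / MB:.0f} mb")
```

Because $\sigma^{\rm diff}\propto D\propto{\cal R}^\delta$, the purely secondary B/C of Eq.~(\ref{eq:b_to_c}) decreases like ${\cal R}^{-\delta}$ once $\sigma^{\rm diff}$ dominates over $\sigma_\text{B}$.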
Although most of the isotopes at stake are stable, radioactive nuclei were also taken into account in the calculation, and we obtained more complicated expressions for the fluxes, which are not displayed here for brevity. They are reported for instance in Appendix A of~\cite{Putze:2010zn}. By defining the total flux of a nucleus of charge $Z$ as the sum over all its isotopes $a$ \begin{equation} {\cal J}_Z = \sum_{\text{isotopes $a$} \atop \text{of same }Z} {\cal J}_{a} , \end{equation} and considering only the dominant contribution from stable nuclei, the B/C flux ratio can be written as \begin{equation} \frac{{\cal J}_\text{B} (E_k)}{{\cal J}_\text{C} (E_k)} = \left\{ \frac{Q_\text{B}}{{\cal J}_\text{C}} + \sigma_{\text{C} \to \text{B}} + \sum_{ Z_b > Z_{\text{C}}}^{ Z_\text{max}} \sigma_{b \to \text{B}} \cdot \frac{{\cal J}_{b}}{{\cal J}_\text{C}} \right\} / \left\{ {\sigma^{\text{diff}} + \sigma_\text{B}} \right\} . \label{eq:b_to_c_primary_B} \end{equation} If we assume that there are no primary boron sources, that is, $Q_\text{B} = 0$, this expression simplifies into \begin{equation} \frac{{\cal J}_\text{B} (E_k)}{{\cal J}_\text{C} (E_k)} = \frac{\sigma_{\text{C} \to \text{B}}}{\sigma^{\text{diff}} + \sigma_\text{B}} + \sum_{Z_b > Z_\text{C}}^{Z_\text{max}} \frac{\sigma_{b \to \text{B}}}{\sigma^{\text{diff}} + \sigma_\text{B}} \cdot \frac{{\cal J}_{b}}{{\cal J}_\text{C}}. \label{eq:b_to_c} \end{equation} The impact of relaxing this hypothesis is explored in Sect.~\ref{PrimaryBoron}, where the effect of a non-vanishing value for $Q_\text{B}$ is considered. \subsection{Fitting procedure and benchmark values for this study}\label{fitting} We used the recent AMS-02 release of the B/C ratio~\citep{2013..ICRC} to study the impact of systematics on the propagation parameters. As explained above, we limited ourselves to the high-energy sub-sample, above 10 GeV/nuc. The set of Eqs.~(\ref{eq:flux}) is of triangular form. 
The heaviest element considered in the network, which in our case is $^{56}$Fe, can only suffer destruction. No other heavier species $b$ enters in the determination of its flux ${\cal J}_a$, which hence is proportional to the injection term $Q_a$. Once the iron flux is known, the algebraic relations yield the solutions for the lighter nuclei, down to boron. We evaluated the cascade down to beryllium to take into account its radioactive decay into boron. \vskip 0.1cm The primary purpose of our analysis is to determine the diffusion parameters $D_0$ and $\delta$ from the B/C flux ratio ${\cal F} \equiv {{\cal J}_\text{B}}/{{\cal J}_\text{C}}$. Another parameter of the model is the magnetic halo thickness $H$. As shown in Eq.~(\ref{eq:flux}), $D_{0}$ and $H$ are completely degenerate when only considering stable nuclei, which provide the bulk of cosmic rays. In the following, $H$ is therefore fixed at $4$\,kpc for simplicity, although it should be kept in mind that, to a large extent, variations in $D_0$ can be traded for variations in $H$. Finally, the injection spectral index $\alpha$ also enters in the calculation of the B/C ratio through the source terms $Q_a$. How strong its effect is on the best-fit diffusion parameters $D_0$ and $\delta$ is one of the questions we treat in this section. To this purpose, we carried out a chi-square ($\chi^2$) analysis of the B/C observations and minimised the function \begin{equation} \chi^{2}_{\text{B/C}} = \sum_i \left\{ \frac{{\cal F}^{\text{exp}}_i - {\cal F}^{\text{th}}_i \left(\alpha , \delta , D_0 \right)}{\sigma_{i}} \right\}^{2} , \end{equation} where the sum runs over the data points $i$ whose kinetic energies per nucleon are $E_{k,i}$, while ${\cal F}^{\rm exp}_{i}$ and $\sigma_{i}$ stand for the central values and errors of the measurements. 
The theoretical expectations ${\cal F}^{\rm th}_{i}$ also depend on the normalisation constants $N_a$, which come into play in the source terms $Q_a$ of the cascade relations~(\ref{eq:flux}). To determine them, we first fixed the spectral index $\alpha$ and the diffusion parameters $D_0$ and $\delta$. We then carried out an independent $\chi^2$-based fit on the fluxes ${\cal J}_Z$ of the various elements that belong to the chain that reaches from iron to beryllium. The measured fluxes are borrowed from the cosmic-ray database of~\citetads{2014A&A...569A..32M}, from which we selected the points above 10 GeV/nuc. As explained above, this method yields the constants $N_Z$ and eventually the values of $N_a$ once the solar system isotopic fractions $f_a$ are taken into account. The overall procedure amounts to profiling over the normalisation constants $N_a$ to derive $\chi^{2}_{\rm B/C}$ as a function of $\alpha$, $\delta$ and $D_0$. Minimisations were performed with MINUIT (\url{http://www.cern.ch/minuit}), a package interfaced in the ROOT programme (\url{https://root.cern.ch}). \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Fig2.png} \caption{Relative variations of the best-fit propagation parameters (compared to the benchmark model of Table~\ref{tab:benchmark}) with respect to the injection spectral index $\alpha$.} \label{fig:2} \end{figure} \vskip 0.1cm To check the accuracy and robustness of our fitting procedure, a preliminary test is in order. A commonly accepted notion is that the B/C ratio does not depend, to leading order, on the spectral index $\alpha$. There is indeed no dependence on $\alpha$ in the cross-section ratios of Eq.~(\ref{eq:b_to_c}) in the pure diffusive regime where $\sigma_\text{B} \ll \sigma^{\rm diff}$. We have checked numerically that this behaviour holds by calculating the B/C best-fit values of the diffusion parameters at fixed spectral index $\alpha$. 
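The logic of the $\chi^2_{\rm B/C}$ minimisation can be sketched with a toy model: mock data are generated from known parameters and recovered by a crude grid scan. The actual analysis uses MINUIT; the cross-section values and the scale $k$ below are illustrative assumptions, not fitted quantities.

```python
# Toy high-energy B/C model: purely secondary limit with one effective
# production and one destruction cross-section (all numbers illustrative).
SIG_PROD, SIG_DEST = 70.0, 250.0        # mb, assumed round values

def bc_model(R, D0, delta, k=1.5e3):
    """k converts D0 * R**delta into an escape cross-section in mb (toy scale)."""
    sigma_diff = k * D0 * R ** delta
    return SIG_PROD / (sigma_diff + SIG_DEST)

# Mock "data" generated from known parameters, to check the fit recovers them
true_D0, true_delta = 5.8e-2, 0.44
rig = [10.0, 20.0, 50.0, 100.0, 200.0]
data = [bc_model(R, true_D0, true_delta) for R in rig]
err = [0.05 * y for y in data]          # 5% errors, illustrative

def chi2(D0, delta):
    return sum(((y - bc_model(R, D0, delta)) / s) ** 2
               for R, y, s in zip(rig, data, err))

# Crude grid scan standing in for the MINUIT minimisation
best = min(((chi2(D0, d), D0, d)
            for D0 in (x * 1e-3 for x in range(30, 90))
            for d in (x * 0.01 for x in range(30, 60))),
           key=lambda t: t[0])
print(f"chi2 = {best[0]:.1e}, D0 = {best[1]:.3f}, delta = {best[2]:.2f}")
```

The scan returns the injected parameters, which is the minimal sanity check one expects before profiling over the normalisation constants as described above.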
The results are reported in Fig.~\ref{fig:2}, where $D_0$ and $\delta$ are plotted, with their confidence limits, as a function of $\alpha$. We scanned over the physical range that extends from $-2.5$ to $-2$ and observed that the relative variations of $D_0$ and $\delta$ are 5\% and 1\%, respectively. The blue ($D_0$) and red ($\delta$) bands are almost horizontal. An anti-correlation between $D_0$ and $\delta$ is marginally noticeable and can be understood by the interplay of these parameters inside the diffusion coefficient $D$, the only relevant parameter that the B/C fit probes. We attribute the small variation of $D_0$ with $\alpha$ to the different sensitivities of the normalisation constants $N_Z$ of nitrogen and oxygen to the low-energy data points as compared to carbon. This could result in fluctuations of the ${N_\text{N}}/{N_\text{C}}$ and ${N_\text{O}}/{N_\text{C}}$ ratios with respect to the actual values. In any case, the extremely small dependence of the B/C ratio on $\alpha$ confirms the naive expectations and suggests that it is useless and simply impractical to keep $\alpha$ as a free parameter. \vskip 0.1cm Nonetheless, there is a particular value of the injection index that best fits the fluxes of the elements $Z$ that come into play in the cascade from iron to beryllium. By minimising the $\chi^2$-function \begin{equation} \chi^2_{\cal J} = \sum_{Z \geqslant Z_\text{Be}}^{Z_\text{Fe}} \sum_i \left\{ \frac{{\cal J}^{\rm exp}_{Z,i}(E_{k,i} ) - {\cal J}^{\rm th}_{Z,i}(E_{k,i} )}{\sigma_{Z,i}} \right\}^{2} , \end{equation} we find $\alpha=-2.34$ as our benchmark value. Applying then our B/C analysis yields the propagation parameters $D_{0}$ and $\delta$ of the reference model of Table~\ref{tab:benchmark} which we used for the following analyses. The corresponding B/C ratio is plotted in Fig.~\ref{fig:fig:3} as a function of kinetic energy per nucleon and compared to the preliminary AMS-02 measurements~\citep{2013..ICRC}. 
In what follows, we study how $D_{0}$ and $\delta$ are affected by a few effects under scrutiny and gauge the magnitude of their changes with respect to the reference model. We could have decided to keep the injection index $\alpha$ equal to its fiducial value of $-2.34$, but we preferred to fix the spectral index $\gamma=\alpha-\delta=-2.78$ of the high-energy fluxes ${\cal J}_Z$ at Earth. Keeping $\alpha$ fixed would have little effect on the B/C ratio, but would degrade the goodness of the fits on absolute fluxes. \begin{table}[!h] \begin{center} \begin{tabular}{|l|c|} \hline \multicolumn{2}{|c|}{Reference parameter values} \\ \hline\hline $\alpha$ & $-2.34$ \\ $D_{0}$ [kpc$^2$/Myr] & $(5.8 \pm 0.7) \cdot 10^{-2}$ \\ $\delta$ & $0.44 \pm 0.03$ \\ ${\chi^{2}_{\rm B/C}}/{\rm dof}$ & $5.4/8 \approx 0.68$ \\ \hline $\gamma=\alpha-\delta$ (fixed) & $-2.78$ \\ \hline \end{tabular} \vskip 0.2cm \caption{Benchmark best-fit parameters of the 1D/slab model, with respect to which comparisons are subsequently made.} \label{tab:benchmark} \end{center} \end{table} \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{Fig3.png} \caption{ Preliminary AMS-02 measurements of the B/C ratio~\citep{2013..ICRC} are plotted as a function of kinetic energy per nucleon. The theoretical prediction of the 1D/slab reference model of Tab.~\ref{tab:benchmark} is also featured for comparison.} \label{fig:fig:3} \end{figure} \vskip 0.1cm Another crucial test of our fitting procedure is to check how the results depend on the low-energy cut-off $E_{\rm cut}$ above which we carried out our analysis. We set the flux spectral index $\gamma$ to its benchmark value of Table~\ref{tab:benchmark} and determined the B/C best-fit values of the diffusion parameters as a function of $E_{\rm cut}$, which was varied from 5 to 30\,GeV/nuc. The results are plotted in Fig.~\ref{fig:4} with the $1{\sigma}$ and $2{\sigma}$ uncertainty bands. 
As expected, the statistical errors increase when moving from a low $E_{\rm cut}$ to a higher value. That is why the reduced $\chi^2$ (dashed line) decreases steadily as the cut-off energy is increased. The higher the cosmic-ray energy, the fainter the flux and the scarcer the events in the detector. The widths of the blue ($D_0$) and red ($\delta$) bands at $E_{\rm cut}=10$\,GeV/nuc, however, are not significantly larger than for a cut-off energy of 5~GeV/nuc. This suggests that our estimates for the statistical errors are slightly pessimistic, which is acceptable and consistent with our purpose. \vskip 0.1cm The other trend that we observe in Fig.~\ref{fig:4} is a shift in the preferred value of $\delta$ to increasingly lower values as we limit the analysis to increasingly higher energies. This is no limitation of our procedure. On the contrary, it is a real feature that the data exhibit, as is clear in Fig.~\ref{fig:fig:3}, where the tail of the B/C points does look flatter above 50\,GeV/nuc. The anti-correlation between $\delta$ and $D_0$ that we observe in Fig.~\ref{fig:4} has already been explained by the interplay of these two parameters inside the diffusion coefficient $D$, to which the B/C ratio is sensitive. The increase of $D_0$ is then generic and does not signal any new effect. At that stage, the statistical uncertainties are still of the same order as the systematic uncertainties generated by using different energy cuts. Should the decrease of $\delta$ with $E_{\rm cut}$ be confirmed with higher statistics, some intrinsic explanation might be necessary for the failure of a power-law fit. See for instance Sect.~\ref{PrimaryBoron} for a possible explanation. 
\begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Fig4.png} \caption{Relative variations of the best-fit propagation parameters, as compared to the benchmark model of Tab.~\ref{tab:benchmark}, with respect to the low-energy cut-off $E_{\rm cut}$ above which we carry out the B/C analysis. } \label{fig:4} \end{figure} \section{Primary boron?} \label{PrimaryBoron} Typical fits of the B/C ratio are based on the assumption that no boron is accelerated at the source, so that the term proportional to $Q_{\text B}$ at the right-hand side of Eq.~(\ref{eq:b_to_c_primary_B}) vanishes. However, this is just an assumption that needs to be tested empirically. It is crucially linked to the hypothesis that the acceleration time is much shorter than the propagation time within the magnetic halo and that it occurs in a low-density environment. On the other hand, typical astrophysical accelerators such as supernova remnants might be capable of accelerating particles up to TeV energies for about $t_{\rm life} \sim 10^{5}$ years in an interstellar medium with $n_{\rm ISM} \sim 1$~cm$^{-3}$, or higher when surrounded by denser circumstellar material. The corresponding surface density $n_{\rm ISM} \, c \, t_{\rm life} \sim 10^{23}$\,cm$^{-2}$ easily leads to percent-level probabilities for nuclei to undergo spallation in the sources. A probability only a few times higher than this would certainly have dramatic consequences on the information inferred from secondary-to-primary ratios. More elaborate versions of this idea and related phenomenology have also been detailed as a possible explanation of the hard spectrum of secondary positron data~\citep{Blasi:2009hv,Blasi:2009bd,Mertsch:2009ph}, which was recently compared with the AMS-02 data~\citepads{2014PhRvD..90f1301M}. 
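The percent-level spallation probability quoted above follows from elementary arithmetic; a short check, where the inelastic cross-section is an assumed round number:

```python
# Column density seen by nuclei confined in the source: n_ISM * c * t_life
n_ism = 1.0                       # cm^-3
c = 3.0e10                        # cm/s
t_life = 1.0e5 * 3.156e7          # 1e5 yr in seconds
column = n_ism * c * t_life
print(f"column ~ {column:.1e} cm^-2")        # ~1e23 cm^-2, as in the text

# Spallation probability for an assumed inelastic cross-section of ~250 mb
sigma_inel = 2.5e-25              # cm^2, illustrative round number
print(f"P ~ {column * sigma_inel:.1%}")      # percent level
```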
\begin{figure}[!htb] \includegraphics[width=0.5\textwidth]{Fig5a.png} \includegraphics[width=0.5\textwidth]{Fig5b.png} \caption{ Left panel: variations of the best-fit propagation parameters $D_0$ (blue) and $\delta$ (red) relative to the benchmark values of Table~\ref{tab:benchmark}, as a function of the primary boron-to-carbon injection ratio. The reference model corresponds to the conventional no boron hypothesis for which ${N_{\text B}}/{N_{\text C}}$ vanishes. Right panel: the theoretical value of the B/C ratio at 214\,GeV/nuc (solid red curve) is plotted as a function of the primary boron-to-carbon injection ratio. The dashed black curve indicates the goodness of the B/C fit. As long as ${N_{\text B}}/{N_{\text C}}$ does not exceed {13\%}, the theoretical B/C ratio is within $2\sigma$ from the AMS-02 measurement (dashed-dotted green curve).} \label{fig:5} \end{figure} Apparently little attention has been paid to the bias introduced by the ansatz $Q_{\text B}=0$. To the best of our knowledge, we quantify it here for the first time. As can be inferred from Eq.~(\ref{eq:b_to_c_primary_B}), in the presence of a primary source $Q_\text{B}$, the B/C ratio exhibits a plateau as soon as the cross-section ratio ${\sigma_{{\text C} \to {\text B}}}/({\sigma^{\rm diff} + \sigma_{\text B}})$ becomes negligible with respect to the primary abundances ratio ${N_{\text B}}/{N_{\text C}}$. This happens at sufficiently high energy since $\sigma^{\rm diff}$ increases with the diffusion coefficient $D$. The height of this high-energy B/C plateau is approximately given by the value of ${N_{\text B}}/{N_{\text C}}$. In the presence of this behaviour, the spectral index $\delta$ must increase to keep fitting the data at low energy, that is, here around 10\,GeV/nuc. This also implies that $D_0$ decreases with ${N_{\text B}}/{N_{\text C}}$ as a result of the above-mentioned anti-correlation between the diffusion parameters. 
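The plateau described above is easy to illustrate with a toy version of Eq.~(\ref{eq:b_to_c_primary_B}); all numerical values below are illustrative assumptions, not fitted quantities:

```python
def bc_with_primary(R, src_ratio, D0=5.8e-2, delta=0.44,
                    sig_prod=70.0, sig_dest=250.0, k=1.5e3):
    """Toy B/C with a primary boron term.

    At high energy J_C ~ Q_C / sigma_diff, so Q_B / J_C ~ (N_B/N_C) * sigma_diff
    and the ratio tends to the injected N_B/N_C instead of falling off.
    Cross-sections (mb) and the scale k are illustrative, not fitted values."""
    sigma_diff = k * D0 * R ** delta
    return (src_ratio * sigma_diff + sig_prod) / (sigma_diff + sig_dest)

for R in (10.0, 100.0, 1000.0, 10000.0):
    print(f"R = {R:7.0f} GV   B/C = {bc_with_primary(R, 0.08):.3f}")
# The ratio flattens toward the injected N_B/N_C = 0.08 at high rigidity.
```

This flattening is precisely what drives the degeneracy discussed above: a larger $\delta$ can mimic a smaller primary fraction at intermediate energies.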
\vskip 0.1cm We have thus scanned the boron-to-carbon ratio at the source and studied the variations of the best-fit values of $D_0$ and $\delta$ with respect to the reference model of Table~\ref{tab:benchmark}. Our results are illustrated in Fig.~\ref{fig:5}, where the left panel features the confidence levels for $\delta$ (red) and $D_0$ (blue) as a function of the ${N_{\text B}}/{N_{\text C}}$ ratio. The B/C fit is particularly sensitive to the last few AMS-02 points, notably the penultimate data point, around 214\,GeV/nuc, for which the B/C ratio is found to be $\sim 9$\%. In the right panel, the theoretical expectation for that point is plotted (solid red curve) as a function of the primary abundances ratio, while the dashed black curve indicates how the goodness of fit varies. It is interesting to note that a minor preference is shown for a non-vanishing fraction of primary boron, around 8\%, due to the marginal preference for a flattening of the ratio already mentioned in the previous section. The ${N_{\text B}}/{N_{\text C}}$ ratio is only loosely constrained to be below 13\%. Such a loose constraint would nominally mean that a spectral index $\delta$ more than three times larger than its benchmark value would be allowed, with a coefficient $D_0$ one order of magnitude smaller than indicated in Table~\ref{tab:benchmark}. In fact, such changes are so extreme that they would clash with other phenomenological or theoretical constraints and should probably be considered as unphysical. A spectral index $\delta$ in excess of 0.9, corresponding to a relative increase of 100\% with respect to our benchmark model, is already so difficult to reconcile with the power-law spectrum of nuclei and the present acceleration schemes that it would probably be excluded. The message is nevertheless quite remarkable. 
The degeneracy of the diffusion parameters with a possible admixture of primary boron is so strong that it dramatically degrades our ability to determine the best-fit values of $D_0$ and $\delta$, and beyond them the properties of turbulence, unless other priors are imposed. \section{Cross-section modelling}\label{sigmas} The outcome of cosmic-ray propagation strongly depends on the values of the nuclear production $\sigma_{b \to a}$ and destruction $\sigma_{a}$ cross-sections with the ISM species, mainly protons and helium nuclei. Some of these are measured, albeit in a limited dynamical range, while a significant number of them rely on relatively old semi-empirical formulas, calibrated to the few available data points. In this section, we discuss how parametric changes in these inputs reflect on the B/C ratio. The effect of cross-section systematics was already studied by \citetads{2010A&A...516A..67M}, who parameterised it in terms of a systematic shift with respect to the energy. Since we consider here only the high-energy limit, we simply allowed for a rescaling of the cross-sections. However, we distinguished between two cases: a correlated ($\nearrow \nearrow$) or anti-correlated ($\nearrow \searrow$) rescaling between the production $\sigma_{b \to a}$ and the destruction $\sigma_{a}$ cross-sections. The two are in fact not affected by the same uncertainties: the latter are often known to a better precision than the former since they rely on a richer set of data. A priori, it is conceivable that several relevant production cross-sections might be varied independently. It is worth noting, however, that only a few nuclei -- notably oxygen and carbon ($\sim 80\%$), and to a lesser extent nitrogen ($\sim 7\%$) -- are in fact responsible for most of the produced boron, as shown in Fig.~\ref{fig:6}. 
\begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Fig6.png} \caption{Contribution of the various primary nuclear species to the secondary boron flux at $10$~GeV/nuc, as estimated with the semi-empirical code by Webber 03.} \label{fig:6} \end{figure} First, we need to assess the reasonable range over which the various cross-sections of the problem are expected to vary. For this, we compared our reference models for the destruction and production cross-sections with those used in popular numerical propagation codes such as GALPROP~\citepads{2001AdSpR..27..717S} and DRAGON~\citepads{2008AIPC.1085..380E}\footnote{Updated versions of these two codes can be found at \url{https://sourceforge.net/projects/galprop} and \url{http://www.dragonproject.org/Home.html}, respectively.}. The database implemented in these two codes traces back to the work of the GALPROP team and is based on a number of references including -- but not limited to -- the Nuclear Data Sheets and the Los Alamos database~\citepads{1998nucl.th..12071M} (see \citetads{2001ICRC....5.1836M} and \citetads{2003ICRC....4.1969M} for a more complete list of references). In this work we compare the values given directly by the default cross-section parameterisations without any renormalisation (which could however be implemented). \vskip 0.1cm In the case of the destruction cross-sections $\sigma_{a}$, we compared our reference model \citepads{1997lrc..reptQ....T} with the parameterisations of \citet{Barashenkov:5725}, \citetads{1983ApJS...51..271L} and \citetads{1996PhRvC..54.1329W}. The last case only applies to elements with $Z>5$, while the \citetads{1983ApJS...51..271L} modelling is conserved for lighter nuclei. Figure~\ref{fig:7} shows the relative differences between our reference model and the three other semi-empirical approaches and allows us to derive an indicative lower limit of roughly 2 to 10\% on the systematic uncertainties in the destruction cross-sections relevant for the B/C ratio. 
The systematic difference is at the 3\% level for the channels (CNO) that contribute most to secondary boron production. The difference to our reference model is stronger for larger charges ($Z > 10$), but these nuclei have a negligible contribution to the B/C ratio. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Fig7.png} \caption{Relative differences between our reference model\citepads{1997lrc..reptQ....T} for the destruction cross-sections $\sigma_{a}$ and the other parameterisations by Letaw 1983\citepads{1983ApJS...51..271L}, Wellish 1996\citepads{1996PhRvC..54.1329W} and B\&P 1994 \citep{Barashenkov:5725} are displayed as a function of the nucleus charge, at an energy of 10\,GeV/nuc. Each bin is characterised by a given charge $Z$ and encodes the arithmetic mean over the corresponding isotopes. Only the elements involved in the cascade from iron to beryllium are displayed.}\label{fig:7} \end{figure} \vskip 0.1cm \begin{figure}[!htp] \includegraphics[width=0.5\textwidth]{Fig8a.png} \includegraphics[width=0.5\textwidth]{Fig8b.png} \includegraphics[width=0.5\textwidth]{Fig8c.png} \caption{ 2D histograms feature the relative differences between various semi-empirical models currently used to calculate the production cross-sections $\sigma_{b \to a}$. Our reference model is \citetads{2003ApJS..144..153W} (Webber 03), and we compare it to the parameterisations from\citetads{1990PhRvC..41..566W} (Webber 93), \citetads{1998PhRvC..58.3539W} (Webber 98) and\citetads{1998ApJ...501..911S} (S\&T 00). The charges of the parent and child nuclei are given on the vertical and horizontal axes, respectively. The relative difference in each bin is given by the arithmetic mean over the various isotopes of each element. A detailed view provides the most important channels for the B/C ratio studies. 
For a fragmentation of $\Delta Z<4$, we also give the first and second moments of the uncertainty distributions.}\label{fig:8} \end{figure} For the production cross-sections $\sigma_{b \to a}$, one may choose between the semi-empirical approach proposed by \citetads{1998ApJ...501..911S}, subsequently revised in 2000 and called here S\&T 00, and the parameterisation provided by \citetads{1990PhRvC..41..566W} (hereafter Webber 93) and its updates of 1998~\citepads{1998PhRvC..58.3539W} and 2003~\citepads{2003ApJS..144..153W}. We selected the last set of values as our reference model, to which we compared the other parameterisations to gauge the uncertainties that affect, on average, the values of $\sigma_{b \to a}$. The relative differences between Webber 93, Webber 98, and S\&T 00 with respect to Webber 03 are plotted in the form of the three histograms of Fig.~\ref{fig:8}. The charges of the parent and child nuclei are given on the vertical and horizontal axes. The most important reactions, whose cross-sections are higher, correspond to a change of charge $\Delta Z$ not in excess of 3 during the fragmentation process and are located close to the diagonals of the 2D-grids of Fig.~\ref{fig:8}. We first note that the Webber 93 and 98 production cross-sections are on average larger than the values of the Webber 03 reference model. Most of the pixels on the diagonals of the corresponding histograms are red, and for the reactions with $\Delta Z < 4$ we measure a mean excess $\mu$ of 18\% for Webber 93 and 9.7\% for Webber 98 as compared to Webber 03. Furthermore, in both cases the dispersion of these differences is quite large and amounts to 31\% for Webber 93 and 30\% for Webber 98. A rapid comparison between S\&T~00 and Webber 03 would also leave the impression that in the former case, the reactions in the upper left corner of the histogram have considerably larger cross-sections than for the Webber 03 parameterisation. 
A close inspection along the diagonal indicates, on the contrary, that the S\&T 00 values for $\Delta Z < 4$ are on average 13\% higher than for the reference model, with a dispersion $\sigma$ of 28\%, similar to the other cases. The main production channels of secondary boron are listed in Table~\ref{tab:sigma_prod_B} and are also displayed in the expanded views of the small square regions that sit in the lower left corners of the histograms of Fig.~\ref{fig:8}. The most relevant reactions involve the stable isotopes of carbon, nitrogen, and oxygen fragmenting into $^{10}$B and $^{11}$B, and are indicated in boldface in Table~\ref{tab:sigma_prod_B}. The largest contributor to secondary boron is $^{12}$C. The three semi-empirical models with which we compared our Webber 03 reference parameterisation tend to predict production cross-sections that are larger by 15\% (S\&T 00) to 25\% (Webber 93). In contrast, those models underpredict the spallation of $^{16}$O, by 10\% in the case of Webber 93 and 98 and by 18\% for S\&T 00. In the latter case, the production cross-section of $^{10}$B from $^{14}$N is 68\% larger than for Webber 03. But nitrogen only contributes $\sim$ 7\% of the secondary boron, and this has no significant impact. To summarise this discussion, the production cross-sections $\sigma_{b \to a}$ can be varied up or down by of order 10--20\% with respect to Webber 03. \begin{table*} \centering \begin{tabular}{|c|c|c|c|c|} \hline Main production channels & Webber 03 & S\&T 00 & Webber 98 & Webber 93 \\ & reference model (RM) & rel. difference to RM & rel. difference to RM & rel. 
difference to RM \\ $\sigma_{\text{CNO $\to$ B}}$ at $10~\rm GeV/nuc$ & [mb] & [\%] & [\%] & [\%] \\ \hline \hline \boldmath $\sigma \left(^{12}_{6}\bf{C}\to\ ^{10}_{5}\bf{B} \right)$ & 14.0 & -2.14 & 21.8 & 25.4\\ \boldmath $\sigma \left(^{12}_{6}\bf{C}\to\ ^{11}_{5}\bf{B} \right)$ & 47.0 & 15.3 & 14.8 & 18.4\\ $\sigma \left(^{13}_{6}\text{C}\to\ ^{10}_{5}\text{B} \right)$ & 4.70& 92.0 & -2.06 & -0.03\\ $\sigma \left(^{13}_{6}\text{C}\to\ ^{11}_{5}\text{B} \right)$ & 40.0 & -20.6 & 2.21 & 4.20\\ \hline \boldmath $\sigma \left(^{14}_{7}\bf{N}\to\ ^{10}_{5}\bf{B} \right)$ & 9.90 & 68.1 & 0.14 & 1.01\\ \boldmath $\sigma \left(^{14}_{7}\bf{N}\to\ ^{11}_{5}\bf{B} \right)$ & 27.2 & 1.33 & -8.86 & -11.0\\ $\sigma \left(^{15}_{7}\text{N}\to\ ^{10}_{5}\text{B} \right)$ & 9.20 & -6.55 & -70.9 & -70.7\\ $\sigma \left(^{15}_{7}\text{N}\to\ ^{11}_{5}\text{B} \right)$ & 28.0 & -0.33 & -27.4 &-27.9 \\ \hline \boldmath $\sigma \left(^{16}_{8}\bf{O}\to\ ^{10}_{5}\bf{B} \right)$ & 10.7 & -18.0 & -7.67 & -8.85\\ \boldmath $\sigma \left(^{16}_{8}\bf{O}\to\ ^{11}_{5}\bf{B} \right)$ & 24.0 & 2.94 & -9.36 &-10.9\\ $\sigma \left(^{17}_{8}\text{O}\to\ ^{10}_{5}\text{B} \right)$ & 3.60 & 124 & 0.27 & -1.00\\ $\sigma \left(^{17}_{8}\text{O}\to\ ^{11}_{5}\text{B} \right)$ & 19.7 & 27.3 & 1.42 & -0.09 \\ $\sigma \left(^{18}_{8}\text{O}\to\ ^{10}_{5}\text{B} \right)$ & 0.70 & 545 & 4.43 & 4.48 \\ $\sigma \left(^{18}_{8}\text{O}\to\ ^{11}_{5}\text{B} \right)$ & 12.0 & 113 & 2.20 & 0.77 \\ \hline \end{tabular} \caption{ Comparison between different cross-section parameterisations for the main production channels of secondary boron. The reference model used in our calculations of the fluxes is adapted from\citetads{2003ApJS..144..153W} (Webber 03) and is compared to previous releases by\citetads{1990PhRvC..41..566W} (Webber 93) and\citetads{1998PhRvC..58.3539W} (Webber 98) as well as to the work from\citetads{1998ApJ...501..911S} (S\&T 00). 
The dominant production channels, which involve the stable isotopes of carbon, nitrogen, and oxygen, are listed in boldface.} \label{tab:sigma_prod_B} \end{table*} \vskip 0.1cm Varying the production and destruction cross-sections has an effect on the calculation of the B/C ratio and thus affects the determination of the propagation parameters $D_{0}$ and $\delta$. Before gauging this effect, we remark that secondary boron is essentially produced by CNO nuclei, as indicated in Fig.~\ref{fig:6}. These are essentially primary species for which ${{\cal J}_b}$ is approximately given by the ratio ${Q_b}/(\sigma^{\rm diff} + \sigma_{b})$ and is proportional to the injection normalisation $N_b$. Furthermore, since the relevant destruction cross-sections $\sigma_\text{C}$, $\sigma_\text{N}$ and $\sigma_\text{O}$ are approximately equal to each other, with an effective value ranging from 290 to 317\,mb, we conclude that the flux ratios ${{\cal J}_{b}}/{{\cal J}_\text{C}}$ are given by the corresponding ratios ${N_b}/{N_\text{C}}$ of the injection normalisation constants, with the consequence that relation~(\ref{eq:b_to_c}) simplifies to \begin{equation} \frac{{\cal J}_\text{B} (E_k)}{{\cal J}_\text{C} (E_k)} \simeq \sum_{ Z_b \ge Z_\text{C}}^{ Z_\text{max}} \frac{\sigma_{b \to \text{B}}}{\sigma^{\rm diff} + \sigma_\text{B}} \cdot \frac{N_{b}}{N_\text{C}} . \label{eq:b_to_c_s1} \end{equation} As mentioned at the beginning of this section, we first rescaled in our code all production $\sigma_{b \to a}$ and destruction $\sigma_{a}$ cross-sections by the same amount $\kappa$, which ranges from 0 to 2, to study how $D_{0}$ and $\delta$ are affected by this change. The results are summarised in the left panel of Fig.~\ref{fig:9}. The diffusion index $\delta$ is left unchanged, whereas the diffusion normalisation $D_{0}$ increases linearly with the rescaling factor $\kappa$. 
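The exact compensation between the rescaling factor and $D_0$ can be checked with a one-channel toy version of Eq.~(\ref{eq:b_to_c_s1}); the numerical values below are placeholders, not fitted quantities.

```python
def bc(sigma_prod, sigma_dest, sigma_diff):
    # Single effective channel version of Eq. (eq:b_to_c_s1)
    return sigma_prod / (sigma_diff + sigma_dest)

sp, sd, sdiff = 75.0, 240.0, 100.0   # toy values [mb]
ref = bc(sp, sd, sdiff)
for kappa in (0.5, 1.0, 1.5, 2.0):
    # Rescale all cross-sections by kappa and absorb it into D0:
    # sigma_diff is proportional to D0, hence also rescaled by kappa.
    assert abs(bc(kappa * sp, kappa * sd, kappa * sdiff) - ref) < 1e-12
print("rescaling by kappa is exactly absorbed by D0 -> kappa * D0")
```

The invariance is exact at every energy, which is why the fitted $\delta$ is untouched while $D_0$ scales linearly with $\kappa$.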
Multiplying both $\sigma_{b \to \text{B}}$ and $\sigma_\text{B}$ by the same factor $\kappa$ in Eq.~(\ref{eq:b_to_c_s1}) amounts to dividing the diffusion cross section $\sigma^{\rm diff}$ by $\kappa$. The B/C ratio then depends on the ratio ${\sigma^{\rm diff}}/{\kappa}$, which scales as ${D_{0}}/{\kappa}$. The theoretical prediction on the B/C ratio is not altered as long as that ratio is kept constant, hence the exact scaling of $D_{0}$ with $\kappa$ displayed in the left panel of Fig.~\ref{fig:9}. The energy behaviour of the B/C ratio is not sensitive to the rescaling factor $\kappa$, which has been absorbed by $D_{0}$, and the fit yields the same spectral index $\delta$ irrespective of how much the cross-sections have been changed. Despite the relatively modest alterations, the effect discussed here has two qualitatively interesting consequences. First, a systematic uncertainty on the central value of $D_0$ at the 5 to 10\% level seems unavoidable due to the current uncertainty level of about 10\% on the nuclear cross-sections. Second, fully correlated changes in both production and destruction cross-sections can break the degeneracy between $D_0$ and $\delta$. \begin{figure}[!htp] \includegraphics[width=0.5\textwidth]{Fig9a.png} \includegraphics[width=0.5\textwidth]{Fig9b.png} \caption{Effect of rescaling the nuclear cross-sections of the boron production and destruction channels: the left panel assumes a correlated, the right panel an anti-correlated rescaling. }\label{fig:9} \end{figure} \vskip 0.1cm We now analyse the effects of an anti-correlated change of the production $\sigma_{b \to a}$ and destruction $\sigma_{a}$ cross-sections. Surprisingly, this has never been considered before, as far as we know, although the potential effect of this rescaling is clearly very strong. 
Multiplying $\sigma_{b \to \text{B}}$ by a factor $\kappa$ while rescaling $\sigma_\text{B}$ by a complementary factor of $(2 - \kappa)$ leads to the B/C ratio \begin{equation} \frac{{\cal J}_\text{B} (E_k)}{{\cal J}_\text{C} (E_k)} = \sum_{ Z_b \ge Z_\text{C}}^{ Z_\text{max}} \left\{ \frac{\sigma_{b \to \text{B}}}{(\sigma^{\text{diff}} + 2 \sigma_\text{B})/\kappa - \sigma_\text{B}} \right\} \frac{N_{b}}{N_\text{C}}. \label{eq:b_to_c_s2} \end{equation} Keeping the B/C ratio constant while increasing $\kappa$ at a given energy translates into keeping the ratio \begin{equation} \frac{\sigma^{\text{diff}} + 2 \sigma_\text{B}}{\kappa} = \frac{C E^\delta + 2 \sigma_\text{B}}{\kappa} \end{equation} roughly constant, where $C$ is a constant directly proportional to $D_0$. It can be immediately inferred that, when $\kappa$ increases, $C$ and $D_{0}$ have to increase and thus $\delta$ has to decrease. This trend is confirmed in the right panel of Fig.~\ref{fig:9}. From realistic assessments of the minimum systematic uncertainties of about 10\% derived from the different cross-section models, we estimate a systematic uncertainty of 10\% on $\delta$ and of 40\% on $D_{0}$. \section{Systematics related to CR propagation modelling}\label{propmodeling} A significant effort has been made in recent years to provide increasingly sophisticated modelling of the CR diffusion environment, source distribution, and alternative forms of CR transport. In this section we discuss a perhaps surprising conclusion: these effects are less relevant for the prediction of B/C than the effects discussed previously (which are instead usually neglected)! 
In other words, although the efforts invested by the community in refining CR propagation modelling could have and have had important implications for other observables, for the mere purpose of fitting the B/C ratio to infer diffusion parameters they are to a large extent unnecessary complications until the biases discussed previously can be significantly reduced. \subsection{Geometric effects} The crude modelling of the diffusive halo as an infinite slab may appear too simplistic. In this section, we estimate the effects of a 2D cylindrical diffusion box, modelled as in Fig.~\ref{fig:10}. Furthermore, we assess the effect of adding a radial dependence in the injection term, as opposed to the uniform hypothesis. These can be seen as upper limits to reasonable systematics due to the simplified description of the spatial dependence of the diffusion medium or source term: given our limited knowledge of this subject, even the most detailed modelling of the propagation medium and source term may in fact not be fully realistic. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{Fig10.png} \caption{Cylindrical model: the matter is homogeneously distributed inside a thin disk of thickness $2h$ and radius $R_{gal}=20$\,kpc. The solar system is at $R_\odot\simeq 8$\,kpc from the Galactic centre.}\label{fig:10} \end{figure} The formalism in such a situation is well known and we do not repeat it here (it has been summarised for instance in~\cite{Putze:2010zn}). It suffices to say that to take advantage of the cylindrical symmetry, Eq.~(\ref{eqprop2}) can be projected on the basis of the zeroth-order Bessel functions $J^i_0(r)=J_0\left(\xi_i\frac{r}{R_{gal}}\right)$, where $\xi_i$ denotes the $i$-th zero of $J_0$, ensuring that the density vanishes on the edge of the cylinder of radius $R_{gal}=20$\,kpc. The flux of an isotope is then the sum over all its harmonic components \begin{equation} {\cal J}_{a}(E_k,R_\odot)=\sum_{i=1}^\infty J_0\left(\xi_i\frac{R_\odot}{R_{gal}}\right){\cal J}_{a}^{i}(E_k)\,. 
\end{equation} The results, reported in Table~\ref{tab:geometry}, allow us to draw a few conclusions: \begin{itemize} \item the presence of a new escape surface at $R_{\rm gal}\simeq 20\,$kpc is basically irrelevant: the best-fit $\delta$ and its error remain the same, with a statistically insignificant, 2\% modification of the best-fit value of $D_0$; \item perhaps more surprisingly, even the replacement of a uniform source distribution with a commonly assumed donut distribution of the form~\citepads{2004A&A...422..545Y} \begin{equation} q(r)\propto \displaystyle\left(\frac{r+0.55}{R_\odot+0.55}\right)^{1.64} \exp\left(-4.01\left(\frac{r-R_\odot}{R_\odot+0.55}\right)\right) \end{equation} has minor effects, a mere 1\% modification in the best-fit determination of $\delta$, and a $\sim 13\%$ lowering of the best-fit value of $D_0$, still statistically insignificant (roughly a 1$\,\sigma$ effect); \item since the goodness of fit is similar, the B/C observable is essentially insensitive to these improvements. Unless they are justified by the goal of matching or predicting other observables, the complications brought by the 2D modelling of the problem are unnecessary in achieving a good description of the data. 
\end{itemize} \begin{table*} \centering \begin{tabular}{|l|c|c|c|} \hline Geometry & Plane -- 1D & Cylindrical -- 2D & Cylindrical -- 2D \\ & & \small homogeneous source distribution & \small realistic source distribution \\ \hline \hline $D_{0}$ [kpc$^2$/Myr] & $(5.8\pm0.7)\cdot10^{-2}$ & $(5.7\pm0.7)\cdot10^{-2}$ & $(5.0\pm0.6)\cdot10^{-2}$\\ $\Delta D_{0}^{\text{1D}}/D_{0}^{\text{1D}}$ & N/A & $-2\%$ & $-13\%$\\ [2mm] $\delta$ & $0.441\pm0.031$ & $0.439\pm0.031$ & $0.445\pm0.032$ \\ $\Delta \delta^{\text{1D}}/\delta^{\text{1D}}$ & N/A & $0\%$ & $+1\%$\\[2mm] $\chi^2_{\text{B/C}}$/ndof & $5.4/8\approx0.68$ & $5.4/8\approx0.68$ & $5.5/8\approx0.69$\\ \hline \end{tabular} \caption{Results on the propagation parameters fitted to the B/C ratio for different geometries.}\label{tab:geometry} \end{table*} \subsection{Convective wind} Although high-energy CR propagation is mostly diffusive, advection outside the Galactic plane (for instance due to stellar winds) has a non-negligible effect, which we now quantify. We adopted the simplest model of a constant-velocity wind directed away from the Galactic plane, with magnitude $u$. Taking this effect into account, the 1D, stationary propagation equation can be written as \begin{align} \nonumber &-\frac{\partial}{\partial z} \left( D\frac{\partial}{\partial z} \psi_a \right) + \frac{\partial}{\partial z} \left(u \psi_a \right) -\frac{\partial}{\partial E} \left( \frac{1}{3}\frac{d u}{dz} E_k\frac{(E_k+2m)}{E_k+m} \psi_a \right) \\ &+ \delta(z)\sigma_{a} v \frac{\mu}{m_\text{ISM}} \psi_a = 2h \delta(z) q_{a} + \delta(z) \sum_{Z_b \geqslant Z_a}^{Z_\text{max}} \sigma_{b\to a} v \frac{\mu}{m_\text{ISM}} \psi_b . \label{eq:propwind} \end{align} The two new terms (second and third on the left-hand side) account for the advection of the cosmic-ray density and the adiabatic losses, respectively. 
A characteristic time of these two processes can be estimated inside the thin disk of matter: \begin{align} \tau_\text{advection}&=\frac{h}{u}=\frac{0.1\,\text{kpc}}{20\,\text{km/s}} \nonumber \\&=5 \cdot \left(\frac{h}{0.1\,\text{kpc}}\right) \cdot \left(\frac{20\,\text{km/s}}{u}\right)\,\text{Myr}, \end{align} and \begin{align} \tau_\text{adiabatic} &= \left(\frac{1}{3A}\left(\nabla u\right)\right)^{-1}\simeq 3A\frac{h}{u} \nonumber \\ &\approx 15\cdot \left(\frac{h}{0.1\,\text{kpc}}\right)\cdot \left(\frac{20\,\text{km/s}}{u}\right) \cdot \left(\frac{A}{1}\right)\,\text{Myr}. \end{align} These are to be compared to the typical diffusion time \begin{align} \tau_\text{diffusion}&\left({\cal R} > 10\,\text{GV}\right)< \tau_\text{diffusion} (10\,\text{GV}) = \frac{h\,H}{D (10\,\text{GV})} \nonumber \\ &= 2 \cdot \left(\frac{h}{0.1\,\text{kpc}}\right) \cdot \left(\frac{H}{4\,\text{kpc}}\right) \cdot \left(\frac{5.8 \cdot 10^{-2}\cdot (2 \cdot 10)^{0.44}\,\text{kpc$^2$/Myr}}{D}\right)\,\text{Myr}. \end{align} It is clear that our previous results provide a suitable first-order approximation at least at high energy, with the leading correction at energies near 10\,GeV/n given especially by the advection. The adiabatic energy loss, instead, is several times smaller and can be safely ignored in the following. The solution of Eq.~(\ref{eq:propwind}) neglecting adiabatic losses has the same form as Eq.~(\ref{eq:flux}) for the flux of stable species, modulo the change \begin{equation} D\rightarrow D'=\frac{H\,u}{1-\exp\left(-\frac{H\,u}{D}\right)},\label{dprime} \end{equation} so that the behaviour of the solution smoothly interpolates between the convective timescale at low energy and the diffusive one at high energy: this can be simply checked by neglecting the exponential with respect to unity for a high value of its argument, or Taylor-expanding it to first order in the opposite limit. 
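Both limits of Eq.~(\ref{dprime}) can be verified numerically; the sketch below uses cgs units and benchmark-like values $H=4$\,kpc and $u=20$\,km/s, with the diffusion coefficient chosen arbitrarily on either side of $H\,u$.

```python
import math

def d_eff(D, H, u):
    """Effective diffusion coefficient D' of Eq. (dprime)."""
    x = H * u / D
    return H * u / (1.0 - math.exp(-x))

H = 4.0 * 3.086e21   # halo height [cm] (4 kpc)
u = 20.0e5           # wind speed [cm/s] (20 km/s)

# Convective limit (D << H*u): escape is wind-dominated, D' -> H*u.
print(d_eff(1e27, H, u) / (H * u))
# Diffusive limit (D >> H*u): the wind is irrelevant, D' -> D.
print(d_eff(1e31, H, u) / 1e31)
```

Both printed ratios come out close to unity, confirming the smooth interpolation between the convective and diffusive regimes.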
This formula also suggests that, if one fits the data while neglecting the convective wind, one biases the result towards a lower value of $\delta$ and a correspondingly higher value of $D$, so as to reproduce the flatter energy dependence at low energy described by Eq.~(\ref{dprime}), as illustrated in Fig.~\ref{fig:11}. Quantitatively, a variation of $15\,$km/s in $u$ is roughly equivalent to a 1$\sigma$ shift in the benchmark parameters. Note, however, that the goodness of the fit worsens; in other words, high-energy data are better described by a purely diffusive behaviour than by a convective-diffusive one. Overall, we conclude that these effects still appear somewhat less important in determining the diffusion parameters from high-energy data than the role of primary boron or even cross-section uncertainties. While convection, adiabatic losses, reacceleration, etc. are important to account for when extending the analysis down to very low energies (sub-GeV/nuc) or in global analyses, they do not currently constitute the main limitations to the determination of $D_0$ or $\delta$ from high-energy data. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Fig11.png} \caption{Variations of the best-fit propagation parameters with respect to the velocity of the convective wind.}\label{fig:11} \end{figure} \section{Conclusion}\label{conclusions} The high-precision measurements of cosmic-ray fluxes that have become available in recent years prompt the question of the theoretical uncertainties inherent to the models used to interpret them. In this article, we have compared the effect of different theoretical biases with statistical uncertainties in the determination of diffusion parameters from the boron-to-carbon flux ratio, or B/C. This is representative of a much broader class of observables, involving ratios of secondary to primary species, which have been recognised as key tools for diagnostics in cosmic-ray astrophysics. 
We adopted a pedagogical approach, showing and interpreting the results whenever possible within simple analytical models. We also used preliminary AMS-02 data and limited the analysis to energies above 10\,GeV/nuc, which gives a pessimistic---hence conservative---estimate of the statistical uncertainties that will eventually be available. \begin{table*} \centering \begin{tabular}{|l|c|c|c|c|} \hline & Wind & 1D/2D geometry& Cross-sections & Primary boron\\ \hline \hline $\Delta D_0/D_{0}$ & $-40\%$ & $-2$ to $-13\%$ & $\pm 60\%$ & $0$ to $-90\%$\\ [2mm] $\Delta \delta/\delta$ & $+15\%$ & $0$ to $+1\%$ & $\pm 20\%$ & $0$ to $+100\%$ \\ \hline \end{tabular} \caption{Summary of the main systematics found in current analyses in determining the propagation parameters by fitting the B/C ratio.}\label{finaltab} \end{table*} Our main results, summarised in Table~\ref{finaltab}, are the following: \begin{itemize} \item The single most important effect that we quantified (to the best of our knowledge, for the first time) is the degeneracy between diffusion parameters and a small injection of primary boron at the source, finding at present even a slight (statistically insignificant) preference for a small but finite primary boron flux. This degeneracy cannot be removed by high-precision measurements of B/C, but probably requires multi-messenger tests and certainly demands further investigations, in particular if data should manifest a significant preference for a high-energy flattening of secondary-to-primary ratios. \item The second most important theoretical uncertainty is associated with cross-sections. In particular, anti-correlated modifications in the destruction and production cross-sections with respect to reference values may also have an effect on the determination of the diffusion index $\delta$, another effect discussed here for the first time. 
This should be kept in mind when comparing the outcome of data analyses relying on different databases for cross-sections. The good news is that this problem is not due to intrinsic limitations in the astrophysical modelling or the lack of astrophysical data, but to the scarce laboratory measurements available. For the case of boron, measurements of the production cross-sections for spallation of oxygen, carbon and, to a lesser extent, nitrogen are essentially what would be needed to set the predictions on much firmer grounds. \item Other effects we tested for are typically less important and are similar to or smaller than statistical uncertainties: effects such as those of convective winds, certainly important in more complete analyses including low-energy data, appear unlikely to bring uncertainties large enough to compete with the above-mentioned ones. We also showed that the geometry of the diffusive box and the distribution of sources are virtually irrelevant, at least as far as a B/C data analysis alone is concerned. A more or less realistic radial distribution of sources, while it may marginally affect the determination of $D_0$, remains indistinguishable from the goodness-of-fit point of view. Another outcome of this exercise is that, at least at the 10\% level, $D_0$ is degenerate with the choice of geometry and source distribution, in addition to the already well-known degeneracy with the diffusive halo height $H$. \end{itemize} In conclusion, we found that the main uncertainties in inferring diffusion parameters from B/C (and, we expect, from other secondary-to-primary ratios too) depend on theoretical priors on sources (linked to sites and mechanisms of acceleration!) and, to a lesser extent, on nuclear cross-sections. While exploring more complicated schemes and geometries for the diffusion may thus be important, we can anticipate that sensitivity to such effects will probably require fixing more mundane questions first! 
A multi-messenger strategy, coupled to a new measurement campaign of nuclear cross-sections, appears to be a next crucial step in that direction. \vskip 1.0cm \begin{acknowledgements} We would like to thank David Maurin for sharing useful data, notably Webber 2003 production cross-sections and expertise on cross-sections. Part of this work was supported by the French \emph{Institut universitaire de France}, by the French \emph{Agence Nationale de la Recherche} under contract 12-BS05-0006 DMAstroLHC, and by the \emph{Investissements d'avenir}, Labex ENIGMASS. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction}\label{sec:intro} Throughout this paper, let $p$ be an odd prime. Denote by $\mathbb{F}_p$ a finite field with $p$ elements. An $[n, \kappa, l\,]$ linear code $C$ over $\mathbb{F}_p$ is a $\kappa$-dimensional subspace of $\mathbb{F}_p^n$ with minimum distance $l$. Moreover, the code is cyclic if $(c_{n-1},c_0,\cdots,c_{n-2})\in C$ whenever $(c_0,c_1,\cdots,c_{n-1})\in C$. Any cyclic code $C$ of length $n$ over $\mathbb{F}_p$ can be viewed as an ideal of $\mathbb{F}_p[x]/(x^n-1)$. Therefore, $C=\left\langle g(x)\right\rangle $, where $g(x)$ is the monic polynomial of lowest degree in $C$, which divides $x^n-1$. Then $g(x)$ is called the generator polynomial and $h(x)=(x^n-1)/g(x)$ is called the parity-check polynomial \cite{macwilliams1977theory}. Let $A_i$ denote the number of codewords with Hamming weight $i$ in a linear code $C$ of length $n$. The weight enumerator of $C$ is defined by $$A_0+A_1x+A_2x^2+\cdots+A_nx^n,$$ where $A_0=1$. The sequence $(A_0,A_1,A_2,\cdots,A_n)$ is called the weight distribution of the code $C$. Cyclic codes have found wide applications in cryptography, error correction, association schemes and network coding due to their efficient encoding and decoding algorithms. However, there are still many open problems in coding theory (for details see \cite{charpin1998open,Ding2014binary,macwilliams1977theory}). It is an interesting subject to study the weight distribution of a linear code. Firstly, the error-correcting capability of a code can be read off its weight distribution: the minimum distance $l$ is the smallest positive integer $i$ such that $A_i>0$. Secondly, the weight distribution of a cyclic code is closely related to the lower bound on the cardinality of a set of nonintersecting linear codes, which can be applied to prove the existence of resilient functions with high nonlinearity (see Theorem 4 of \cite{Johansson2003construction}). 
Finally, cyclic codes with few weights have found interesting applications in cryptography \cite{carlet2005linear,YD2006}. The weight distribution is thus the basis for computing the error probabilities of error detection and correction, and it is a primary tool for studying the structure of a code and for finding new good codes. We refer the reader to \cite{Ding2013cyclotomic} and \cite{Ding2014binary} by Ding \emph{et al.} for details on constructing optimal or almost optimal cyclic codes, in the sense that they meet certain bounds on linear codes. In recent years, much attention has been paid to determining the weight distributions of cyclic codes, although this is usually an extremely difficult problem, and they are known only in a few special cases. For example, the authors in \cite{ding2009weight,ding2013hamming,McEliece1975irre} studied the weight distributions of irreducible cyclic codes. For reducible cyclic codes, the authors in \cite{feng2007value,li2014hamming,luo2008cyclic,luo2008weight,zhou2014class} settled the weight distributions of cyclic codes whose duals have two zeros. The authors of \cite{zeng2010weight,zheng2014weight,zheng2013weight,Zhou2013fiveweight} dealt with a few classes of cyclic codes whose duals have three zeros. For cyclic codes whose duals have arbitrary zeros, see \cite{liYue2014weight} or \cite{yangjing2013arbitrary}, for example. Let $m$ and $k$ be two positive integers with $m>k$. From now on, we denote by $\alpha$ a primitive element of $\mathbb{F}_{p^m}$. Let $h_1(x)$ and $h_2(x)$ be the minimal polynomials of $\alpha^{-(p^k+1)}$ and $\alpha^{-1}$ over $\mathbb{F}_p$, respectively. Obviously, $h_1(x)$ and $h_2(x)$ are distinct, and $\mathrm{deg}(h_2(x))=m$. Moreover, it can be easily shown that $\mathrm{deg}(h_1(x))=m/2$ if $m=2k$ and $m$ otherwise. 
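The degree claim for $h_1(x)$ admits a quick computational spot-check: the degree of the minimal polynomial of $\alpha^{j}$ over $\mathbb{F}_p$ equals the size of the $p$-cyclotomic coset of $j$ modulo $p^m-1$, here with $j=-(p^k+1)$. The following sketch (our own illustration, not part of the argument) confirms the stated degrees for small parameters:

```python
# Degree of the minimal polynomial of alpha^j over F_p = size of the
# p-cyclotomic coset {j, j*p, j*p^2, ...} modulo n = p^m - 1.
def coset_size(j, p, n):
    j %= n
    seen, cur = set(), j
    while cur not in seen:
        seen.add(cur)
        cur = (cur * p) % n
    return len(seen)

for p in (3, 5):
    for m in range(2, 7):
        for k in range(1, m):          # the paper assumes m > k
            deg = coset_size(-(p**k + 1), p, p**m - 1)
            expected = m // 2 if m == 2 * k else m
            assert deg == expected, (p, m, k, deg)
print("ok")
```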
Let $C_1$ and $C_2$ be two cyclic codes over $\mathbb{F}_p$ of length $n=p^m-1$ with parity-check polynomials $h_1(x)h_2(x)$ and $(x-1)h_1(x)$, respectively. Hence, the dimensions of $C_1$ and $C_2$ over $\mathbb{F}_p$ are $3m/2$ and ${m}/{2}+1$, respectively, if $m=2k$; otherwise, the dimensions of $C_1$ and $C_2$ are $2m$ and $m+1$, respectively. Let $d=\mathrm{gcd}(k,m)$ denote the greatest common divisor of $k$ and $m$, and set $s={m}/{d}$. Note that the cyclic code $C_1$ was defined by Carlet, Ding and Yuan in \cite{carlet2005linear}, where a tight lower bound on its minimum distance was also determined. Later, the authors in \cite{yuan2006weight} established the weight distribution of $C_1$ for odd $s$ (see also \cite{feng2007value,lichao2009covering}). However, to the best of our knowledge, nothing is known about the weight distribution of $C_1$ in the case of even $s$. In this paper, we explicitly determine the weight distribution of the code $C_1$ for even $s$ and the weight distribution of the code $C_2$. Furthermore, the results show that both $C_1$ and $C_2$ are cyclic codes with few weights: the number of nonzero weights of these codes is no more than five. This means that the two classes of cyclic codes may be of use in cryptography \cite{mceliece1978public} and secret sharing schemes \cite{carlet2005linear}. The remainder of this paper is organized as follows. In Section \ref{sec:Preli}, we introduce some definitions and results on quadratic forms and exponential sums. Section \ref{sec:C1} investigates the weight distribution of the code $C_1$ for even $s$. Section \ref{sec:C2} studies the weight distribution of the code $C_2$. Section \ref{sec:conclusion} concludes this paper and makes some remarks on this topic. \section{Preliminaries}\label{sec:Preli} We follow the notation of Section \ref{sec:intro}. Let $q$ be a power of $p$ and $t$ be a positive integer. 
By identifying the finite field $\mathbb{F}_{q^t}$ with a $t$-dimensional vector space $\mathbb{F}^t_{q}$ over $\mathbb{F}_{q}$, a function $f(x)$ from $\mathbb{F}_{q^t}$ to $\mathbb{F}_{q}$ can be regarded as a $t$-variable polynomial over $\mathbb{F}_{q}$. The function $f(x)$ is called a quadratic form if it can be written as a homogeneous polynomial of degree two on $\mathbb{F}^t_{q}$ as follows: $$f(x_1,x_2,\cdots,x_t)=\sum_{1\leqslant i \leqslant j\leqslant t}a_{ij}x_ix_j,~~a_{ij}\in \mathbb{F}_{q}.$$ Here we fix a basis of $\mathbb{F}_{q^t}$ over $\mathbb{F}_{q}$ and identify each $x\in \mathbb{F}_{q^t}$ with a vector $(x_1,x_2,\cdots,x_t)\in\mathbb{F}^t_{q}$. The rank of the quadratic form $f(x)$, denoted rank$(f)$, is defined as the codimension of the $\mathbb{F}_{q}$-vector space $$W=\{x\in \mathbb{F}_{q^t}|f(x+z)-f(x)-f(z)=0, ~~for~~all~~z\in \mathbb{F}_{q^t}\}.$$ Then $|W|=q^{t-\mathrm{rank}(f)}$. For a quadratic form $f(x)$ in $t$ variables over $\mathbb{F}_q$, there exists a symmetric matrix $A$ of order $t$ over $\mathbb{F}_q$ such that $f(x)=XAX'$, where $X=(x_1,x_2,\cdots,x_t)\in \mathbb{F}^t_q$ and $X'$ denotes the transpose of $X$. It is known that there exists a nonsingular matrix $B$ over $\mathbb{F}_q$ such that $BAB'$ is a diagonal matrix. Making the nonsingular linear substitution $X=YB$ with $Y=(y_1,y_2,\cdots,y_t)\in \mathbb{F}^t_q$, we have $$f(x)=Y(BAB')Y'=\sum^r_{i=1}a_iy^2_i,\,\,\,a_i\in \mathbb{F}^*_q,$$ where $r$ is the rank of $f(x)$. The determinant $\mathrm{det}(f)$ of $f(x)$ is defined to be the determinant of $A$, and $f(x)$ is said to be nondegenerate if $\mathrm{det}(f)\neq0$. The lemmas introduced below will be of use in the sequel. \begin{lemma}(See Theorems 5.15 and 5.33 of \cite{lidl1983finite})\label{lm: solu of simple quadra form} Let $\mathbb{F}_{p^t}$ be a finite field with $p^t$ elements and $\eta_t$ be the multiplicative quadratic character of $\mathbb{F}_{p^t}$. 
For $a\in\mathbb{F}^*_{p^t}$, \begin{eqnarray*} \sum_{x\in \mathbb{F}_{p^t}}\zeta^{\mathrm{Tr}^t_1(ax^2)}_p=\eta_t(a)(-1)^{t-1}(\sqrt{-1})^{\frac{t}{4}(p-1)^2}p^{\frac{t}{2}}, \end{eqnarray*}where $\zeta_p=e^{2\pi\sqrt{-1}/p}$ and $\mathrm{Tr}^t_1$ is a trace function from $\mathbb{F}_{p^t}$ to $\mathbb{F}_{p}$ defined by $$\mathrm{Tr}^t_1(x)=\sum^{t-1}_{i=0}x^{p^i},~~x\in \mathbb{F}_{p^t}.$$ \end{lemma} \begin{lemma}(See Theorems 6.26 and 6.27 of \cite{lidl1983finite})\label{lm: solution of quadra form} Let $f$ be a nondegenerate quadratic form over $\mathbb{F}_q$, $q=p^t$ for odd prime $p$, in $l$ variables. Define a function $\upsilon(\cdot)$ over $\mathbb{F}_q$ by $\upsilon(0)=q-1$ and $\upsilon(\rho)=-1$ for $\rho\in\mathbb{F}^*_q$. Then for $b\in\mathbb{F}_q$ the number of solutions of the equation $f(x_1,\cdots,x_l)=b$ is \begin{eqnarray*} \left\{\begin{array}{lll}q^{l-1}+\upsilon(b)q^{\frac{l-2}{2}}\eta_t\left((-1)^\frac{l}{2}\mathrm{det}(f)\right), &&if~~ l~~ is~~ even,\\ q^{l-1}+q^{\frac{l-1}{2}}\eta_t\left((-1)^\frac{l-1}{2}b~ \mathrm{det}(f)\right), &&if~~ l~~ is ~~odd,\\ \end{array} \right. \end{eqnarray*} where $\eta_t$ is the quadratic character of $\mathbb{F}_q$. \end{lemma} For convenience, we abbreviate the trace function $\mathrm{Tr}^m_1$ as $\mathrm{Tr}$ in the sequel. We will require the following lemma whose proof can be found in \cite{coulter1998explicit,draper2007explicit,yu2014weight}. \begin{lemma}\label{lm: exponentialsums} Let $S(a)=\sum_{x\in \mathbb{F}_{p^m}}\zeta_p^{\mathrm{Tr}(ax^{p^k+1})}$ and $d=\mathrm{gcd}(k,m)$. Let $\upsilon_2(\cdot)$ denote the 2-adic order function. 
Then $Q(x)=\mathrm{Tr}(ax^{p^k+1})$ is a quadratic form and for any $a\in \mathbb{F}^*_{p^m}$,\\ \textcircled{1} If $\upsilon_2(m)\leqslant \upsilon_2(k)$, then $\mathrm{rank}(Q(x))=m$ and \begin{eqnarray*} S(a)=\left\{\begin{array}{lll}~~\sqrt{(-1)^{\frac{p^d-1}{2}}}~p^{\frac{m}{2}}, &&\frac{p^m-1}{2}~~times,\\ -\sqrt{(-1)^{\frac{p^d-1}{2}}}~p^{\frac{m}{2}}, &&\frac{p^m-1}{2}~~times.\\ \end{array} \right. \end{eqnarray*} \textcircled{2} If $\upsilon_2(m)=\upsilon_2(k)+1$, then $\mathrm{rank}(Q(x))=m$ or $m-2d$ and \begin{eqnarray*} S(a)=\left\{\begin{array}{lll}-p^{\frac{m}{2}}, &&~~~\frac{p^d(p^m-1)}{p^d+1}~~times,\\ ~~p^{\frac{m}{2}+d}, &&~~~\frac{p^m-1}{p^d+1}~~~~~~times.\\ \end{array} \right. \end{eqnarray*} \textcircled{3} If $\upsilon_2(m)>\upsilon_2(k)+1$, then $\mathrm{rank}(Q(x))=m$ or $m-2d$ and \begin{eqnarray*} S(a)=\left\{\begin{array}{lll}~~p^{\frac{m}{2}}, &&~~\frac{p^d(p^m-1)}{p^d+1}~~times,\\ -p^{\frac{m}{2}+d}, &&~~\frac{p^m-1}{p^d+1}~~~~~~times.\\ \end{array} \right. \end{eqnarray*} \end{lemma} \begin{remark} The value of $S(a)$ and its frequency can be easily obtained from Corollary 7.6 of \cite{draper2007explicit} and the rank of $Q(x)$ can be deduced immediately from the value of $S(a)$. We mention that Lemma \ref{lm: exponentialsums} plays an important role in calculating the weight distributions of the cyclic codes $C_1$ and $C_2$ in the sequel. \end{remark} For later use, we define \begin{eqnarray} R_i=\{a\in \mathbb{F}^*_{p^m}\big|~ \mathrm{rank}(Q(x))=m-2di\}, ~~i\in \{0,1\}. \end{eqnarray} From Lemma \ref{lm: exponentialsums}, for $\upsilon_2(m)\leqslant \upsilon_2(k)$, we have \begin{eqnarray*} S(a)=\sqrt{(-1)^{\frac{p^d-1}{2}}}~\theta_0p^{\frac{m}{2}}, ~~\theta_0\in \{\pm1\}, \end{eqnarray*} and for $\upsilon_2(m)\geqslant \upsilon_2(k)+1$ with $i\in \{0,1\}$, \begin{eqnarray*} S(a)=\theta_ip^{\frac{m+2di}{2}}, ~~\theta_i\in \{\pm1\}. 
\end{eqnarray*} Two subsets $R_{i,j}$ of $R_i$ for $i\in \{0,1\}$ are defined as \begin{eqnarray}\label{Rij} R_{i,j}=\{a\in R_i\big|~\theta_i=j\},~~ j=\pm1. \end{eqnarray} Then, the value of each $|R_{i}|$ and $|R_{i,j}|$ can be computed by Lemma \ref{lm: exponentialsums}. Let $r=\mathrm{rank}(Q(x))$. By making a nonsingular linear substitution to $Q(x)$ and using Lemma \ref{lm: solu of simple quadra form}, we have \begin{eqnarray}\label{eq:S(a)} S(a)&=&\sum_{x\in \mathbb{F}_{p^m}}\zeta_p^{\mathrm{Tr}(ax^{p^k+1})} =\sum_{x_1,\cdots,x_m\in \mathbb{F}_{p}}\zeta_p^{a_1x^2_1+\cdots+a_rx^2_r}\nonumber \\ &=&\eta\left(\prod^r_{i=1}a_i\right)(\sqrt{-1})^{\frac{r}{4}(p-1)^2}p^{\frac{r}{2}}p^{m-r}\nonumber \\ &=&\eta\left(\prod^r_{i=1}a_i\right)(\sqrt{-1})^{\frac{r}{4}(p-1)^2}p^{m-\frac{r}{2}}, \end{eqnarray} where $a_i\in \mathbb{F}^*_p$ for $i=1,\cdots,r$ and $\eta$ is the quadratic character of $\mathbb{F}_p$. In the sequel, we define $\Delta_i=\prod_{j=1}^{m-2di}a_j$ for $i\in \{0,1\}$. The following property will be needed to determine the weight distributions of the cyclic codes. \begin{lemma}\label{lm:type of qua form} With notation as before. For $i\in \{0,1\}$ and $j=\pm1$, we have \begin{eqnarray}\label{eq:eta2} \eta\left((-1)^{[\frac{m-2di}{2}]}\Delta_i\right)=j~~ occurring~~ |R_{i,j}|~~ times, \end{eqnarray} where $[x]$ denotes the largest integer that is less than or equal to $x$. \end{lemma} \begin{proof} We only give the proof for the case $\upsilon_2(m)\leqslant \upsilon_2(k)$, since the other cases can be proved in a similar way. We assume that $\upsilon_2(m)\leqslant \upsilon_2(k)$ for the rest of the proof. Thus, we only need to prove the desired conclusion in the case of $i=0$, since $r=m$. The discussion in this case is divided into the following subcases. 
If $\upsilon_2(m)\geqslant1$, then \begin{eqnarray*} \eta\left((-1)^{[\frac{m}{2}]}\Delta_0\right)=\eta\left((-1)^{\frac{m}{2}}\right)\eta(\Delta_0) =(\sqrt{-1})^{\frac{m}{4}(p-1)^2}\eta(\Delta_0), \end{eqnarray*} which is equal to the coefficient of $p^{m\!-\!\frac{r}{2}}$ in Equation \eqref{eq:S(a)} for $r=m$. Since $\sqrt{(-1)^{\frac{p^d-1}{2}}}=1$, the desired assertion holds for this subcase by Lemma \ref{lm: exponentialsums}. If $\upsilon_2(m)=0$, then \begin{eqnarray}\label{eq:1 or -1} (-1)^{[\frac{m}{2}]}=(-1)^{\frac{m-1}{2}}=\left\{\begin{array}{lll}~~1,&&~~if~~ m\equiv1\mod4,\\ -1,&&~~if~~ m\equiv3\mod4.\\ \end{array} \right. \end{eqnarray} Recall that $p$ is an odd prime. If $p\equiv1\mod4$, then $-1$ is a quadratic residue over $\mathbb{F}_p$. Therefore, \begin{eqnarray*} \eta\left((-1)^{[\frac{m}{2}]}\Delta_0\right) =\eta(\Delta_0)=(\sqrt{-1})^{\frac{m}{4}(p-1)^2}\eta(\Delta_0), \end{eqnarray*} which is also equal to the coefficient of $p^{\frac{m}{2}}$ in Equation \eqref{eq:S(a)}. Note that $\sqrt{(-1)^{\frac{p^d-1}{2}}}=1$. Hence, the desired assertion holds for this subcase. If $p\equiv3\mod4$, then $-1$ is a quadratic nonresidue over $\mathbb{F}_p$. By \eqref{eq:1 or -1}, we have \begin{eqnarray*} \eta\left((-1)^{[\frac{m}{2}]}\Delta_0\right) =\left\{\begin{array}{lll}~~\eta(\Delta_0),&&~~if~~ m\equiv1\mod4,\\ -\eta(\Delta_0),&&~~if~~ m\equiv3\mod4.\\ \end{array} \right. \end{eqnarray*} Note that $(\sqrt{-1})^{\frac{m}{4}(p-1)^2}$ equals $\sqrt{-1}$ if $ m\equiv1\mod4$, and $-\sqrt{-1}$ if $ m\equiv3\mod4$. This implies that $\eta\left((-1)^{[\frac{m}{2}]}\Delta_0\right)$ is equal to the coefficient of $\sqrt{-1}p^{\frac{m}{2}}$. Since $\sqrt{(-1)^{\frac{p^d-1}{2}}}=\sqrt{-1}$, the desired assertion holds for this subcase. \hfill\space$\qed$ \end{proof} \section{The weight distribution of the code $C_1$}\label{sec:C1} We now focus on the weight distribution of the code $C_1$ as described in Section \ref{sec:intro}. 
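Before using Lemma \ref{lm: exponentialsums} in the counting arguments of this section, its value distribution can be verified numerically in the smallest even-$s$ instance $p=3$, $m=2$, $k=1$ (so $d=1$ and $\upsilon_2(m)=\upsilon_2(k)+1$): case \textcircled{2} of the lemma predicts $S(a)=-3$ six times and $S(a)=9$ twice. The sketch below is our own illustration; it realizes $\mathbb{F}_9$ as $\mathbb{F}_3[t]/(t^2-2)$.

```python
import cmath
from collections import Counter

p = 3
zeta = cmath.exp(2j * cmath.pi / p)

# F_9 = F_3[t]/(t^2 - 2): an element (a, b) stands for a + b*t
def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + 2 * b * d) % p, (a * d + b * c) % p)

def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

def trace(x):            # Tr(x) = x + x^3, with values in F_3
    return (x[0] + power(x, 3)[0]) % p

field = [(a, b) for a in range(p) for b in range(p)]

def S(a):                # S(a) = sum over x in F_9 of zeta_3^{Tr(a x^4)}
    return sum(zeta ** trace(mul(a, power(x, 4))) for x in field)

# Lemma (case 2) with d = 1 predicts -3 six times and +9 twice
values = Counter(round(S(a).real) for a in field if a != (0, 0))
assert values == Counter({-3: 6, 9: 2})
print(dict(values))
```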
It follows from Delsarte's Theorem \cite{delsarte1975subfield} that $$C_1=\{\mathsf{c}_1(a,b):a,b\in \mathbb{F}_{p^m}\},$$ where $\mathsf{c}_1(a,b)=(\text{Tr}(ax^{p^k+1}+bx))_{x\in \mathbb{F}^*_{p^m}}$. Let $N_{a,b}(0)$ be the number of solutions $x\in \mathbb{F}_{p^m}$ of the equation \begin{eqnarray}\label{N1:0} \text{Tr}(ax^{p^k+1}+bx)=0, \end{eqnarray} as $(a,b)$ runs through $\mathbb{F}^2_{p^m}$. For a given basis $\{\alpha_1,\alpha_2,\dots,\alpha_m\}$ of $\mathbb{F}_{p^m}$ over $\mathbb{F}_{p}$, each $x\in \mathbb{F}_{p^m}$ can be uniquely expressed as $x=\sum^m_{i=1}x_i\alpha_i$ with $x_i\in\mathbb{F}_{p}$. Therefore, by making a nonsingular linear substitution as introduced in Section \ref{sec:Preli}, Equation \eqref{N1:0} becomes \begin{eqnarray}\label{N1:0 2} \sum^m_{i=1}a_ix^2_i+\sum^m_{i=1}b_ix_i=0, \end{eqnarray} where $a_i,b_i\in\mathbb{F}_{p}$. Hence, $N_{a,b}(0)$ also represents the number of $(x_1,x_2,\dots,x_m)\in \mathbb{F}^m_{p}$ satisfying \eqref{N1:0 2}. Recall that $d=\mathrm{gcd}(k,m)$ and $s={m}/{d}$. Note that $s$ is odd if and only if $\upsilon_2(m)\leqslant \upsilon_2(k)$, and $s$ is even if and only if $\upsilon_2(m)\geqslant \upsilon_2(k)+1$. For the case of $s$ being odd, the references \cite{feng2007value,lichao2009covering,yuan2006weight} have given the weight distribution of $C_1$ independently. In the following, we establish the weight distribution of $C_1$ for even $s$. \begin{theorem}\label{thm:code a bx} With notation given before. 
If $\upsilon_2(m)\geqslant \upsilon_2(k)+1$ and $m\neq2k$, then $C_1$ is a cyclic code over $\mathbb{F}_p$ with parameters $[p^m-1,2m]$ and \\ \textcircled{1} If $\upsilon_2(m)=\upsilon_2(k)+1$, the weight distribution of $C_1$ is given as follows: \begin{eqnarray}\label{W1:31 m not equal 2k} \left\{\begin{array}{l}A_0=1,\\A_{(p-1)p^{m-1}}=(p^m-1)(1+p^{m-d}-p^{m-2d}), \\ A_{(p-1)(p^{m-1}+p^{\frac{m-2}{2}})}=(p^{m-1}-(p-1)p^{\frac{m-2}{2}})\frac{p^d(p^m-1)}{p^d+1}, \\ A_{(p-1)p^{m-1}-p^{\frac{m-2}{2}}}=(p-1)(p^{m-1}+p^{\frac{m-2}{2}})\frac{p^d(p^m-1)}{p^d+1}, \\ A_{(p-1)(p^{m-1}-p^{\frac{m+2d-2}{2}})}=(p^{m-2d-1}+(p-1)p^{\frac{m-2d-2}{2}})\frac{p^m-1}{p^d+1}, \\ A_{(p-1)p^{m-1}+p^{\frac{m+2d-2}{2}}}=(p-1)(p^{m-2d-1}-p^{\frac{m-2d-2}{2}})\frac{p^m-1}{p^d+1}.\\ \end{array} \right. \end{eqnarray} \textcircled{2} If $\upsilon_2(m)>\upsilon_2(k)+1$, the weight distribution of $C_1$ is given as follows: \begin{eqnarray}\label{W1:4 v2(m)>v2(k)+1} \left\{\begin{array}{l}A_0=1,\\A_{(p-1)p^{m-1}}=(p^m-1)(1+p^{m-d}-p^{m-2d}), \\ A_{(p-1)(p^{m-1}-p^{\frac{m-2}{2}})}=(p^{m-1}+(p-1)p^{\frac{m-2}{2}})\frac{p^d(p^m-1)}{p^d+1}, \\ A_{(p-1)p^{m-1}+p^{\frac{m-2}{2}}}=(p-1)(p^{m-1}-p^{\frac{m-2}{2}})\frac{p^d(p^m-1)}{p^d+1}, \\ A_{(p-1)(p^{m-1}+p^{\frac{m+2d-2}{2}})}=(p^{m-2d-1}-(p-1)p^{\frac{m-2d-2}{2}})\frac{p^m-1}{p^d+1}, \\ A_{(p-1)p^{m-1}-p^{\frac{m+2d-2}{2}}}=(p-1)(p^{m-2d-1}+p^{\frac{m-2d-2}{2}})\frac{p^m-1}{p^d+1}. \end{array} \right. \end{eqnarray} \end{theorem} \begin{proof} From the definition of $C_1$, we know that $C_1$ has length $p^m-1$ and dimension $2m$. The Hamming weight of every codeword $\mathsf{c}_1(a,b)$ can be determined by \begin{eqnarray}\label{W1:1234} \mathrm{wt}(\mathsf{c}_1(a,b))&=&p^m-1-\#\{x\in \mathbb{F}^*_{p^m}\big|~\text{Tr}(ax^{p^k+1}+bx)=0\}\nonumber\\ &=&p^m-\#\{x\in \mathbb{F}_{p^m}\big|~\text{Tr}(ax^{p^k+1}+bx)=0\}\nonumber\\ &=&p^m-N_{a,b}(0). \end{eqnarray} It suffices to study the value distribution of $N_{a,b}(0)$. 
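As an aside, the two claimed distributions can be checked for internal consistency: together with $A_0=1$, their frequencies must sum to $p^{2m}$, the number of pairs $(a,b)$. The sketch below is our own consistency test, not part of the proof; the sign parameter \texttt{eps} encodes case \textcircled{1} ($+1$) versus case \textcircled{2} ($-1$), and the computed frequencies also reproduce the Magma counts in the examples below.

```python
from math import gcd

# Frequencies of the two distributions; eps = +1 is case 1, eps = -1 is case 2.
def c1_distribution(p, m, k, eps):
    d = gcd(k, m)
    h = p**m - 1
    c, e = p**d * h // (p**d + 1), h // (p**d + 1)
    q1, q2 = p**((m - 2) // 2), p**((m + 2*d - 2) // 2)
    r1 = p**((m - 2*d - 2) // 2)
    return {
        (p-1) * p**(m-1): h * (1 + p**(m-d) - p**(m-2*d)),
        (p-1) * (p**(m-1) + eps*q1): (p**(m-1) - eps*(p-1)*q1) * c,
        (p-1) * p**(m-1) - eps*q1: (p-1) * (p**(m-1) + eps*q1) * c,
        (p-1) * (p**(m-1) - eps*q2): (p**(m-2*d-1) + eps*(p-1)*r1) * e,
        (p-1) * p**(m-1) + eps*q2: (p-1) * (p**(m-2*d-1) - eps*r1) * e,
    }

for (p, m, k, eps) in [(3, 6, 1, +1), (5, 4, 1, -1)]:
    dist = c1_distribution(p, m, k, eps)
    assert 1 + sum(dist.values()) == p**(2*m)   # all p^{2m} codewords accounted for
print("ok")
```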
So, we calculate the weight distribution of the code $C_1$ in the following cases. \textcircled{1} $\upsilon_2(m)=\upsilon_2(k)+1$ and $m\neq2k$. The value of $N_{a,b}(0)$ will be calculated according to the choice of the parameter $a$. \emph{Case 1:} $a=0$. In this case, if $b=0$ then $N_{a,b}(0)=p^m$ occurring only once, and if $b \neq 0$ then $N_{a,b}(0)=p^{m-1}$ occurring $p^m-1$ times. \emph{Case 2:} $a\in R_0$. In this case, rank$(Q(x))=m$ and consequently every coefficient $a_i$ in \eqref{N1:0 2} is nonzero. For $1\leqslant i\leqslant m,$ let $x_i=y_i-\frac{b_i}{2a_i},$ then \eqref{N1:0 2} is equivalent to $\sum^m_{i=1} a_iy^2_i=\sum ^m_{i=1}\frac{b^2_i}{4a_i}$. It then follows from Lemma \ref{lm: solution of quadra form} that \begin{eqnarray}\label{eq13:N0}N_{a,b}(0)=p^{m-1}+\upsilon\left(\sum^m_{i=1}\frac{b^2_i}{4a_i}\right)p^{\frac{m-2}{2}}\eta((-1)^{\frac m2}\Delta_0). \end{eqnarray} Notice that the tuple $(b_1,\dots,b_m)$ runs through $\mathbb{F}^m_{p}$ as $b$ runs through $\mathbb{F}_{p^m}$. We can regard $\sum ^m_{i=1}\frac{b^2_i}{4a_i}$ as a quadratic form in $m$ variables $b_i$ for $1\leqslant i\leqslant m$. Again by Lemma \ref{lm: solution of quadra form}, as $b$ runs through $\mathbb{F}_{p^m}$, we obtain \begin{equation}\label{eq13:bi} \sum^m_{i=1}\frac{b^2_i}{4a_i}=\beta~~ occurring~~ p^{m-1}+\upsilon(\beta)p^{\frac{m-2}{2}}\eta((-1)^{\frac m2}\Delta_0)~~times, \end{equation} for each $\beta\in \mathbb{F}_p$, since $\eta((4^m\Delta_0)^{-1})=\eta(\Delta_0)$. From Lemmas \ref{lm: exponentialsums} and \ref{lm:type of qua form}, we have $\eta((-1)^{\frac m2}\Delta_0)=-1$ in this case. Therefore, by \eqref{eq13:N0} and \eqref{eq13:bi}, we find that \begin{eqnarray*} N_{a,b}(0)=\left\{\begin{array}{l}\!p^{m-1}\!-\!(p\!-\!1)p^{\frac{m-2}{2}}\\ ~~~~~~~~~occurring~~(p^{m-1}\!-\!(p\!-\!1)p^{\frac{m-2}{2}}) |R_{0,-1}|~~times,\\ \!p^{m-1}\!+\!p^{\frac{m-2}{2}}\\ ~~~~~~~~~occurring~~(p-1)(p^{m-1}\!+\!p^{\frac{m-2}{2}})|R_{0,-1}|~times.\\ \end{array} \right. 
\end{eqnarray*} \emph{Case 3:} $a\in R_1$. In this case, rank$(Q(x))=m-2d$ by Lemma \ref{lm: exponentialsums}. And consequently we can assume that the coefficients in \eqref{N1:0 2} satisfy $\prod^{m-2d}_{i=1}a_i\neq0$ and $a_i=0$ for $m-2d<i\leqslant m$. Then \eqref{N1:0 2} is equivalent to $$\sum^{m-2d}_{i=1}a_ix^2_i+\sum^{m}_{i=1}b_ix_i=0.$$ If there exists some $b_i\neq0$ for $m-2d<i\leqslant m,$ we can assume without loss of generality that $b_m\neq0$. Then $N_{a,b}(0)=p^{m-1}$, since we can substitute arbitrary elements of $\mathbb{F}_p$ for $x_1,\cdots,x_{m-1}$ and the value of $x_m$ is then uniquely determined. Furthermore, there are exactly $p^m-p^{m-2d}$ choices for $b$ such that there is at least one $b_i\neq0$ for $m-2d<i\leqslant m,$ as $b$ runs through $\mathbb{F}_{p^m}$. If $b_i=0$ for all $m-2d<i\leqslant m$, then the substitution $x_i=y_i-\frac{b_i}{2a_i}$ for $1\leqslant i\leqslant m-2d$ yields $$\sum^{m-2d}_{i=1}a_iy^2_i=\sum^{m-2d}_{i=1}\frac{b^2_i}{4a_i}.$$ Notice that $m-2d$ is even. By Lemmas \ref{lm: solution of quadra form}, \ref{lm: exponentialsums} and \ref{lm:type of qua form}, we obtain \begin{eqnarray*} N_{a,b}(0)&=&p^{2d}\left(p^{m-2d-1}+\upsilon\left(\sum^{m-2d}_{i=1}\frac{b^2_i}{4a_i}\right)p^{\frac{m-2d-2}{2}}\eta((-1)^{\frac {m-2d}2}\Delta_1)\right)\\ &=&p^{m-1}+\upsilon\left(\sum^{m-2d}_{i=1}\frac{b^2_i}{4a_i}\right)p^{\frac{m+2d-2}{2}}\\ &=&\left\{\begin{array}{l}p^{m-1}+(p-1)p^{\frac{m+2d-2}{2}}~\\ ~~~~~~~~~occurring~~(p^{m-2d-1}\!+\!(p\!-\!1)p^{\frac{m-2d-2}{2}})|R_{1,1}|~~times,\\ p^{m-1}-p^{\frac{m+2d-2}{2}}\\ ~~~~~~~~~occurring~~(p-1)(p^{m-2d-1}\!-\!p^{\frac{m-2d-2}{2}})|R_{1,1}|~~times,\\ \end{array} \right. \end{eqnarray*} since in this case, $\eta((-1)^{\frac {m-2d}2}\Delta_1)=1$. By the discussion above, we will get the result for case $\upsilon_2(m)=\upsilon_2(k)+1$ and $m\neq2k$ described in \eqref{W1:31 m not equal 2k}. 
Here we only give the frequencies of the codewords with weight $(p-1)p^{m-1}$ and $(p-1)(p^{m-1}+p^{\frac{m-2}{2}})$. Other cases can be proved in a similar manner. The weight of $\mathsf{c}_1(a,b)$ is equal to $(p-1)p^{m-1}$ if and only if $N_{a,b}(0)=p^{m-1}$. According to the above analysis, the frequency is \begin{eqnarray*}p^m&-&1+(p^m-p^{m-2d})|R_{1,1}|\\ &=&p^m-1+(p^m-p^{m-2d})\frac{p^m-1}{p^d+1}\\ &=&(p^m-1)(1+p^{m-d}-p^{m-2d}).\end{eqnarray*} The weight of $\mathsf{c}_1(a,b)$ is equal to $(p-1)(p^{m-1}+p^{\frac{m-2}{2}})$ if and only if $N_{a,b}(0)=p^{m-1}-(p-1)p^{\frac {m-2}2}$. The frequency is equal to $$(p^{m-1}-(p-1)p^{\frac{m-2}{2}})|R_{0,-1}|=(p^{m-1}-(p-1)p^{\frac{m-2}{2}})\frac{p^d(p^m-1)}{p^d+1}.$$ \textcircled{2}$\upsilon_2(m)>\upsilon_2(k)+1$. The value of $N_{a,b}(0)$ will be computed by distinguishing among the following cases. \emph{Case 1:} $a=0$. In this case, if $b=0$ then $N_{a,b}(0)=p^m$, and this value occurs only once, and if $b \neq 0$ then $N_{a,b}(0)=p^{m-1}$, and this value occurs $p^m-1$ times. \emph{Case 2:} $a\in R_0$. In this case, rank$(Q(x))=m$ by Lemma \ref{lm: exponentialsums} and consequently every coefficient $a_i$ in \eqref{N1:0 2} is nonzero. For $1\leqslant i\leqslant m,$ let $x_i=y_i-\frac{b_i}{2a_i},$ then \eqref{N1:0 2} is equivalent to $\sum^m_{i=1} a_iy^2_i=\sum ^m_{i=1}\frac{b^2_i}{4a_i}$. According to Lemma \ref{lm: solution of quadra form}, we have \begin{eqnarray}\label{eq14:N0}N_{a,b}(0)=p^{m-1}+\upsilon\left(\sum^m_{i=1}\frac{b^2_i}{4a_i}\right)p^{\frac{m-2}{2}}\eta((-1)^{\frac m2}\Delta_0). \end{eqnarray} Note that $\sum ^m_{i=1}\frac{b^2_i}{4a_i}$ can be regarded as a quadratic form in $m$ variables $b_i$ for $1\leqslant i\leqslant m$. 
Again by Lemma \ref{lm: solution of quadra form}, as $b$ runs through $\mathbb{F}_{p^m}$, we obtain \begin{equation}\label{eq14:bi}\sum^m_{i=1}\frac{b^2_i}{4a_i}=\beta~~ occurring~~ p^{m-1}+\upsilon(\beta)p^{\frac{m-2}{2}}\eta((-1)^{\frac m2}\Delta_0)~~ times, \end{equation} for every $\beta\in \mathbb{F}_p$. By Lemmas \ref{lm: exponentialsums} and \ref{lm:type of qua form}, we have $\eta((-1)^{\frac m2}\Delta_0)=1$ in this case. Therefore, combining \eqref{eq14:N0} and \eqref{eq14:bi} gives \begin{eqnarray*} N_{a,b}(0)=\left\{\begin{array}{l}p^{m-1}+(p-1)p^{\frac{m-2}{2}}\\ ~~~~~~~~~occurring~(p^{m-1}+(p-1)p^{\frac{m-2}{2}}) |R_{0,1}|~~times,\\ p^{m-1}-p^{\frac{m-2}{2}}\\ ~~~~~~~~~occurring~~(p-1)(p^{m-1}-p^{\frac{m-2}{2}})|R_{0,1}|~~times.\\ \end{array} \right. \end{eqnarray*} \emph{Case 3:} $a\in R_1$. In this case, rank$(Q(x))=m-2d$ by Lemma \ref{lm: exponentialsums}. Similarly, suppose that the coefficients in \eqref{N1:0 2} satisfy $\prod^{m-2d}_{i=1}a_i\neq0$ and $a_i=0$ for $m-2d<i\leqslant m$. Then \eqref{N1:0 2} is equivalent to $$\sum^{m-2d}_{i=1}a_ix^2_i+\sum^{m}_{i=1}b_ix_i=0.$$ If there exists some $b_i\neq0$ for $m-2d<i\leqslant m,$ then $N_{a,b}(0)=p^{m-1}$ and there are exactly $p^m-p^{m-2d}$ choices for $b$ such that there is at least one $b_i\neq0$ for $m-2d<i\leqslant m,$ as $b$ runs through $\mathbb{F}_{p^m}$. 
If $b_i=0$ for all $m-2d<i\leqslant m$, then the substitution $x_i=y_i-\frac{b_i}{2a_i}$ for $1\leqslant i\leqslant m-2d$ yields $$\sum^{m-2d}_{i=1}a_iy^2_i=\sum^{m-2d}_{i=1}\frac{b^2_i}{4a_i}.$$ It then follows from Lemmas \ref{lm: solution of quadra form}, \ref{lm: exponentialsums} and \ref{lm:type of qua form} that \begin{eqnarray*} N_{a,b}(0)&=&p^{2d}\left(p^{m-2d-1}+\upsilon\left(\sum^{m-2d}_{i=1}\frac{b^2_i}{4a_i}\right)p^{\frac{m-2d-2}{2}}\eta((-1)^{\frac {m-2d}2}\Delta_1)\right)\\ &=&p^{m-1}-\upsilon\left(\sum^{m-2d}_{i=1}\frac{b^2_i}{4a_i}\right)p^{\frac{m+2d-2}{2}}\\ &=&\left\{\begin{array}{l}p^{m-1}-(p-1)p^{\frac{m+2d-2}{2}}\\ ~~~~~~~~~occurring~~(p^{m-2d-1}\!-\!(p\!-\!1)p^{\frac{m-2d-2}{2}})|R_{1,-1}|~~times,\\ p^{m-1}+p^{\frac{m+2d-2}{2}}\\ ~~~~~~~~~occurring~~(p\!-\!1)(p^{m-2d-1}\!+\!p^{\frac{m-2d-2}{2}})|R_{1,-1}|~~times,\\ \end{array} \right. \end{eqnarray*} since $\eta((-1)^{\frac {m-2d}2}\Delta_1)=-1$. Combining all above cases and using Equation \eqref{W1:1234}, we will get the result for case $\upsilon_2(m)>\upsilon_2(k)+1$ described in \eqref{W1:4 v2(m)>v2(k)+1}. Here we give the frequencies of the codewords with weight $(p-1)p^{m-1}$ and $(p-1)(p^{m-1}-p^{\frac{m-2}{2}})$. Other cases can be obtained in a similar manner. The weight of $\mathsf{c}_1(a,b)$ is equal to $(p-1)p^{m-1}$ if and only if $N_{a,b}(0)=p^{m-1}$. By the above argument, we see that the frequency is \begin{eqnarray*}p^m&-&1+(p^m-p^{m-2d})|R_{1,-1}|\\ &=&p^m-1+(p^m-p^{m-2d})\frac{p^m-1}{p^d+1}\\ &=&(p^m-1)(1+p^{m-d}-p^{m-2d}).\end{eqnarray*} The weight of $\mathsf{c}_1(a,b)$ is equal to $(p-1)(p^{m-1}-p^{\frac{m-2}{2}})$ if and only if $N_{a,b}(0)=p^{m-1}+(p-1)p^{\frac {m-2}2}$. The frequency is equal to $$(p^{m-1}+(p-1)p^{\frac{m-2}{2}})|R_{0,1}|=(p^{m-1}+(p-1)p^{\frac{m-2}{2}})\frac{p^d(p^m-1)}{p^d+1}.$$ This completes the whole proof of Theorem \ref{thm:code a bx}. 
\hfill\space$\qed$ \end{proof} \begin{corollary}\label{coro:C1} If $m=2k$, then $C_1$ is a cyclic code over $\mathbb{F}_p$ with parameters $[p^m-1,3m/2]$ and the weight distribution is given as follows: \begin{eqnarray}\label{W1:32 m=2k} \left\{\begin{array}{l}A_0=1,\\A_{(p-1)p^{m-1}}=p^m-1,\\ A_{(p-1)(p^{m-1}+p^{\frac{m-2}{2}})}=(p^{m-1}-(p-1)p^{\frac{m-2}{2}})(p^{\frac{m}{2}}-1), \\ A_{(p-1)p^{m-1}-p^{\frac{m-2}{2}}}=(p-1)(p^{m-1}+p^{\frac{m-2}{2}})(p^{\frac{m}{2}}-1).\\ \end{array} \right. \end{eqnarray} \end{corollary} \begin{proof} Let $K=\{x\in\mathbb{F}_{p^m}\big|~x^{p^k}+x=0\}$. It is easy to check that $\mathsf{c}_1(a,b)=\mathsf{c}_1(a+\delta,b)$ for any $\delta\in K$ and $\mathsf{c}_1(a,b)\in C_1$. Hence, $C_1$ is degenerate with dimension $3m/2$ over $\mathbb{F}_p$. Note that $|K|=p^{\frac{m}{2}}$ and that in this case $\upsilon_2(m)=\upsilon_2(k)+1$. Substituting $d=m/2$ into Equation \eqref{W1:31 m not equal 2k} and dividing each $A_i$ by $p^{\frac{m}{2}}$, we get the result given in \eqref{W1:32 m=2k}. This finishes the proof of Corollary~\ref{coro:C1}. \hfill\space$\qed$ \end{proof} \begin{remark} It should be noted that, for even $s$, the weight distribution of the code $C_1$ is completely determined by Theorem \ref{thm:code a bx} and Corollary \ref{coro:C1}. The results show that $C_1$ is a cyclic code with three or five weights. \end{remark} We give some examples for the code $C_1$ in the case $\upsilon_2(m)\geqslant\upsilon_2(k)+1$, i.e., $s$ even, a case not covered in \cite{feng2007value,lichao2009covering,yuan2006weight}. \begin{example} Let $m=6,k=1,p=3$. This corresponds to the case $\upsilon_2(m)=\upsilon_2(k)+1$ and $m\neq2k$. 
Using Magma, $C_1$ is a [728, 12, 432] cyclic linear code over $\mathbb{F}_3$ with the weight distribution: \begin{eqnarray*} &&A_0=1,A_{432}=6006,A_{477}=275184,A_{486}=118664,\\&&A_{504}=122850,A_{513}=8736, \end{eqnarray*} which verifies the result of Equation \eqref{W1:31 m not equal 2k} in Theorem \ref{thm:code a bx}. \end{example} \begin{example} Let $m=4,k=1,p=5$. This corresponds to the case $\upsilon_2(m)>\upsilon_2(k)+1$. Using Magma, $C_1$ is a [624, 8, 475] cyclic linear code over $\mathbb{F}_5$ with the weight distribution: \begin{eqnarray*} &&A_0=1,A_{475}=2496,A_{480}=75400,A_{500}=63024,\\&& A_{505}=249600,A_{600}=104, \end{eqnarray*}which verifies the result of Equation \eqref{W1:4 v2(m)>v2(k)+1} in Theorem \ref{thm:code a bx}. \end{example} \section{The weight distribution of the code $C_2$}\label{sec:C2} In this section, we will study the weight distribution of the code $C_2$ as described in Section \ref{sec:intro}. By the well-known Delsarte's Theorem \cite{delsarte1975subfield}, we have $$C_2=\{\mathsf{c}_2(a,c):a,c\in \mathbb{F}_{p^m}\},$$ where $\mathsf{c}_2(a,c)=(\text{Tr}(ax^{p^k+1}+c))_{x\in \mathbb{F}^*_{p^m}}$. For any two codewords $\mathsf{c}_2(a_1,c_1)$ and $\mathsf{c}_2(a_2,c_2)$ in $C_2$ given above, it is easy to verify that $\mathsf{c}_2(a_1,c_1)=\mathsf{c}_2(a_2,c_2)$ if and only if $a_1=a_2$ and $\text{Tr}(c_1)=\text{Tr}(c_2)$. Hence, $C_2$ can be expressed as $$C_2=\{\mathsf{c}_2(a,\lambda)=(\text{Tr}(ax^{p^k+1})-\lambda)_{x\in \mathbb{F}^*_{p^m}}:a\in \mathbb{F}_{p^m},\lambda\in\mathbb{F}_{p}\},$$ where $\lambda=-\text{Tr}(c)$. Let $N_{a,\lambda}(0)$ be the number of solutions $x\in \mathbb{F}_{p^m}$ satisfying \begin{eqnarray}\label{N2:0} \text{Tr}(ax^{p^k+1})-\lambda=0, \end{eqnarray} as $(a,\lambda)$ runs through $\mathbb{F}_{p^m}\times\mathbb{F}_{p}$. 
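Since $C_2$ has only $p^{m+1}$ codewords, its weight distribution can also be obtained by direct enumeration for modest parameters. The sketch below is our own illustration for $p=3$, $m=3$, $k=1$, realizing $\mathbb{F}_{27}$ as $\mathbb{F}_3[t]/(t^3-t-1)$ (an irreducible polynomial over $\mathbb{F}_3$); the output can be compared with Theorem \ref{thm:code a c} below.

```python
from collections import Counter

p = 3
# F_27 = F_3[t]/(t^3 - t - 1); (c0, c1, c2) stands for c0 + c1*t + c2*t^2
def mul(x, y):
    raw = [0] * 5
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            raw[i + j] = (raw[i + j] + a * b) % p
    c0, c1, c2, c3, c4 = raw          # reduce via t^3 = t + 1, t^4 = t^2 + t
    return ((c0 + c3) % p, (c1 + c3 + c4) % p, (c2 + c4) % p)

def power(x, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

def trace(x):                          # Tr(x) = x + x^3 + x^9, valued in F_3
    return (x[0] + power(x, 3)[0] + power(x, 9)[0]) % p

field = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
nonzero = [x for x in field if x != (0, 0, 0)]

# codewords (Tr(a x^{p^k+1}) - lam)_{x in F_27^*}, with p^k + 1 = 4
codewords = {
    tuple((trace(mul(a, power(x, 4))) - lam) % p for x in nonzero)
    for a in field for lam in range(p)
}
weights = Counter(sum(1 for c in cw if c != 0) for cw in codewords)
print(len(codewords), dict(sorted(weights.items())))
```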
By making a nonsingular linear substitution as introduced in Section \ref{sec:Preli}, Equation \eqref{N2:0} is equivalent to \begin{eqnarray}\label{N2:0 2} \sum^m_{i=1}a_ix^2_i=\lambda, \end{eqnarray} where $a_i\in\mathbb{F}_{p}$. Thus, $N_{a,\lambda}(0)$ also represents the number of $(x_1,x_2,\dots,x_m)\in \mathbb{F}^m_{p}$ satisfying \eqref{N2:0 2}. In the following, we establish the weight distribution of the code $C_2$ when $(a,\lambda)$ runs through $\mathbb{F}_{p^m}\times\mathbb{F}_{p}$. \begin{theorem}\label{thm:code a c} With notation as above. If $m\neq2k$, then $C_2$ is a cyclic code over $\mathbb{F}_p$ with parameters $[p^m-1,m+1]$ and\\ \textcircled{1} If $0=\upsilon_2(m)\leqslant \upsilon_2(k)$, the weight distribution of $C_2$ is given as follows: \begin{eqnarray}\label{W2:1 v2(m)=0} \left\{\begin{array}{l}A_0=1,\\A_{p^m-1}=p-1,\\ A_{(p-1)p^{m-1}}=p^m-1,\\ A_{(p-1)p^{m-1}-p^{\frac{m-1}2}-1}=\frac{p-1}{2}(p^m-1),\\ A_{(p-1)p^{m-1}+p^{\frac{m-1}2}-1}=\frac{p-1}{2}(p^m-1).\\ \end{array} \right. \end{eqnarray} \textcircled{2} If $1\leqslant \upsilon_2(m)\leqslant \upsilon_2(k)$, the weight distribution of $C_2$ is given as follows: \begin{eqnarray}\label{W2:2 1<=v2(m)} \left\{\begin{array}{l}A_0=1,\\A_{p^m-1}=p-1,\\ A_{(p-1)p^{m-1}-p^{\frac{m-2}2}-1}=\frac{p-1}{2}(p^m-1),\\ A_{(p-1)p^{m-1}+p^{\frac{m-2}2}-1}=\frac{p-1}{2}(p^m-1), \\ A_{(p-1)(p^{m-1}-p^{\frac{m-2}2})}=\frac{1}{2}(p^m-1), \\ A_{(p-1)(p^{m-1}+p^{\frac{m-2}2})}=\frac{1}{2}(p^m-1). \end{array} \right. \end{eqnarray} \textcircled{3} If $\upsilon_2(m)=\upsilon_2(k)+1$, the weight distribution of $C_2$ is given as follows: \begin{eqnarray}\label{W2:31 m not equal 2k} \left\{\begin{array}{l}A_0=1,\\A_{p^m-1}=p-1,\\ A_{(p-1)(p^{m-1}+p^{\frac{m-2}2})}=\frac{p^d(p^m-1)}{p^d+1},\\ A_{(p-1)p^{m-1}-p^{\frac{m-2}2}-1}=\frac{p^d(p-1)(p^m-1)}{p^d+1},\\ A_{(p-1)(p^{m-1}-p^{\frac{m+2d-2}2})}=\frac{p^m-1}{p^d+1},\\ A_{(p-1)p^{m-1}+p^{\frac{m+2d-2}2}-1}=\frac{(p-1)(p^m-1)}{p^d+1}. \end{array} \right. 
\end{eqnarray} \textcircled{4} If $\upsilon_2(m)>\upsilon_2(k)+1$, the weight distribution of $C_2$ is given as follows: \begin{eqnarray}\label{W2:4 v2(m)>v2(k)+1} \left\{\begin{array}{l}A_0=1,\\A_{p^m-1}=p-1,\\ A_{(p-1)(p^{m-1}-p^{\frac{m-2}2})}=\frac{p^d(p^m-1)}{p^d+1},\\ A_{(p-1)p^{m-1}+p^{\frac{m-2}2}-1}=\frac{p^d(p-1)(p^m-1)}{p^d+1},\\ A_{(p-1)(p^{m-1}+p^{\frac{m+2d-2}2})}=\frac{(p^m-1)}{p^d+1},\\ A_{(p-1)p^{m-1}-p^{\frac{m+2d-2}2}-1}=\frac{(p-1)(p^m-1)}{p^d+1}. \end{array} \right. \end{eqnarray} \end{theorem} \begin{proof} The length and dimension follow immediately from the definition of the code $C_2$. The Hamming weight of every codeword $\mathsf{c}_2(a,\lambda)$ can be determined by \begin{eqnarray}\label{W2:1234} \mathrm{wt}(\mathsf{c}_2(a,\lambda)) &=&p^m-1-\#\{x\in \mathbb{F}^*_{p^m}\big|~\text{Tr}(ax^{p^k+1})-\lambda=0\}\nonumber\\ &=&\left\{\begin{array}{ll}p^m-N_{a,\lambda}(0), &if~~ \lambda=0,\\ p^m-1-N_{a,\lambda}(0), &if ~~\lambda\neq 0, \\ \end{array} \right. \end{eqnarray} where $\lambda=-\text{Tr}(c)$. We will calculate the weight distribution of the code $C_2$ by distinguishing the following cases. \textcircled{1} $0=\upsilon_2(m)\leqslant \upsilon_2(k)$. The value of $N_{a,\lambda}(0)$ will be computed according to the choice of the parameter $a$. \emph{Case 1:} $a=0$. In this case, if $\lambda=0$ then $N_{a,\lambda}(0)=p^m$, and this value occurs only once, and if $\lambda \neq 0$ then $N_{a,\lambda}(0)=0$, and this value occurs $p-1$ times. \emph{Case 2:} $a\in \mathbb{F}^*_{p^m}$, i.e., $a\in R_0$. In this case, rank$(Q(x))=m$ by Lemma \ref{lm: exponentialsums} and consequently every coefficient $a_i$ in \eqref{N2:0 2} is nonzero. From Lemma \ref{lm: solution of quadra form}, we have $$ N_{a,\lambda}(0)= p^{m-1}+p^{\frac{m-1}2}\eta({(-1)}^{\frac{m-1}2}\lambda\Delta_0). $$ If $\lambda=0$ then $N_{a,\lambda}(0)=p^{m-1}$, and this value occurs $p^m-1$ times. 
If $\lambda \neq 0$, then there are $(p-1)/2$ squares and $(p-1)/2$ nonsquares in $\mathbb{F}^*_p$. If $\lambda$ is a square in $\mathbb{F}^*_p$, then $$N_{a,\lambda}(0)=p^{m-1}+ p^{\frac{m-1}2}\eta((-1)^{\frac{m-1}2}\Delta_0).$$ Using Lemma \ref{lm: exponentialsums} and Lemma \ref{lm:type of qua form}, we find that \begin{equation*} N_{a,\lambda}(0)=\left\{\begin{array}{ll}p^{m-1}+ p^{\frac{m-1}2}&~~\text{occurring}~~ \frac{p-1}2|R_{0,1}|~~~~\text{times},\\ p^{m-1}- p^{\frac{m-1}2} &~~\text{occurring}~~ \frac{p-1}2|R_{0,-1}| ~~\text{times}.\\ \end{array}\right. \end{equation*} Similarly, if $\lambda$ is a nonsquare in $\mathbb{F}^*_p$, then $$N_{a,\lambda}(0)=p^{m-1}- p^{\frac{m-1}2}\eta((-1)^{\frac{m-1}2}\Delta_0).$$ This leads to \begin{equation*} N_{a,\lambda}(0)=\left\{\begin{array}{ll}p^{m-1}- p^{\frac{m-1}2}&~~\text{occurring}~~ \frac{p-1}2|R_{0,1}|~~~~\text{times},\\ p^{m-1}+ p^{\frac{m-1}2} &~~\text{occurring}~~ \frac{p-1}2|R_{0,-1}| ~~\text{times}.\\ \end{array}\right. \end{equation*} By Equation \eqref{W2:1234} and the above analysis, we derive the result for the case $0=\upsilon_2(m)\leqslant \upsilon_2(k)$ described in \eqref{W2:1 v2(m)=0}. Here we give the frequencies of the codewords with weights $(p-1)p^{m-1}$ and $(p-1)p^{m-1}- p^{\frac {m-1}2}-1$. Other cases can be analyzed in a similar way. The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $(p-1)p^{m-1}$ if and only if $N_{a,\lambda}(0)=p^{m-1}$ and $\lambda=0$. Thus the above argument shows that the frequency is $p^m-1.$ The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $(p-1)p^{m-1}- p^{\frac {m-1}2}-1$ if and only if $N_{a,\lambda}(0)=p^{m-1}+p^{\frac {m-1}2}$ and $\lambda\neq0$. The frequency is equal to $$\frac{p-1}{2}(|R_{0,1}|+|R_{0,-1}|)=\frac{p-1}{2}(p^m-1).$$ \textcircled{2} $1\leqslant \upsilon_2(m)\leqslant \upsilon_2(k).$ The value of $N_{a,\lambda}(0)$ will be calculated by distinguishing the case $a=0$ from the case $a\neq0$. \emph{Case 1:} $a=0$. 
In this case, if $\lambda=0$ then $N_{a,\lambda}(0)=p^m$, and this value occurs only once; if $\lambda \neq 0$ then $N_{a,\lambda}(0)=0$, and this value occurs $p-1$ times. \emph{Case 2:} $a\in \mathbb{F}^*_{p^m}$, i.e., $a\in R_0$. In this case, rank$(Q(x))= m$ by Lemma \ref{lm: exponentialsums} and consequently every coefficient $a_i$ in \eqref{N2:0 2} is nonzero. Applying Lemma \ref{lm: solution of quadra form} gives that $$ N_{a,\lambda}(0) = p^{m-1}+\upsilon(\lambda)p^{\frac{m-2}2}\eta({(-1)}^{\frac{m}2}\Delta_0). $$ If $\lambda=0$ then $$N_{a,\lambda}(0)=p^{m-1}+(p-1)p^{\frac{m-2}2}\eta({(-1)}^{\frac{m}2}\Delta_0).$$ It then follows from Lemmas \ref{lm: exponentialsums} and \ref{lm:type of qua form} that \begin{equation*} N_{a,\lambda}(0)=\left\{\begin{array}{ll}p^{m-1}+(p-1)p^{\frac{m-2}2}&~~\text{occurring}~~ |R_{0,1}|~~~~\text{times},\\ p^{m-1}-(p-1)p^{\frac{m-2}2} &~~\text{occurring}~~ |R_{0,-1}| ~~\text{times}.\\ \end{array}\right. \end{equation*} If $\lambda \neq 0$ then $$N_{a,\lambda}(0)=p^{m-1}-p^{\frac{m-2}2}\eta({(-1)}^{\frac{m}2}\Delta_0).$$ Again by Lemmas \ref{lm: exponentialsums} and \ref{lm:type of qua form}, we have \begin{equation*} N_{a,\lambda}(0)=\left\{\begin{array}{ll}p^{m-1}- p^{\frac{m-2}2}&~~\text{occurring}~~ (p-1)|R_{0,1}|~~~~\text{times},\\ p^{m-1}+p^{\frac{m-2}2} &~~\text{occurring}~~ (p-1)|R_{0,-1}| ~~\text{times}.\\ \end{array}\right. \end{equation*} By Equation \eqref{W2:1234} and the above analysis, we get the result for the case $1\leqslant \upsilon_2(m)\leqslant \upsilon_2(k)$ described in \eqref{W2:2 1<=v2(m)}. Here we give the frequencies of the codewords with weights $(p-1)(p^{m-1}-p^{\frac{m-2}2})$ and $(p-1)p^{m-1}-p^{\frac{m-2}2}-1$. Other cases can be similarly verified. The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $(p-1)(p^{m-1}-p^{\frac{m-2}2})$ if and only if $N_{a,\lambda}(0)=p^{m-1}+(p-1)p^{\frac{m-2}2}$ and $\lambda=0$. 
Based on the above discussion, the frequency is $|R_{0,1}|=\frac{p^m-1}{2}.$ The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $(p-1)p^{m-1}- p^{\frac {m-2}2}-1$ if and only if $N_{a,\lambda}(0)=p^{m-1}+p^{\frac {m-2}2}$ and $\lambda\neq0$. Therefore, the frequency is equal to $$(p-1)|R_{0,-1}|=\frac{1}{2}(p-1)(p^m-1).$$ \textcircled{3} Let $\upsilon_2(m)=\upsilon_2(k)+1$ and $m\neq2k$. The value of $N_{a,\lambda}(0)$ will be calculated by distinguishing among the following cases. \emph{Case 1:} $a=0$. In this case, if $\lambda=0$ then $N_{a,\lambda}(0)=p^m$, and this value occurs only once; if $\lambda \neq 0$ then $N_{a,\lambda}(0)=0$, and this value occurs $p-1$ times. \emph{Case 2:} $a\in R_0$. In this case, rank$(Q(x))=m$ and consequently every coefficient $a_i$ in \eqref{N2:0 2} is nonzero. From Lemma \ref{lm: solution of quadra form}, we have $$N_{a,\lambda}(0)=p^{m-1}+\upsilon(\lambda)p^{\frac{m-2}{2}}\eta((-1)^{\frac m2}\Delta_0).$$ It then follows from Lemmas \ref{lm: exponentialsums} and \ref{lm:type of qua form} that $$N_{a,\lambda}(0)=p^{m-1}-\upsilon(\lambda)p^{\frac{m-2}{2}},$$ since $\eta((-1)^{\frac m2}\Delta_0)=-1$. If $\lambda=0$, then $N_{a,\lambda}(0)=p^{m-1}-(p-1)p^{\frac{m-2}{2}}$ occurring $|R_{0,-1}|$ times. If $\lambda\neq0$, then $N_{a,\lambda}(0)=p^{m-1}+p^{\frac{m-2}{2}}$ occurring $(p-1)|R_{0,-1}|$ times. \emph{Case 3:} $a\in R_1$. In this case, rank$(Q(x))=m-2d$. Again by Lemmas \ref{lm: solution of quadra form}, \ref{lm: exponentialsums} and \ref{lm:type of qua form}, we find \begin{eqnarray*}N_{a,\lambda}(0)&=&p^{2d}(p^{m-2d-1}+\upsilon(\lambda)p^{\frac{m-2d-2}{2}}\eta((-1)^{\frac {m-2d}2}\Delta_1))\\ &=&p^{m-1}+\upsilon(\lambda)p^{\frac{m+2d-2}{2}}, \end{eqnarray*}since $\eta((-1)^{\frac {m-2d}2}\Delta_1)=1$. If $\lambda=0$, then $N_{a,\lambda}(0)=p^{m-1}+(p-1)p^{\frac{m+2d-2}{2}}$ occurring $|R_{1,1}|$ times. If $\lambda\neq0$, then $N_{a,\lambda}(0)=p^{m-1}-p^{\frac{m+2d-2}{2}}$ occurring $(p-1)|R_{1,1}|$ times. 
By Equation \eqref{W2:1234} and the above analysis, we obtain the result for the case $\upsilon_2(m)=\upsilon_2(k)+1$ and $m\neq2k$ described in \eqref{W2:31 m not equal 2k}. Here we give the frequencies of the codewords with weights $p^{m}-1$ and $(p-1)(p^{m-1}+p^{\frac{m-2}2})$. Other cases can be analyzed in an analogous manner. The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $p^{m}-1$ if and only if $N_{a,\lambda}(0)=0$ and $\lambda\neq0$. The above discussion shows that the frequency is $p-1.$ The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $(p-1)(p^{m-1}+p^{\frac{m-2}2})$ if and only if $N_{a,\lambda}(0)=p^{m-1}-(p-1)p^{\frac {m-2}2}$ and $\lambda=0$. The frequency is $|R_{0,-1}|=\frac{p^d(p^m-1)}{p^d+1}.$ \textcircled{4} Let $\upsilon_2(m)>\upsilon_2(k)+1$. The value of $N_{a,\lambda}(0)$ will be calculated according to the choice of the parameter $a$. \emph{Case 1:} $a=0$. In this case, if $\lambda=0$ then $N_{a,\lambda}(0)=p^m$, and this value occurs only once; if $\lambda \neq 0$ then $N_{a,\lambda}(0)=0$, and this value occurs $p-1$ times. \emph{Case 2:} $a\in R_0$. In this case, rank$(Q(x))=m$ and consequently every coefficient $a_i$ in \eqref{N2:0 2} is nonzero. It then follows from Lemma \ref{lm: solution of quadra form} that $$N_{a,\lambda}(0)=p^{m-1}+\upsilon(\lambda)p^{\frac{m-2}{2}}\eta((-1)^{\frac m2}\Delta_0).$$ Applying Lemmas \ref{lm: exponentialsums} and \ref{lm:type of qua form} yields that $$N_{a,\lambda}(0)=p^{m-1}+\upsilon(\lambda)p^{\frac{m-2}{2}},$$ since $\eta((-1)^{\frac m2}\Delta_0)=1$. If $\lambda=0$, then $N_{a,\lambda}(0)=p^{m-1}+(p-1)p^{\frac{m-2}{2}}$ occurring $|R_{0,1}|$ times. If $\lambda\neq0$, then $N_{a,\lambda}(0)=p^{m-1}-p^{\frac{m-2}{2}}$ occurring $(p-1)|R_{0,1}|$ times. \emph{Case 3:} $a\in R_1$. In this case, rank$(Q(x))=m-2d$. 
Again by Lemmas \ref{lm: solution of quadra form}, \ref{lm: exponentialsums} and \ref{lm:type of qua form}, we arrive at \begin{eqnarray*}N_{a,\lambda}(0)&=&p^{2d}(p^{m-2d-1}+\upsilon(\lambda)p^{\frac{m-2d-2}{2}}\eta((-1)^{\frac {m-2d}2}\Delta_1))\\ &=&p^{m-1}-\upsilon(\lambda)p^{\frac{m+2d-2}{2}}, \end{eqnarray*} since $\eta((-1)^{\frac {m-2d}2}\Delta_1)=-1$. If $\lambda=0$, then $N_{a,\lambda}(0)=p^{m-1}-(p-1)p^{\frac{m+2d-2}{2}}$ occurring $|R_{1,-1}|$ times. If $\lambda\neq0$, then $N_{a,\lambda}(0)=p^{m-1}+p^{\frac{m+2d-2}{2}}$ occurring $(p-1)|R_{1,-1}|$ times. By Equation \eqref{W2:1234} and the above analysis, we derive the result for the case $\upsilon_2(m)>\upsilon_2(k)+1$ described in \eqref{W2:4 v2(m)>v2(k)+1}. Here we only show the frequencies of the codewords with weights $p^{m}-1$ and $(p-1)(p^{m-1}-p^{\frac{m-2}2})$. Other cases are similarly verified. The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $p^{m}-1$ if and only if $N_{a,\lambda}(0)=0$ and $\lambda\neq0$. From the above discussion, the frequency is $p-1.$ The weight of $\mathsf{c}_2(a,\lambda)$ is equal to $(p-1)(p^{m-1}-p^{\frac{m-2}2})$ if and only if $N_{a,\lambda}(0)=p^{m-1}+(p-1)p^{\frac {m-2}2}$ and $\lambda=0$. The frequency is $|R_{0,1}|=\frac{p^d(p^m-1)}{p^d+1}.$ This completes the proof of this theorem. \hfill\space$\qed$ \end{proof} \begin{corollary}\label{coro:C2} If $m=2k$, then $C_2$ is a cyclic code over $\mathbb{F}_p$ with parameters $[p^m-1,{m}/2+1]$ and the weight distribution is given as follows: \begin{eqnarray}\label{W2:32 m=2k} \left\{\begin{array}{l}A_0=1,\\A_{p^m-1}=p-1,\\ A_{(p-1)(p^{m-1}+p^{\frac{m-2}2})}=p^{\frac{m}{2}}-1,\\ A_{(p-1)p^{m-1}-p^{\frac{m-2}2}-1}=(p-1)(p^{\frac{m}{2}}-1). \end{array} \right. \end{eqnarray} \end{corollary} \begin{proof} Let $K=\{x\in\mathbb{F}_{p^m}\big|~x^{p^k}+x=0\}$. It is easily checked that $\mathsf{c}_2(a,\lambda)=\mathsf{c}_2(a+\delta,\lambda)$ for any $\delta\in K$ and $\mathsf{c}_2(a,\lambda)\in C_2$. 
Hence, $C_2$ is degenerate with dimension ${m}/{2}+1$ over $\mathbb{F}_p$. Note that $|K|=p^{\frac{m}{2}}$ and in this case $\upsilon_2(m)=\upsilon_2(k)+1$. Substituting $d={m}/{2}$ into Equation \eqref{W2:31 m not equal 2k} and dividing each $A_i$ by $p^{\frac{m}{2}}$, we get the desired result. Now the proof of Corollary \ref{coro:C2} is complete. \hfill\space$\qed$\end{proof} The following are some examples for the code $C_2$. Note that the weight distribution of $C_2$ was not known before. \begin{example} Let $m=6,k=2,p=3$. This corresponds to the case $1\leqslant\upsilon_2(m)\leqslant\upsilon_2(k)$. Using Magma, $C_2$ is a [728, 7, 468] cyclic linear code over $\mathbb{F}_3$ with the weight distribution: $$A_0=1,A_{468}=364,A_{476}=728,A_{494}=728,A_{504}=364,A_{728}=2,$$ which confirms the result of Equation \eqref{W2:2 1<=v2(m)} in Theorem \ref{thm:code a c}. \end{example} \begin{example} Let $m=8,k=1,p=3$. This corresponds to the case $\upsilon_2(m)>\upsilon_2(k)+1$. Using Magma, $C_2$ is a [6560, 9, 4292] cyclic linear code over $\mathbb{F}_3$ with the weight distribution: \begin{eqnarray*} &&A_0=1,A_{4292}=3280,A_{4320}=4920,A_{4400}=9840,\\ &&A_{4536}=1640,A_{6560}=2, \end{eqnarray*} which confirms the result of Equation \eqref{W2:4 v2(m)>v2(k)+1} in Theorem \ref{thm:code a c}. \end{example} \begin{example} Let $m=6,k=3,p=3$. This corresponds to the case $m=2k$. Using Magma, $C_2$ is a [728, 4, 476] cyclic linear code over $\mathbb{F}_3$ with the weight distribution: $$A_0=1,A_{476}=52,A_{504}=26,A_{728}=2,$$ which confirms the result of Equation \eqref{W2:32 m=2k} in Corollary \ref{coro:C2}. \end{example} \section{Conclusion and remarks}\label{sec:conclusion} In this paper, we completely determined the weight distributions of two classes of cyclic codes over $\mathbb{F}_p$: $C_1$ for even $s$, and $C_2$. The results show that these codes have only a few weights. 
In addition, one can get the value distributions of the corresponding exponential sums of $C_1$ and $C_2$ by the method described in the proofs of Theorems \ref{thm:code a bx} and \ref{thm:code a c}, although we do not list them here. We mention that the weight distributions of several other cyclic codes may be determined in essentially the same way, for example a family of $p$-ary cyclic codes with parity-check polynomial $(x-1)h_1(x)h_2(x)$, where $h_1(x)$ and $h_2(x)$ are defined in Section \ref{sec:intro}. We leave this for future work. \begin{acknowledgements} The work of Zheng-An Yao is partially supported by the NNSFC (Grant No.11271381), the NNSFC (Grant No.11431015) and China 973 Program (Grant No. 2011CB808000). The work of Chang-An Zhao is partially supported by the NNSFC (Grant No. 61472457). \end{acknowledgements}
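The smallest degenerate instance of Corollary \ref{coro:C2}, namely $p=3$, $k=1$, $m=2k=2$, is small enough to check by exhaustive enumeration. The following Python sketch is our own illustration (separate from the Magma computations above); the concrete field model $\mathbb{F}_9=\mathbb{F}_3(i)$ with $i^2=-1$ is an assumption of the sketch. It tallies the weights of all codewords $(\text{Tr}(ax^{p^k+1})-\lambda)_{x\in\mathbb{F}_9^*}$; since each distinct codeword arises $|K|=p^{m/2}=3$ times, the tallies are three times the $A_w$ of the corollary, which predicts $A_0=1$, $A_4=4$, $A_8=4$.

```python
# Brute-force check of Corollary coro:C2 for p = 3, k = 1, m = 2k = 2.
# F_9 is realized as F_3[i] with i^2 = -1; Tr(a) = a + a^3.
from collections import Counter
from itertools import product

p = 3

def mul(a, b):
    # (u1 + v1*i)(u2 + v2*i) with i^2 = -1, coefficients mod 3
    (u1, v1), (u2, v2) = a, b
    return ((u1 * u2 - v1 * v2) % p, (u1 * v2 + u2 * v1) % p)

def power(a, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, a)
    return r

def tr(a):
    # Tr(a) = a + a^3; the Frobenius a^3 is the conjugate u - v*i
    return (a[0] + power(a, 3)[0]) % p

F9 = [(u, v) for u in range(p) for v in range(p)]
F9_star = [x for x in F9 if x != (0, 0)]

weights = Counter()
for a, lam in product(F9, range(p)):
    # codeword c_2(a, lambda): coordinates Tr(a x^4) - lambda, x in F_9^*
    w = sum(1 for x in F9_star
            if (tr(mul(a, power(x, 4))) - lam) % p != 0)
    weights[w] += 1

print(sorted(weights.items()))   # → [(0, 3), (4, 12), (8, 12)]
```

The printed tallies are exactly $3\cdot A_0$, $3\cdot A_4$ and $3\cdot A_8$, matching the corollary.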
\section*{Introduction} Siegel [Sie] has shown that an affine curve with coefficients in a number field and of genus $\geq 1$ has only a finite number of points whose coordinates are integers of that field. Mahler conjectured that a similar statement holds for points having only a finite number of primes in their denominators, and proved this for curves of genus one over the rationals by his $p$-adic analogue of the Thue-Siegel theorem in [Mah1] and [Mah2]. Mahler's conjecture is fully proved today. Mahler was a student in Frankfurt (1923-25) and G\"ottingen (1925-33) when he learned from Siegel about Thue's theorem and its improvements and generalisations. Emmy Noether introduced him to the theory of $p$-adic numbers. He combined these two ideas in 1931 when he found an analogue of the Thue-Siegel theorem that involved both real and $p$-adic algebraic numbers. In 1955, Roth obtained his theorem on the rational approximations of a real algebraic number. It was immediately clear to Mahler that his method should also work for $p$-adic algebraic numbers. Some interesting work of this kind was in fact carried out by D. Ridout, a student of Roth. The method of Thue-Siegel-Roth has one fundamental disadvantage, that of its non-effectiveness. The proof is entirely non-constructive, and by its very nature does not lead to any upper bounds for possible solutions. Effective methods are known only in some very special cases; some are due to Skolem and Gelfond. In view of Roth's result, and the progress which had been made in the theory of abelian varieties (especially the Jacobian) since Siegel's and Mahler's papers appeared, it seemed to Lang worthwhile to reconsider the question, which automatically carries with it a proof of Mahler's conjecture. The Jacobian is used in order to take a pull-back over the given curve of the standard covering given by $u\mapsto mu+a$, where $m$ is a large integer and $a$ is a suitable translation. 
Aside from Roth's theorem, Lang used only the classical properties of heights and the weak Mordell-Weil theorem. Mordell's conjecture, proved by Faltings and then by Vojta and Bombieri, claiming that a curve of genus $\geq 2$ has only a finite number of rational points, would of course supersede the Siegel-Mahler theorem for such curves, but Lang conjectured that the latter holds in fact for abelian varieties: If $A$ is an abelian variety defined over a number field $K$, if $U$ is an open affine subset, and $R$ a subring of $K$ of finite type over $\mathbb{Z}$, then there is only a finite number of points of $U$ in $R$. The difficulty in trying to extend the proof to abelian varieties lies in the fact that there is a whole divisor at infinity, whereas for curves, there is only a finite number of points, which are all algebraic. This prediction of Lang shows that he expected Siegel's theorem to be an algebro-geometric fact rather than an arithmetic proposition. Lang [Lan] even proved a version of Roth's theorem in the case of finitely generated fields of characteristic zero, but this is not strong enough to prove the geometric version of Siegel's theorem he had in mind. Lang used the theory of heights which he and N\'eron had already developed for finitely generated fields of characteristic zero [Lan-Ner]. In this paper, we prove the stronger version of Roth's theorem that Lang needed to improve Siegel's theorem. The idea is that the maps of the form $u\mapsto mu+a$ on an abelian variety are height increasing if $m$ is an integer $\geq 2$. This works even for the height defined by Lang and N\'eron for finitely generated fields of characteristic zero. We also use a covering of a finitely generated group by the images of finitely many maps of the above form. As a reward, we get a geometric improvement of Siegel's theorem. 
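To make the covering idea concrete, here is a small Python sketch; it is our own illustration of the mechanism only, with $F=\mathbb{Z}$, $m=2$, and the naive height standing in for the canonical height on an abelian variety. A finitely generated group is the disjoint union of the images of the maps $\phi_a(u)=mu+a$, where $a$ runs over representatives of $F/mF$, and peeling these maps off strictly decreases the height, so every element descends to a bounded set in finitely many steps.

```python
# Sketch of the covering trick: F = Z is the disjoint union of the images of
# phi_a(u) = m*u + a, a in a set of representatives of F/mF = Z/2Z.
m = 2
reps = list(range(m))          # representatives of F/mF

def split(u):
    """Write u = m*v + a with a in reps, i.e. u = phi_a(v)."""
    a = u % m
    return (u - a) // m, a

def descend(u, bound=1):
    """Peel off covering maps until |u| <= bound; record the peeled a's."""
    trail = []
    while abs(u) > bound:
        u, a = split(u)
        trail.append(a)
    return u, trail

def rebuild(u, trail):
    """Compose the phi_a along the trail to recover the original element."""
    for a in reversed(trail):
        u = m * u + a
    return u

u0, trail = descend(1000003)
assert rebuild(u0, trail) == 1000003
print(u0, trail)
```

The loop in `descend` terminates because $|(u-a)/m| < |u|$ once $|u|>1$: this is the discrete shadow of the height inequality $h_L(\phi_a(u)) = m\,h_L(u) + O(1)$ with $m>1$ exploited in the proofs below.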
Here is the statement of our generalization: \begin{thm} Let $X$ be an affine open subcurve of a connected smooth projective curve of genus $\geq 1$ defined over $\mathbb{C}$ in the ambient affine space $\mathbb{A}^n(\mathbb{C})$ and let $F\subset \mathbb{A}^n(\mathbb{C})$ denote any finitely generated subgroup of $\mathbb{C}^n$. Then $X(\mathbb{C})\cap F$ is finite. \end{thm} This implies that Siegel's theorem is an algebro-geometric fact, not an arithmetic one. Lang had the same geometric expectation when he formulated his conjecture that a curve of genus $\geq 2$ in its Jacobian should intersect any finitely generated subgroup of the Jacobian in a finite set. He even conjectured that the divisible group of any finitely generated subgroup of the Jacobian intersects the embedded curve in finitely many points. Getting such a result is beyond our reach, since we only have access to a result for finitely generated subgroups defined over a finitely generated field. Therefore we state a conjecture following the geometric philosophy of Lang as follows: \begin{conj} Let $X$ be an affine open subcurve of a connected smooth projective curve of genus $\geq 2$ defined over $\mathbb{C}$ in the ambient affine space $\mathbb{A}^n(\mathbb{C})$, let $F\subset \mathbb{A}^n(\mathbb{C})$ denote any finitely generated subgroup of $\mathbb{C}^n$, and let $Div(F)$ denote the divisible subgroup of $\mathbb{C}^n$ associated to $F$. Then $X(\mathbb{C})\cap Div(F)$ is finite. \end{conj} In case $F$ is defined over a number field, this is proved by Mahler [Mah3] for any algebraic curve as above defined over $\mathbb{C}$. Proving this conjecture would need a geometrization of Mahler's ideas over finitely generated fields of characteristic zero. \section{Diophantine approximation by subgroups of $\mathbb{C}^n$} This section is devoted to proving the theorems which were mentioned in the introduction. The arguments are along the same lines as analogous classical results. 
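The proofs in this section and the next manipulate heights through the product formula. As a warm-up, the following Python sketch is our own numerical illustration over $K=\mathbb{Q}$, where the absolute height of $z=a/b$ in lowest terms is $\max(|a|,|b|)$; it checks that $H(z)=\prod_{v}\sup(1,|z|_v)$, the product taken over the archimedean place and the primes dividing numerator and denominator.

```python
# Numerical check of H(z) = prod over places of sup(1, |z|_v) for K = Q,
# using exact rational arithmetic.
from fractions import Fraction

def prime_factors(n):
    n, ps, d = abs(n), set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d); n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def vp(q, p):
    """p-adic valuation of a nonzero rational q."""
    a, b, v = q.numerator, q.denominator, 0
    while a % p == 0:
        a //= p; v += 1
    while b % p == 0:
        b //= p; v -= 1
    return v

def height(q):
    """H(q) as the product of sup(1, |q|_v) over the relevant places."""
    S = prime_factors(q.numerator) | prime_factors(q.denominator)
    h = max(Fraction(1), abs(q))                    # archimedean place
    for p in S:                                      # |q|_p = p^{-v_p(q)}
        h *= max(Fraction(1), Fraction(p) ** (-vp(q, p)))
    return h

z = Fraction(6, 5)
print(height(z))   # → 6, i.e. max(|6|, |5|)
```

For $z=6/5$ the only nontrivial contributions are $|z|_\infty=6/5$ and $|z|_5=5$, whose product is $6=\max(|6|,|5|)$, illustrating why in the proofs it suffices to sum $\log\sup(1,|z|_v)$ over a finite set of places $S$.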
Roth's theorem on Diophantine approximation of rational points on the projective line implies a version on projective varieties defined over number-fields. \begin{thm} (Improvement of Roth's theorem on diophantine approximation) Fix a finitely generated field of characteristic zero $K$ and $\sigma :K\hookrightarrow \Cplx$ a complex embedding. Let $A$ be an abelian variety defined over $K$ and let $L$ be a very ample line-bundle on $A$. Denote the arithmetic height function associated to the line-bundle $L$ by $h_L$. Suppose $F\subset A(K)$ is a finitely generated subgroup. Fix a Riemannian metric on $A_{\sigma}(\Cplx)$ and let $d_{\sigma}$ denote the induced metric on $A_{\sigma}(\Cplx)$. Then, for every $\delta>0$ and every choice of an algebraic point $\alpha\in A(\bar {K})$ and all choices of a constant $C$, there are only finitely many points $\omega\in F$ approximating $\alpha$ such that $$ d_{\sigma}(\alpha ,\omega)\leq Ce^{-\delta h_L(\omega)}. $$ \end{thm} \begin{prop} With the assumptions of the above theorem, suppose for some $\delta_0>0$ we have that, for any choice of a constant $C$ and every choice of an algebraic point $\alpha\in A(\bar {K})$, there are only finitely many points $\omega\in F$ approximating $\alpha$ in the following manner $$ d_{\sigma}(\alpha ,\omega)\leq Ce^{-\delta_0 h_L(\omega)}. $$ Then, for every $\delta>0$ and every choice of an algebraic point $\alpha\in A(\bar {K})$ and all choices of a constant $C$, there are only finitely many points $\omega\in F$ approximating $\alpha$ such that $$ d_{\sigma}(\alpha ,\omega)\leq Ce^{-\delta h_L(\omega)}. $$ \end{prop} \textbf{Proof (Proposition).} Note that we have assumed that the above is true for some $\delta_0>0$ without any assumption on $\alpha$. Let $\delta'>0$ be the infimum of such $\delta_0>0$. 
The subset $F$ is the disjoint union of the images of finitely many height-increasing self-endomorphisms $\phi_i:A(K)\to A(K)$ defined over $K$ such that for all $i$ we have $$ h_L(\phi_i(f))=mh_L(f)+O(1) $$ where $m>1$. Take $\phi_i:A(K)\to A(K)$ to be of the form $u \mapsto mu+a_i$, where the $a_i$ are representatives of the finite group quotient $F/mF$. Fix $\epsilon>0$ such that $\epsilon<\delta' <m\epsilon$. Suppose that $\omega_n$ is an infinite sequence of elements in $F$ such that $\omega_n\to \alpha$ which satisfies the estimate $$ d_{\sigma}(\alpha ,\omega_n)\leq Ce^{-\epsilon h_L(\omega_n)}. $$ Then infinitely many of them are images of elements of $F$ under the same $\phi_i$. By going to a subsequence, one can find a sequence $\omega'_n$ in $F$ and an algebraic point $\alpha'$ in $A(\bar {K})$ such that $\omega'_n \to \alpha'$ and for a fixed $\phi_i$ we have $\phi_i(\alpha')=\alpha$ and $\phi_i(\omega'_n)=\omega_n$ for all $n$. Then $$ d_{\sigma}(\alpha ,\omega_n)\leq Ce^{-\epsilon h_L(\omega_n)}\leq C'e^{-\epsilon m h_L(\omega'_n)} $$ for an appropriate constant $C'$. On the other hand, $$ d_{\sigma}(\alpha' ,\omega'_n)\leq C''d_{\sigma}(\alpha ,\omega_n) $$ holds for an appropriate constant $C''$ and large $n$ by injectivity of $d\phi_i^{-1}$ on the tangent space at $\alpha$. This contradicts our assumption on $\delta'$, because $\delta' <m\epsilon$. $\square$ \\ \textbf{Proof (Theorem).} If we assume that the points of $F$ and the covering maps are defined over some number-field, Roth's theorem implies that the assumption of the theorem is true for any $\delta_0>2$. The same is true for finitely generated fields of characteristic zero by a result of Lang [Lan] generalizing Roth's theorem.$\square$ \section{Geometric formulation of Siegel's theorem} Let us state a more precise form of our version of Siegel's theorem. \begin{thm}(Geometric version of Siegel's theorem on integral points) Fix a finitely generated field of characteristic zero $K$. 
Let $X$ be an affine open subcurve of a connected smooth projective curve of genus $\geq 1$ defined over $K$ in the ambient affine space $\mathbb{A}^n(K)$ and let $F\subset \mathbb{A}^n(K)$ denote any finitely generated subgroup of $K^n$. Then $X(K )\cap F$ is finite. \end{thm} We borrow a classical lemma [Ser] whose proof goes very much as in that reference. \begin{lem} Let $K$ be a finitely generated field of characteristic zero and let $v$ be a place of $K$. Let $X$ be a curve defined over $K$. Assume the genus of $X$ is $\geq 1$. Let $P_n$ be a sequence of distinct points in $X(K)$, so that their heights tend to infinity, and let $\phi$ be a non-constant rational function on $X$ defined over $K$; from some point on, no $P_n$ is a pole of $\phi$. Then for $z_n=\phi(P_n)$, which is a point of the projective line defined over $K$, we have $$ \lim_{n\to \infty} {{\log|z_n|_{v}}\over {\log H(z_n)}}=0. $$ \end{lem} \textbf {Proof.} Assume this is false. By taking a subsequence and replacing $\phi$ by $1/\phi$, we may suppose that $$ {{\log|z_n|_{v}} \over {\log H(z_n)}}\rightarrow \lambda $$ where $-\infty <\lambda< 0$. In particular, $z_n \rightarrow 0$ in $K_v$ and by taking a subsequence, we may assume that $P_n$ converges to a zero $P_0$ of $\phi$. As we are on a curve, $P_0$ is an algebraic point of $X$. Between $H(P_n )$, the height corresponding to a morphism $X \rightarrow \mathbb P_N$, and $H(z_n )$, corresponding to a morphism $X \rightarrow \mathbb P_1$, we have an inequality $$ H(z_n )\ll H(P_n )^{l} $$ for some positive $l$. On the other hand, if $e$ is the multiplicity of $P_0$ as a zero of $\phi$, we have $|z_n|_{v}\approx d_{v}(P_n, P_0)^{e}$. Therefore, there is $c > 0$ such that for sufficiently large $n$, $$ d_{v}(P_n, P_0)\leq 1/H(P_n )^c, $$ which contradicts the approximation theorem.$\square$ \\ \textbf {Proof (Theorem).} Let $\sigma :K\hookrightarrow \Cplx$ denote a complex embedding of $K$. 
Fix a Riemannian metric on $Jac(X)_{\sigma}(\Cplx)$ and let $d_{\sigma}$ denote the induced metric on $Jac(X)_{\sigma}(\Cplx)$. Now, embed $X$ in $Jac(X)$. Then, by our version of Roth's theorem, for every $\delta>0$ and every choice of an algebraic point $\alpha\in X(\bar {K})$ and all choices of a constant $C$, there are only finitely many points $\omega\in F\cap X(K)$ approximating $\alpha$ such that $$ d_{\sigma}(\alpha ,\omega)\leq H_L(\omega)^{-\delta }, $$ where $\log H_L=h_L$. In case $K$ is transcendental, we have to pick a model for $K$ over the algebraic closure of $\mathbb{Q}$ in $K$ following Lang [Lan]. Now if $P_n$ is a sequence of distinct points in $X(K)\cap F$, then their heights tend to infinity, and if $\phi$ is a non-constant rational function on $X$, then from some point on no $P_n$ is a pole of $\phi$. Then by the above lemma $$ \lim_{n\to \infty} {{\log|z_n|_{\sigma}}\over {\log H(z_n)}}=0. $$ On the other hand, one defines the height of points defined over $K$ by $$ H(z)=\prod_{v\in M_K} \sup(1,|z|_v), $$ where the $|\cdot|_v$ are normalized according to a product formula. Since the covering maps of $F$ are height expanding, we know that $F$ is the forward orbit of finitely many points. So for a finite set of places $S$ we have $$ H(z)=\prod_{v\in S} \sup(1,|z|_v), $$ and therefore $$ \log H(z)=\sum_{v\in S} \log(\sup(1,|z|_v)). $$ Then, we have $$ 1=\sum_{v\in S} \sup\Big(0,{{\log|z_n|_{v}}\over {\log H(z_n)}}\Big), $$ which cannot hold for large $n$, because each of the finitely many terms tends to zero by the above lemma. This implies the finiteness result we are seeking. $\square$ \subsection*{acknowledgements} I have benefited from conversations with M. Hadian, A. Rajaei, P. Sarnak, N. Talebizadeh, for which I am thankful. Peter Sarnak particularly gave crucial comments which led to the final version of the paper. I would also like to thank Sharif University of Technology for financial support and Princeton University for warm hospitality.
\section*{Introduction} Historically, diophantine approximation has been a strong method for proving finiteness results in diophantine geometry. In the case of the projective line the fundamental result is Roth's theorem. In this paper, we shall generalize this to geometrically irreducible subvarieties of abelian varieties. We fix a finitely generated field $K$ of characteristic zero, choose a geometrically irreducible subvariety $E\subset \mathbb{P}^n$ defined over $K$, and study the distance $d_v (E; x)$ of a $K$-rational point of $\mathbb{P}^n-E$ to $E$, where $v$ is a place of $K$. It is natural to ask for an estimate of the type $$ d_v(E; x) \gg H(x)^{-\delta } $$ such that the positive exponent $\delta$ is as small as possible. Here $H(x)$ denotes the absolute Weil height of $x$. Faltings and W\"ustholz showed that the above assertion is trivial for subvarieties $E$ which are geometrically irreducible. \begin{thm} Let $A$ be an abelian variety defined over a finitely generated subfield $K$ of $\mathbb{C}$. Let $E$ be a geometrically irreducible subvariety of $A$ defined over $K$ and $F$ be a finitely generated subgroup of $A(K)$. Let $v$ be a valuation on $K$ and $H(x)$ a height function on $K$ coming from a choice of projective model for $K$ over the algebraic closure of $\mathbb{Q}$ in $K$. If $d_v(x,E)$ denotes the $v$-adic distance from $x$ to $E$, and $\delta$ and $c$ are positive constants, then there are only finitely many points in $F$ satisfying the following inequality $$ d_v(x,E)< cH(x)^{-\delta }. $$ \end{thm} This implies that Faltings' theorem on Diophantine approximation on abelian varieties is an algebro-geometric fact, not an arithmetic one. Here is a geometric implication in the case of affine subsets of abelian varieties. \begin{thm} Let $A$ be an abelian variety defined over $\mathbb{C}$ and let $U$ be an open affine subset of $A$. Suppose $F\subset \mathbb{C}^n$ is a finitely generated subgroup. Then $F\cap U(\mathbb{C})$ is a finite set. 
\end{thm} This extends Lang's conjecture on integral points of affine subvarieties of abelian varieties. Lang had the same geometric expectation when he formulated his conjecture that a curve of genus $\geq 2$ in its Jacobian should intersect any finitely generated subgroup of the Jacobian in a finite set. He even conjectured that the divisible group of any finitely generated subgroup of the Jacobian intersects the embedded curve in finitely many points. Getting such a result is beyond our reach, since we only have access to a result for finitely generated subgroups defined over a finitely generated field. Therefore we state a conjecture following the geometric philosophy of Lang as follows: \begin{conj} Let $A$ be an abelian variety defined over $\mathbb{C}$ and let $U$ be an open affine subset of $A$. Suppose $F\subset \mathbb{C}^n$ is a finitely generated subgroup. Let $Div(F)$ denote the divisible subgroup of $\mathbb{C}^n$ associated to $F$. Then $Div(F) \cap U(\mathbb{C})$ is a finite set. \end{conj} \section{Diophantine approximation by subgroups of $\mathbb{C}^n$} This section is devoted to proving the theorems which motivated us to extend Faltings' result. This section is borrowed from [Ras1]. The arguments are along the same lines as analogous classical results. Roth's theorem on Diophantine approximation of rational points on the projective line implies a version on projective varieties defined over number-fields. \begin{thm} (Improvement of Roth's theorem on diophantine approximation) Fix a finitely generated field of characteristic zero $K$ and $\sigma :K\hookrightarrow \Cplx$ a complex embedding. Let $A$ be an abelian variety defined over $K$ and let $L$ be a very ample line-bundle on $A$. Denote the arithmetic height function associated to the line-bundle $L$ by $h_L$. Suppose $F\subset A(K)$ is a finitely generated subgroup. Fix a Riemannian metric on $A_{\sigma}(\Cplx)$ and let $d_{\sigma}$ denote the induced metric on $A_{\sigma}(\Cplx)$. 
Then, for every $\delta>0$ and every choice of an algebraic point $\alpha\in A(\bar {K})$ and all choices of a constant $C$, there are only finitely many points $\omega\in F$ approximating $\alpha$ such that $$ d_{\sigma}(\alpha ,\omega)\leq Ce^{-\delta h_L(\omega)}. $$ \end{thm} \begin{prop} With the assumptions of the above theorem, suppose for some $\delta_0>0$ we have that, for any choice of a constant $C$ and every choice of an algebraic point $\alpha\in A(\bar {K})$, there are only finitely many points $\omega\in F$ approximating $\alpha$ in the following manner $$ d_{\sigma}(\alpha ,\omega)\leq Ce^{-\delta_0 h_L(\omega)}. $$ Then, for every $\delta>0$ and every choice of an algebraic point $\alpha\in A(\bar {K})$ and all choices of a constant $C$, there are only finitely many points $\omega\in F$ approximating $\alpha$ such that $$ d_{\sigma}(\alpha ,\omega)\leq Ce^{-\delta h_L(\omega)}. $$ \end{prop} \textbf{Proof (Proposition).} Note that we have assumed that the above is true for some $\delta_0>0$ without any assumption on $\alpha$. Let $\delta'>0$ be the infimum of such $\delta_0>0$. The subset $F$ is the disjoint union of the images of finitely many height-increasing self-endomorphisms $\phi_i:A(K)\to A(K)$ defined over $K$ such that for all $i$ we have $$ h_L(\phi_i(f))=mh_L(f)+O(1) $$ where $m>1$. Take $\phi_i:A(K)\to A(K)$ to be of the form $u \mapsto mu+a_i$, where the $a_i$ are representatives of the finite group quotient $F/mF$. Fix $\epsilon>0$ such that $\epsilon<\delta' <m\epsilon$. Suppose that $\omega_n$ is an infinite sequence of elements in $F$ such that $\omega_n\to \alpha$ which satisfies the estimate $$ d_{\sigma}(\alpha ,\omega_n)\leq Ce^{-\epsilon h_L(\omega_n)}. $$ Then infinitely many of them are images of elements of $F$ under the same $\phi_i$. 
By going to a subsequence, one can find a sequence $\omega'_n$ in $F$ and an algebraic point $\alpha'$ in $A(\bar {K})$ such that $\omega'_n \to \alpha'$ and for a fixed $\phi_i$ we have $\phi_i(\alpha')=\alpha$ and $\phi_i(\omega'_n)=\omega_n$ for all $n$. Then $$ d_{\sigma}(\alpha ,\omega_n)\leq Ce^{-\epsilon h_L(\omega_n)}\leq C'e^{-\epsilon m h_L(\omega'_n)} $$ for an appropriate constant $C'$. On the other hand, $$ d_{\sigma}(\alpha' ,\omega'_n)\leq C''d_{\sigma}(\alpha ,\omega_n) $$ holds for an appropriate constant $C''$ and large $n$ by injectivity of $d\phi_i^{-1}$ on the tangent space at $\alpha$. This contradicts our assumption on $\delta'$, because $\delta' <m\epsilon$. $\square$ \\ \textbf{Proof (Theorem).} If we assume that the points of $F$ and the covering maps are defined over some number-field, Roth's theorem implies that the assumption of the theorem is true for any $\delta_0>2$. The same is true for finitely generated fields of characteristic zero by a result of Lang [Lan] generalizing Roth's theorem.$\square$ \section{Diophantine approximation on abelian varieties} The version of Roth's theorem in the last section being true, we expect the following version of Liouville's theorem on diophantine approximation to hold: \begin{thm}(Weak version of Vojta conjectures on diophantine approximation) Fix a finitely generated field of characteristic zero $K$. Let $V$ be a smooth projective algebraic variety defined over $K$ and let $L$ be a very ample line-bundle on $V$. Denote the arithmetic height function associated to the line-bundle $L$ by $h_L$. 
Then there exists a positive constant $\delta_0$ such that the following holds for any positive constant $c$: for any geometrically irreducible algebraic subvariety $E$ of $V$ defined over $K$, with $d_v(x,E)$ denoting the $v$-adic distance from $x$ to $E$, there are only finitely many points $x\in V(K)$ outside $E$ satisfying the inequality $$ d_v(x,E)< cH(x)^{-\delta_0}, $$ except for points in an algebraic subvariety $V(\delta_0 )$ whose dimension is strictly smaller than that of $V$. In case $K$ is transcendental over $\mathbb{Q}$, we have to pick a model for $K$ over the algebraic closure of $\mathbb{Q}$ in $K$, following Lang [Lan] and [Lan-Ner], to fix a height function. \end{thm} \textbf {Proof.} This is a weak form of the Vojta conjectures. In the number field case, this is mentioned in Faltings-Wustholz [Fa-Wu] as a trivial result in case $E$ is geometrically irreducible. In the case of finitely generated fields of characteristic zero, the result is a consequence of Theorem I' in the seminal work of Lang [Lan].$\square$ Now, the following version of Faltings' theorem can be proved using the method of height expansion. \begin{thm}(Geometric formulation of Faltings' theorem on diophantine approximation on abelian varieties) Fix a finitely generated field of characteristic zero $K$. Let $A$ be an abelian variety defined over $K$ and let $L$ be a very ample line-bundle on $A$. Denote the arithmetic height function associated to the line-bundle $L$ by $h_L$. Suppose $F\subset A(K)$ is a finitely generated subgroup. Fix any positive constants $\delta$ and $c$. For any irreducible algebraic subvariety $E$ of $A$ defined over $K$, with $d_v(x,E)$ denoting the $v$-adic distance from $x$ to $E$, there are only finitely many points $x\in F$ outside $E$ satisfying the inequality $$ d_v(x,E)< cH(x)^{-\delta}. $$ Again, in case $K$ is transcendental over $\mathbb{Q}$, we have to pick a model for $K$ over the algebraic closure of $\mathbb{Q}$ in $K$, following Lang [Lan] and [Lan-Ner], to fix a height function.
\end{thm} \textbf {Proof.} By the preceding theorem, the finiteness statement holds for some $\delta_0>0$. Let $\delta'\geq 0$ be the infimum of all such $\delta_0$, and suppose toward a contradiction that $\delta'>0$. Note that a finitely generated subgroup $F\subset A(K)$ is a union of its images under finitely many height-increasing polynomial finite self-endomorphisms $\phi_i:A\to A$ defined over $K$ such that for all $i$ we have $$ h_L(\phi_i(u)) = mh_L(u)+O(1) $$ where $m>1$. Let $\phi_i$ be of the form $u\mapsto mu+a_i$, where the $a_i$ are representatives of $F/mF$ for $m>1$. Fix $\epsilon>0$ such that $\epsilon<\delta' <m\epsilon$. Suppose that $P_n$ is an infinite sequence of elements of $F$ such that $P_n\to E$ which satisfies the estimate $$ d_{\sigma}(E ,P_n)\leq Ce^{-\epsilon h_L(P_n)}. $$ Then infinitely many of them are images of elements of $F$ under the same $\phi_i$. By going to a subsequence, one can find a sequence $P'_n$ in $F$ and an irreducible component $E'$ of the inverse image of $E$ under $\phi_i$ such that $P'_n \to E'$ and, for a fixed $\phi_i$, we have $\phi_i(E')=E$ and $\phi_i(P'_n)=P_n$ for all $n$. Then $$ d_{\sigma}(E ,P_n)\leq Ce^{-\epsilon h_L(P_n)}\leq C'e^{-\epsilon m h_L(P'_n)} $$ for an appropriate constant $C'$. On the other hand, $$ d_{\sigma}(E' ,P'_n)\leq C''d_{\sigma}(E ,P_n) $$ holds for an appropriate constant $C''$ and large $n$ by injectivity of $d\phi_i^{-1}$ on the tangent space along $E$. This contradicts our assumption on $\delta'$, because $\delta' <m\epsilon$. So the reduction from the inequality for some positive $\delta_0$ to arbitrary $\delta >0$ is done in the same manner as in our version of Roth's theorem. The $\phi_i$ are finite maps, and if $E$ is approximated by infinitely many points $P_n$, then a subsequence lies in the image of some $\phi_i$, and therefore an infinite subsequence of its inverse images approximates an irreducible component of the inverse image of $E$ under $\phi_i$.
Getting rid of $A(\delta )$ relies on the fact that $A(\delta )$ is invariant under the $\phi_i$, and is therefore a union of abelian subvarieties, while $F\cap A(\delta )$ is again a translate of a finitely generated subgroup. One can proceed by reducing the problem from $A$ and $E$ to $A(\delta )$ and $E\cap A(\delta )$ and applying induction, until $A(\delta )$ is a union of finitely many points.$\square$ The following will be an important implication: \begin{thm}(Geometric formulation of Lang's conjecture on diophantine approximation on affine subsets of abelian varieties by integral points) Fix a finitely generated field of characteristic zero $K$. Let $A$ be an abelian variety defined over $K$ and let $L$ be a very ample line-bundle on $A$. Denote the arithmetic height function associated to the line-bundle $L$ by $h_L$. Let $U$ be an open affine subset of $A$. Suppose $F\subset A(K)$ is a finitely generated subgroup. Then the set of points of $F$ that are integral on $U$ is a finite set. \end{thm} \textbf {Sketch of Proof.} Assume that $l$ is an equation for the divisor $E=A\setminus U$. The height $H(x)$, for $x \in A - E$ an integral point, is essentially the inverse of the product of the $v$-norms of $l(x)$, $v$ running through the infinite places of $K$. Our proof actually gives that all these norms are bounded below by multiples of $H(x)^{-\delta }$, so that $H(x)$ must be bounded.$\square$ \subsection*{Acknowledgements} I have benefited from conversations with M. Hadian, A. Rajaei, P. Sarnak, and N. Talebizadeh, for which I am thankful. Peter Sarnak in particular gave crucial comments which led to the final version of the paper. I would also like to thank Sharif University of Technology for financial support and Princeton University for warm hospitality.
\section{Addendum} In this addendum, we establish an unconditional generalization of the Jacobian criterion for arithmetic functions. The point overlooked in the original article is that, even though there may be derivations of $\cA$ that are not continuous with respect to the norm topology, as we remarked earlier, every derivation of $\cA$ is continuous with respect to the $\mf$-adic topology. So our original proofs will go through once we can demonstrate that convergence of power series behaves the same way in these topologies. Before we do so, it is perhaps worth mentioning that for $\cA$ the $\mf$-adic topology is strictly finer than the norm topology. This can be seen from the fact that elements of $\mf^n$ have order at least $2^n$. Thus any sequence of elements of $\cA$ that converges in the $\mf$-adic topology also converges in the norm topology. The converse is not true, as we pointed out in the previous section (consider the sequence $(e_p)_{p \in \Pp}$). The situation is different for power series. Recall that for any $g \in \mf$ and any formal power series $\sum_{k=0}^\infty \alpha_kX^k$ over $\Cc$, the sequence $\sum_{k=0}^m \alpha_k g^k$ $(m \in \Nn)$ converges in the norm topology to a unique element of $\cA$. We denote it by $\sum_{k=0}^\infty \alpha_k g^k$. In particular, for any $N \in \Nn$, $\sum_{k=0}^\infty \alpha_{k+N}g^k$ is also a well-defined element of $\cA$. Since the convolution product is continuous with respect to the norm topology, \begin{align*} \sum_{k=0}^\infty \alpha_k g^k - \sum_{k=0}^{N-1} \alpha_kg^k = \sum_{k=N}^\infty \alpha_k g^k = g^{N}*\left(\sum_{k=0}^\infty \alpha_{k+N}g^k \right) \end{align*} is an element of $\mf^N$. Thus, we conclude that \begin{prop} Let $g \in \mf$ and let $\sum \alpha_k X^k$ be a formal power series over $\Cc$. Then the sequence $\sum_{k=0}^m \alpha_k g^k$ $(m \in \Nn)$ converges to $\sum_{k=0}^\infty \alpha_k g^k$ $\mf$-adically.
\label{p:m-conv} \end{prop} This proposition, together with the fact that $\cA$ is $\mf$-adically separated, i.e. $\bigcap_{n \in \Nn} \mf^n=\{0\}$, implies that Lemma~\ref{l:contd} holds for any derivation of $\cA$ (same proof but with the norm topology replaced by the $\mf$-adic topology). From this it follows that the assumption of continuity with respect to the norm topology of the derivations involved in all our results, including Ax's Theorem (Theorem~\ref{th:sa}) for $\cA$ and the Jacobian criterion for $\cA$ (Theorem~\ref{th:gJac}), can be removed. \section{Introduction} \label{sec:Axthm} The Schanuel Conjecture asserts that for any $\Qq$-linearly independent complex numbers $a_1,\dots, a_n$ there are at least $n$ numbers among \begin{align*} a_1, \dots,a_n, \exp(a_1),\dots, \exp(a_n) \end{align*} that are algebraically independent over the rational numbers. It is well known that a number of remarkable results about transcendental numbers (the Lindemann-Weierstrass Theorem, the Gelfond-Schneider Theorem, and Baker's Theorem, to name a few) are consequences of this statement. For the state of the art on this topic, we refer the reader to Waldschmidt's paper~\cite{miw}. In this article, we argue that Schanuel's insight remains valid for arithmetic functions. We improve several existing results on algebraic independence of arithmetic functions by applying an analog of the Schanuel Conjecture for differential rings. More precisely, we deduce them from the following theorem of James Ax~\cite[Theorem~3]{ax}: \begin{thm} \label{th:genax} Let $F/C/\Qq$ be a tower of fields. Suppose $\Delta$ is a set of derivations of $F$ with $\bigcap_{D \in \Delta} \ker D = C$.
Let $y_1,\dots, y_n$, $z_1,\dots, z_n \in F^{\times}$ be such that \begin{enumerate} \item[\upshape{(a)}] \label{i:ex-log} for all $D \in \Delta$, $i = 1,\dots,n$, $Dy_i= Dz_i/z_i$ and either \item[\upshape{(b)}] \label{i:nontrivial} no non-trivial power product of the $z_j$ is in $C$, or \item[\upshape{(b$^\prime$)}] \label{i:modulo} the $y_i$ are $\Qq$-linearly independent modulo $C$. \end{enumerate} Then \begin{align*} \td_C C(y_1,\dots, y_n, z_1,\dots, z_n) \ge n + \rk\left( Dy_i \right)_{\begin{smallmatrix} & D \in \Delta \\ &1 \le i \le n \end{smallmatrix}}. \end{align*} \end{thm} A word about terminology. Let $G$ be an abelian group (written multiplicatively). We say that $g_1,\ldots, g_n \in G$ are (or the family $g_1,\ldots, g_n$ is) {\bf multiplicatively independent} if the equation $g_1^{k_1}\cdots g_n^{k_n}=1$ implies that the integers $k_1,\ldots, k_n$ are all zero. The implication is vacuously true for the empty family, and so it is multiplicatively independent. A subset $X$ of $G$ is multiplicatively independent if every finite family of elements of $X$ is. For $H$ a subgroup of $G$, we say that $X$ is {\bf multiplicatively independent modulo $H$} if the image of $X$ in the quotient group $G/H$ is multiplicatively independent. We will use this terminology throughout. First, let us restate Condition~(b) in Theorem~\ref{th:genax} as ``the $z_i$ are multiplicatively independent modulo $C^{\times}$''. We prefer doing so because that draws a closer analogy between Conditions~(b) and~(b$^\prime$). \section{Arithmetic Functions} \label{sec:arithfun} In this section we introduce the notation and summarize the facts about arithmetic functions that we will use subsequently. The reader can consult~\cite[Chapter~2]{ant} and~\cite[Chapter~4]{hint} for more information. We use $\Pp$ to denote the set of primes, and $p$ will always stand for a prime in this article. {\bf Arithmetic functions} are complex-valued functions with domain the set of natural numbers.
It is beneficial at times to think of them as functions on $\Rr$ vanishing at points that are not natural numbers. Arithmetic functions form a commutative ring $\cA$ under pointwise addition of functions $+$ and convolution product $*$ defined as: \begin{align*} (f * g)(n) = \sum_{d|n} f(d)g\left( \frac{n}{d} \right). \end{align*} Identifying $\alpha \in \Cc$ with the function $1 \mapsto \alpha, n \mapsto 0\ (n >1)$ turns $\cA$ into a $\Cc$-algebra. Under this identification $0$ and $1$ become the neutral elements for $+$ and $*$, respectively. For $A \subseteq \Nn$, we use $\vec{1}_{A}$ to denote the {\bf indicator function of $A$}, i.e. $\vec{1}_{A}(k) = 1$ if $k \in A$; and $\vec{1}_A(k) = 0$ otherwise. We write $\vec{1}$ for $\vec{1}_{\Nn}$, $\vec{1}_p$ for $\vec{1}_{\{p^k \colon k \ge 0\}}$ and $e_n$ for $\vec{1}_{\{n\}}$ ($n \in \Nn$). Since most of the time we will consider the convolution product, we often simply write $fg$ for $f*g$ and $f^k$ ($k \in \Nn$) for the $k$-th power of $f$ with respect to the convolution product. For a nonzero arithmetic function $f$, $f^0$ is understood to be $1$. Unless otherwise stated, by $\cA$ we mean the $\Cc$-algebra $(\cA,+,*)$. However, we do also consider the structure $(\cA, +, \cdot$) where $\cdot$ is the pointwise multiplication of functions. This structure is also a $\Cc$-algebra but this time $\alpha \in \Cc$ is identified with the constant function $n \mapsto \alpha$ $(n \ge 1)$. For $k \in \Nn$, let $\varepsilon_k$ be the {\bf $k$-th coordinate map}, i.e. $\varepsilon_k(f) = f(k)$ ($f \in \cA$). Among the coordinate maps only $\varepsilon := \varepsilon_1$ is a $\Cc$-algebra homomorphism from $\cA$ to $\Cc$. For $X \subseteq \Cc$, let \begin{align*} \cA_{X} = \varepsilon^{-1}(X) = \{f \in \cA \colon f(1) \in X\}. \end{align*} We write $\cA_{\alpha}$ for $\cA_{\{\alpha\}}$. One sees that $\cA_0$ is the unique maximal ideal of $\cA$ by checking that its complement is the group of units of $\cA$. 
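The convolution product is easy to experiment with numerically. The following minimal Python sketch is our own illustration (the dict-based encoding and the helper name `conv` are assumptions, not part of the text); it represents an arithmetic function by its finitely many nonzero values and checks that $e_1$, the identification of the scalar $1$, is the unit for $*$:

```python
def conv(f, g, N):
    """Dirichlet convolution (f * g)(n) for 1 <= n <= N.
    An arithmetic function is encoded as a dict n -> value,
    with missing keys read as 0."""
    return {n: sum(f.get(d, 0) * g.get(n // d, 0)
                   for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)}

# e_1 (the identification of the scalar 1) is the unit for *:
e1 = {1: 1}
one = {n: 1 for n in range(1, 13)}     # the function denoted \vec{1}
assert conv(e1, one, 12) == one

# (\vec{1} * \vec{1})(n) counts the divisors of n:
d = conv(one, one, 12)
assert d[12] == 6                      # divisors of 12: 1, 2, 3, 4, 6, 12
```

The second check recovers the standard fact that $\vec{1}*\vec{1}$ is the divisor-counting function, matching the Dirichlet-series identity $\zeta(s)^2 = \sum d(n)/n^s$ under the isomorphism discussed later.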
The {\bf support} of an arithmetic function $f$, denoted by $\supp f$, is the set of natural numbers $n$ such that $f(n) \neq 0$. The {\bf order} of $f$, denoted by $v(f)$, is the least element of its support if $f \neq 0$ and is $\infty$ if $f = 0$. A {\bf prime divisor} of a set of natural numbers $A$ is a prime that divides some member of $A$. Following the notation in~\cite{aids}, we use $[A]$ to denote the set of prime divisors of $A$. We say that $A$ is {\bf (multiplicatively) finitely generated} if $[A]$ is finite. We use $\cT$ and $\cS$ to denote the subalgebras of $\cA$ consisting of arithmetic functions with finite support and finitely generated support, respectively. Note that $\cT$ is the $\Cc$-subalgebra of $\cS$ generated by the $e_n$ ($n \in \Nn$). \begin{lem} \label{l:prod_order} Let $f_1,\ldots, f_n \in \cA$ and $a_1,\ldots, a_n$ be real numbers such that $0 < a_i \le v(f_i)$ for each $1 \le i \le n$. Then \begin{align} \label{eq:eval} (f_1*\cdots *f_n)\left( \prod_{i=1}^n a_i \right) = \prod_{i=1}^n f_i(a_i). \end{align} \end{lem} \begin{proof} First, if some $f_i = 0$, then both sides of~\eqref{eq:eval} are 0. So let us assume the order of each $f_i$ is finite. For $a \in \Rr$, we have \begin{align} \label{eq:sum} (f_1*\cdots * f_n)(a) = \sum_{\begin{smallmatrix} d_1\cdots d_n=a \\ d_i \in \Nn \end{smallmatrix}} f_1(d_1)\cdots f_n(d_n). \end{align} The summand $f_1(d_1)\cdots f_n(d_n)$ appearing in~\eqref{eq:sum} can be nonzero only if $d_i \ge v(f_i) (\ge a_i)$ for each $i$. So by taking $a = a_1\cdots a_n$, we see that $f_1(d_1)\cdots f_n(d_n) \neq 0$ if and only if $d_i = v(f_i) = a_i$ for each $i$. Thus either $a_i < v(f_i)$ for some $i$, in which case both sides of~\eqref{eq:eval} are zero, or else $a_i = v(f_i)$ for each $i$, in which case both sides of~\eqref{eq:eval} equal $f_1(v(f_1))\cdots f_n(v(f_n))$. \end{proof} \begin{prop} \label{p:det_val} Let $f_{ij}$ be arithmetic functions $(1 \le i,j \le n)$.
Suppose $a_i,b_i$ $(1 \le i \le n)$ are positive real numbers such that $a_ib_j \le v(f_{ij})$ for $1 \le i,j\le n$. Then \begin{align} \label{eq:det_val} \det\left(f_{ij}\right)\left( \prod_{k=1}^n a_kb_k \right) = \det\left( f_{ij}(a_ib_j) \right). \end{align} \end{prop} \begin{proof} For each permutation $\xi$ of $\{1,\dots, n\}$, by Lemma~\ref{l:prod_order} we have \begin{align*} \left(\sgn(\xi)\prod_{k=1}^n f_{k\xi(k)} \right)\left( \prod_{k=1}^n a_kb_k \right) &= \left(\sgn(\xi)\prod_{k=1}^n f_{k\xi(k)} \right)\left( \prod_{k=1}^n a_kb_{\xi(k)} \right)\\ &=\sgn(\xi)\prod_{k=1}^n f_{k\xi(k)}(a_kb_{\xi(k)}). \end{align*} Equation~\eqref{eq:det_val} now follows by summing through the permutations. \end{proof} Let $\norm{f}$ denote the reciprocal of $v(f)$ with the convention $1/\infty =0$. The assignment $f \mapsto \norm{f}$ is a non-archimedean norm on $\cA$. In particular, $\norm{f*g} = \norm{f}\norm{g}$ and consequently $\cA$ is an integral domain. The ring operations of $\cA$ are continuous with respect to the (ultra)metric induced by this norm. A sequence $(f_n)$ of arithmetic functions converges to an arithmetic function $f$, written as $f_n \to f$, if and only if the sequence of rational numbers $(\norm{f_n -f})_n$ converges to $0$. Note also that a map from $\cA$ to itself is continuous if and only if it preserves convergence of sequences. Since the norm under consideration is non-archimedean, the series $\sum_{k=0}^\infty f_k$ converges if and only if $f_k \to 0$. In particular, for any formal power series $\sum \alpha_k X^k$ over $\Cc$ and $g \in \cA$, the series $\sum \alpha_k g^k$ converges if and only if $\norm{g} < 1$, or equivalently $g \in \cA_0$. The map defined by \begin{align*} f \longmapsto \Exp(f) = \sum_{k =0}^\infty \frac{f^k}{k!} \end{align*} is a continuous isomorphism of groups from $(\cA_0, +)$ to $(\cA_1, *)$~\cite[Theorem~2.20]{ant}.
We extend it to the {\bf exponential map} on $\cA$ by \begin{align*} f \longmapsto \exp(f(1))*\Exp(f-f(1)) \end{align*} where $\exp$ denotes the exponential map of $\Cc$. This extension is still a continuous group homomorphism from $(\cA,+)$ to $(\cA^{\times}, *)$ but is no longer injective, since it extends the complex exponentiation. However, its restriction to $\cA_{\Rr}$, as shown by Rearick in~\cite{rearick}, is indeed a continuous group isomorphism from $(\cA_{\Rr},+)$ to $(\cA_+,*)$, where $\cA_+$ is the inverse image of the set of positive reals under $\varepsilon$. The inverse of this group isomorphism, known as the {\bf Rearick logarithm}, is also continuous and we denote it by $\Log$. For convenience, we understand $\Exp^0 = \Log^0$ as the identity map of $\cA$; and for $k \ge 1$, $\Exp^{-k} = \Log^k$. For any $f \in \cA$, there is a largest $k \ge 0$ such that $\Log^k f$ is defined: $k=0$ if $f \notin \cA_{+}$; otherwise $k \ge 1$ is the least integer such that $\log^{k}(f(1)) \le 0$ (here $\log$ is the real logarithm). For a nonempty $W \subseteq \cA$, let $k_W$ be the largest non-negative integer such that $\Log^{k_W} f$ is defined for each $f \in W$. We write $\Exp^* W$ for the set \begin{align*} \{\Exp^m f \colon f \in W, m \ge -k_W\}. \end{align*} The ring of arithmetic functions is isomorphic, as a $\Cc$-algebra, to the ring of formal Dirichlet series~\cite[\S~4.6]{hint} via \begin{align} \label{eq:isods} f \longleftrightarrow F(s) = \sum_{n \in \Nn} \frac{f(n)}{n^s}. \end{align} Under this isomorphism $\vec{1}$ is identified with $\sum 1/n^s$, the Dirichlet series of the Riemann zeta function $\zeta(s)$. In general, for $A \subseteq \Nn$, $\vec{1}_A$ is identified with the Dirichlet series $\sum_{n \in \Nn} \vec{1}_A(n)/n^s$, which converges on a proper right half-plane and extends to a meromorphic function on $\Cc$. We call this function the {\bf zeta function of $A$} and denote it by $\zeta_A(s)$.
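Since $f \in \cA_0$ has $v(f) \ge 2$, the power $f^k$ has order at least $2^k$, so the series defining $\Exp(f)$ truncates when values are only needed up to a bound $N$. The sketch below (our own illustration; the dict encoding and names are assumptions) verifies numerically that $\Exp$ turns $+$ into $*$, i.e. $\Exp(f+g)=\Exp(f)*\Exp(g)$:

```python
from math import factorial

def conv(f, g, N):
    """Dirichlet convolution up to N; functions encoded as dicts n -> value."""
    return {n: sum(f.get(d, 0) * g.get(n // d, 0)
                   for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)}

def Exp(f, N):
    """Exp(f) = sum_k f^k / k! evaluated at 1..N, for f with f(1) = 0.
    Since v(f) >= 2, f^k vanishes below 2^k, so only finitely many terms matter."""
    assert f.get(1, 0) == 0
    result = {n: 0 for n in range(1, N + 1)}
    result[1] = 1                        # the k = 0 term f^0 = e_1
    power = {1: 1}
    k = 0
    while 2 ** k <= N:                   # terms with 2^k > N contribute nothing
        k += 1
        power = conv(power, f, N)        # f^k truncated at N
        for n in range(1, N + 1):
            result[n] += power.get(n, 0) / factorial(k)
    return result

f = {2: 1.0}
g = {3: 2.0}
N = 16
lhs = Exp({n: f.get(n, 0) + g.get(n, 0) for n in range(1, N + 1)}, N)
rhs = conv(Exp(f, N), Exp(g, N), N)
assert all(abs(lhs[n] - rhs[n]) < 1e-12 for n in range(1, N + 1))
```

For $f = e_2$ one sees $\Exp(f)(2^j) = 1/j!$, a hand-checkable special case of the isomorphism $(\cA_0,+) \cong (\cA_1,*)$.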
The ring of arithmetic functions is also isomorphic, as a $\Cc$-algebra, to the formal power series ring over $\Cc$ in countably many variables $t_p$ ($p \in \Pp$) via \begin{align} \label{eq:isops} f \longleftrightarrow F(\vec{t}) = \sum_{n \in \Nn} f(n)\prod_p t_p^{v_p(n)}, \end{align} where $v_p(m)$ is the exponent of $p$ in the prime factorization of $m$. Under this isomorphism $e_p$ is mapped to the variable $t_p$. The isomorphism in~\eqref{eq:isops} was utilized by Cashwell and Everett in~\cite{rntf} to show that $\cA$ is a unique factorization domain. By a {\bf derivation} of $\cA$ we mean a $\Cc$-linear map from $\cA$ to itself satisfying the Leibniz rule: $D(f*g) = Df*g + f*Dg$. For simplicity, we do not distinguish by notation a derivation of $\cA$ and its unique extension to $\cF$, the field of fractions of $\cA$. Let $\Delta$ be a set of derivations of $\cA$. By the {\bf kernel} of $\Delta$, written as $\ker \Delta$, we mean the intersection of the kernels of its members. By $\ker_{\cF} \Delta$ we mean the same but regard the members of $\Delta$ as derivations of $\cF$. In particular, $\ker \emptyset$ and $\ker_{\cF} \emptyset$ are $\cA$ and $\cF$, respectively. It is routine to check that $\ker_{\cF} \Delta$ is a subfield of $\cF$ extending $\Cc$ whose intersection with $\cA$ is $\ker \Delta$. The {\bf log-derivation} of $\cA$, denoted by $\partial_L$, is the map sending $f \in \cA$ to the arithmetic function defined by \begin{align*} (\partial_L f)(n) = \log(n)f(n). \end{align*} Under the isomorphism in~\eqref{eq:isods}, $\partial_L$ corresponds to the derivation $-d/ds$. For each prime $p$, the {\bf $p$-basic derivation} of $\cA$, denoted by $\partial_p$, is the map sending $f \in \cA$ to the arithmetic function defined by \begin{align*} (\partial_p f)(n) = f(np)v_p(np). \end{align*} Under the isomorphism in~\eqref{eq:isops}, $\partial_p$ corresponds to $\partial/\partial t_p$, the partial derivation with respect to $t_p$.
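Both derivations are concrete enough to test numerically. A minimal Python sketch (our own encoding; `conv`, `d_p`, `d_L` are illustrative names, not from the text) checks the Leibniz rule for $\partial_p$ and $\partial_L$, with the caveat that computing $\partial_p(f*g)$ up to $N$ requires knowing $f*g$ up to $pN$:

```python
from math import log

def conv(f, g, N):
    """Dirichlet convolution (f * g)(n) for 1 <= n <= N; dicts n -> value."""
    return {n: sum(f.get(d, 0) * g.get(n // d, 0)
                   for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)}

def vp(n, p):
    """v_p(n): exponent of p in the prime factorization of n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def d_p(f, p, N):
    """p-basic derivation: (d_p f)(n) = f(np) * v_p(np)."""
    return {n: f.get(n * p, 0) * vp(n * p, p) for n in range(1, N + 1)}

def d_L(f, N):
    """log-derivation: (d_L f)(n) = log(n) * f(n)."""
    return {n: log(n) * f.get(n, 0) for n in range(1, N + 1)}

p, N = 2, 12
f = {2: 1, 6: 3}
g = {1: 5, 4: 2}

# Leibniz rule for the basic derivation d_p:
lhs = d_p(conv(f, g, p * N), p, N)          # d_p(f*g), needs f*g up to pN
rhs = {n: conv(d_p(f, p, N), g, N)[n] + conv(f, d_p(g, p, N), N)[n]
       for n in range(1, N + 1)}
assert lhs == rhs

# Leibniz rule for the log-derivation, since log(de) = log d + log e:
lhsL = d_L(conv(f, g, N), N)
rhsL = {n: conv(d_L(f, N), g, N)[n] + conv(f, d_L(g, N), N)[n]
        for n in range(1, N + 1)}
assert all(abs(lhsL[n] - rhsL[n]) < 1e-12 for n in range(1, N + 1))
```

Under the power-series isomorphism, the first check is the familiar identity $\partial(FG)/\partial t_p = (\partial F/\partial t_p)G + F(\partial G/\partial t_p)$ in miniature.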
A derivation of $\cA$ is {\bf basic} if it is $\partial_p$ for some $p$. The kernel of $\partial_L$ is $\Cc$ and the kernel of $\partial_p$ consists of arithmetic functions that vanish on the multiples of $p$. In other words, \begin{align} \label{eq:supp_ker} f \in \ker \partial_p \iff p \notin [\supp f]. \end{align} Thus the kernel of the set of basic derivations is also $\Cc$. Basic derivations and the log-derivation are continuous. For a nice characterization of continuous derivations of $\cA$, we refer the reader to~\cite[Section~4]{conv}. We consider continuous derivations because the derivative of a power series with respect to a continuous derivation can be computed term-by-term: \begin{lem} \label{l:contd} Suppose $D$ is a continuous derivation of $\cA$ and $g \in \cA_0$. Then for any formal power series $\sum_{k=0}^{\infty} \alpha_k X^k$ over $\Cc$, \begin{align*} D\left( \sum_{k=0}^\infty \alpha_k g^k \right) = \left(\sum_{k=1}^{\infty} k\alpha_kg^{k-1}\right)* Dg. \end{align*} \end{lem} \begin{proof} Since $D$ is $\Cc$-linear and satisfies the Leibniz rule, for each $n \in \Nn$, \begin{align} \label{eq:partial_sum} D\left( \sum_{k=0}^n \alpha_k g^k \right) = \left(\sum_{k=1}^n k\alpha_k g^{k-1}\right)*Dg. \end{align} The left-hand side of~\eqref{eq:partial_sum} converges to $D\left( \sum_{k=0}^\infty \alpha_kg^k \right)$ by continuity of $D$. Since $g \in \cA_0$ and the convolution product is continuous, the right-hand side of~\eqref{eq:partial_sum} converges to $\left( \sum_{k=1}^\infty k\alpha_kg^{k-1}\right)*Dg$. The lemma now follows from the uniqueness of limits. \end{proof} \begin{prop} \label{p:Dlog} For any continuous derivation $D$ of $\cA$ and $f \in \cA$, $D(\Exp(f)) = \Exp(f)*Df$. \end{prop} \begin{proof} By applying Lemma~\ref{l:contd} to the series $\sum_{k=0}^{\infty} X^k/k!$ we conclude that $D\Exp(f) = \Exp(f)*Df$ for any $f \in \cA_0$.
In general, since $\ker D \supseteq \Cc$, it follows that for $f \in \cA$, \begin{align*} D\Exp(f) &= D(\exp(f(1))*\Exp(f-f(1)))\\ &= \exp(f(1))*D(\Exp(f-f(1)))\\ &= \exp(f(1))*\Exp(f-f(1))*D(f-f(1))\\ &= \Exp(f)*Df. \end{align*} \end{proof} \begin{cor} \label{c:EL-KerD} Suppose $\Delta$ is a set of continuous derivations of $\cA$. Then $f \in \ker\Delta$ if and only if $\Exp(f) \in \ker\Delta$. Moreover, if $f \in \cA_{+}$ then $f \in \ker \Delta$ if and only if $\Log f \in \ker\Delta$. \end{cor} \begin{proof} By Proposition~\ref{p:Dlog}, $D(\Exp(f)) = Df * \Exp(f)$ for any $D \in \Delta$. Since $\Exp(f) \neq 0$, the first assertion follows. The second assertion follows from the first because for $f \in \cA_{+}$, $f = \Exp(\Log(f))$. \end{proof} \begin{prop} \label{p:der_Exp} Suppose $f_1,\ldots, f_n \in \cA$ and $D_1,\ldots, D_n$ are continuous derivations of $\cA$. Then for any $k \in \Zz$ such that $\Exp^k f_i$ is defined for all $1 \le i \le n$, \begin{align*} \det(D_j f_i) = 0 \iff \det(D_j \Exp^{k} f_i) = 0. \end{align*} \end{prop} \begin{proof} It suffices to show that for any $g_1,\ldots, g_n \in \cA$, $\det\left( D_j g_i \right) = 0$ if and only if $\det\left( D_j \Exp g_i \right) = 0$. But this follows immediately from Proposition~\ref{p:Dlog}, since \begin{align*} \det\left( D_j \Exp g_i \right) = \det\left( \Exp(g_i)*D_j g_i \right) = \det\left( D_j g_i \right)\prod_{i = 1}^n \Exp(g_i) \end{align*} and $\Exp g \neq 0$ for any $g \in \cA$. \end{proof} As another application of Proposition~\ref{p:Dlog}, let us compute the function $\kappa := \Log \vec{1}$. On the one hand, $\partial_L\vec{1} = \partial_L \kappa *\vec{1}$. On the other hand, \begin{align*} \partial_L \vec{1}(n) = \log(n) = \sum_{p|n} v_p(n)\log p = \sum_{p^j | n} \log p = (\Lambda * \vec{1})(n). \end{align*} So $\partial_L \kappa = \Lambda$, von Mangoldt's function.
Thus, \begin{align*} \kappa(n) = \begin{cases} \dfrac{\Lambda(p^j)}{\log(p^j)} = \dfrac{1}{j} &\text{if $n = p^j$ for some prime $p$ and $j \ge 1$;} \\ 0 &\text{otherwise.} \end{cases} \end{align*} For $g \in \cA$, let $\mf_g$ denote the $\Cc$-linear map from $\cA$ to itself defined by $\mf_g(f) = g \cdot f$ (pointwise product). It is clear that $\norm{\mf_g(f)} \le \norm{f}$. Thus, $\mf_g$ preserves null sequences and hence is continuous by linearity. It is also clear that $\mf_h$ is the compositional inverse of $\mf_g$ if and only if $h$ is the pointwise multiplicative inverse of $g$. If $g$ is {\bf completely additive}, i.e. $g(nm) = g(n) + g(m)$ for all $n,m \in \Nn$, one checks that $\mf_g$ is a (continuous) derivation of $\cA$, and vice versa. For example, $\mf_{\log}$ is simply the log-derivation $\partial_L$. We will use the more suggestive notation $\partial_g$ for $\mf_g$ in case it is a derivation. A completely additive function is determined by its action on the primes and its value at 1 must be $0$. Besides the real logarithm, the $p$-adic valuation $v_p$ and the function $\Omega$, which counts (with multiplicity) the total number of prime factors of its argument, are some examples of completely additive functions. If $g$ is {\bf completely multiplicative}, i.e. $g \neq 0$ and $g(nm) = g(n)g(m)$ for all $n,m \in \Nn$, one checks that $\mf_g$ is a nonzero (continuous) $\Cc$-algebra endomorphism of $\cA$, and vice versa. If, in addition, $g$ vanishes nowhere, then its pointwise multiplicative inverse is also completely multiplicative. Thus $\mf_g$ is a continuous automorphism of $\cA$. For example, $\mf_{\mathbf{I}}$, where $\mathbf{I}$ is the identity map of $\Nn$, is a continuous automorphism of $\cA$. A completely multiplicative function is determined by its action on the primes and its value at $1$ must be $1$.
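For instance, $\Omega$ is completely additive, so pointwise multiplication by it should satisfy the Leibniz rule with respect to $*$. A quick Python check (our own encoding; `Omega` and `m_Omega` are illustrative names, not from the text):

```python
def conv(f, g, N):
    """Dirichlet convolution up to N; functions encoded as dicts n -> value."""
    return {n: sum(f.get(d, 0) * g.get(n // d, 0)
                   for d in range(1, n + 1) if n % d == 0)
            for n in range(1, N + 1)}

def Omega(n):
    """Number of prime factors counted with multiplicity (completely additive)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def m_Omega(f, N):
    """Pointwise multiplication by Omega; a derivation since Omega(de) = Omega(d) + Omega(e)."""
    return {n: Omega(n) * f.get(n, 0) for n in range(1, N + 1)}

N = 30
f = {2: 1, 3: 4}
g = {5: 2, 6: -1}
lhs = m_Omega(conv(f, g, N), N)
rhs = {n: conv(m_Omega(f, N), g, N)[n] + conv(f, m_Omega(g, N), N)[n]
       for n in range(1, N + 1)}
assert lhs == rhs          # Leibniz rule for the derivation \partial_\Omega
```

The same check fails for a function that is not completely additive, which is one way to see that complete additivity is exactly what the Leibniz rule requires of $g$.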
Besides the identity function, the map $n \mapsto n^\alpha$ ($\alpha \in \Cc$) and $\vec{1}_p$ are some examples of completely multiplicative functions. We conclude this section by an observation that will be used a number of times in Section~\ref{sec:mg-ind}. \begin{lem} \label{l:ord_inv} For any $f,g \in \cA$, $p \in \Pp$ and $i \in \Zz$ such that $\mf_g^i$ is defined, $v(\partial_p f) \le v(\partial_p \mf_g^i(f))$. Moreover, the equality holds if $g(m) \neq 0$ for all $m >1$. \end{lem} \begin{proof} For any $n \ge 1$, \begin{align} \label{eq:ord_pre} \partial_p \mf^i_g(f) (n) &= v_p(np)(g(np))^if(np) = (g(np))^i\partial_pf(n). \end{align} Thus $\partial_p f(n) =0$ implies $\partial_p \mf^i_g(f)(n) = 0$ and so the inequality in the lemma holds. Furthermore, if $g(np) \neq 0$ for all $n$, the reverse implication is also true. Thus, $\partial_p f$ and $\partial_p \mf_g^i f$ must have the same order. \end{proof} \section{Ax's Theorem for $\cA$} \label{sec:Ax4A} Our main observation is simple: Ax's Theorem holds for $(\cA, + , *)$. \begin{thm} \label{th:sa} Suppose $\cC = \ker_{\cF} \Delta$ for some set $\Delta$ of continuous derivations of $\cA$. Let $f_1,\dots, f_n$ be arithmetic functions such that either \begin{enumerate}[{\upshape(}$1${\upshape)}] \item \label{i:power} $\Exp(f_1),\dots, \Exp(f_n)$ are multiplicatively independent modulo $\cC^{\times}$; or \item \label{i:lin-ind} $f_1,\ldots, f_n$ are $\Qq$-linearly independent modulo $\cC$. \end{enumerate} Then \begin{align*} \td_{\cC}\cC(f_1,\dots,f_n, \Exp(f_1),\dots, \Exp(f_n)) \ge n + \rk(Df_i)_{\begin{smallmatrix} D \in \Delta \\ 1 \le i \le n \end{smallmatrix}} \end{align*} \end{thm} \begin{proof} Take the field $F$ in Theorem~\ref{th:genax} to be $\cF$ and $C = \cC = \ker_{\cF}\Delta$. Let $y_i = f_i$ and $z_i = \Exp f_i$ ($i =1,\dots,n$). Then by Proposition~\ref{p:Dlog}, $Dy_i = Dz_i/z_i$ for all $D \in \Delta$ and $1 \le i \le n$. Therefore, Condition~(a) in Theorem~\ref{th:genax} holds. 
Conditions~\eqref{i:power} and~\eqref{i:lin-ind} now translate into Conditions~(b) and~(b$^\prime$) in Theorem~\ref{th:genax}, respectively, and so the inequality about the transcendence degree follows. \end{proof} As our first illustration of the power of Ax's theorem, we use it to deduce the following generalization of Theorem~5.3 of~\cite{aids}. For $f \in \cA_+$ and $g \in \cA$, we write $f^g$ as a shorthand for the function $\Exp(g*\Log f)$. \begin{thm} \label{th:Log_power} Let $\Delta$ be a set of continuous derivations of $\cA$ and $\cC = \ker_{\cF} \Delta$. Suppose $f \in \cA_+ \setminus \ker \Delta$ and $1=c_0, c_1, \ldots, c_n \in \ker\Delta$ are linearly independent over $\Qq$. Then $\Log f, f=f^{c_0}, f^{c_1},\ldots, f^{c_n}$ are algebraically independent over $\cC$. \end{thm} \begin{proof} By Corollary~\ref{c:EL-KerD}, $\Log f \notin \ker \Delta$. Thus $D_0 \Log f \neq 0$ for some $D_0 \in \Delta$. We claim that $f = f^{c_0}$, $f^{c_1},\ldots, f^{c_n}$ are multiplicatively independent modulo $\cC^{\times}$. Suppose not; then there exist integers $k_0,\ldots, k_n$, not all zero, such that \begin{align*} f^{k_0}f^{k_1c_1}\cdots f^{k_nc_{n}} =\Exp\left( (k_0 + k_1c_1+ \dots + k_nc_n)\Log f \right) \end{align*} belongs to $\cC \cap \cA = \ker \Delta$. An application of Corollary~\ref{c:EL-KerD} yields $(k_0 + k_1c_1 + \dots +k_nc_n) \Log f \in \ker \Delta$. In particular, \begin{align*} 0 &= D_0( (k_0 + k_1c_1 + \cdots + k_nc_n)\Log(f)) \\ &= (k_0 + k_1c_1 + \cdots + k_{n}c_{n})D_0(\Log f). \end{align*} Since $D_0(\Log f) \neq 0$, that means $k_0+k_1c_1 + \dots + k_nc_n$ must be zero, contradicting the assumption that $1,c_1,\dots, c_n$ are $\Qq$-linearly independent. This establishes the claim.
Now by applying Theorem~\ref{th:sa} to the $n+1$ functions $c_i\Log f$ ($0 \le i \le n$), we conclude that the transcendence degree of the field \begin{align*} \cC(c_i\Log f,f^{c_i})_{0 \le i \le n} = \cC(\Log f,f,f^{c_1},\ldots, f^{c_n}) \end{align*} over $\cC$ is at least \begin{align*} (n+1) + \rk_{\cF}(D\Log f, c_iD\Log f)_{\begin{smallmatrix} D \in \Delta \\ 1 \le i \le n \end{smallmatrix}}. \end{align*} Since $D_0\Log f \neq 0$, the rank appearing above is $1$. This establishes the algebraic independence of $\Log f,f, f^{c_i}$ ($1 \le i \le n$) over $\cC$. \end{proof} \begin{cor} With the same notation as in Theorem~\ref{th:Log_power}, $\Log f$ is transcendental over $\cC(f,f^{c_1},\ldots, f^{c_n})$ for any $c_1,\ldots, c_n \in \ker \Delta$. \label{c:trans} \end{cor} \begin{proof} By re-indexing, if necessary, $1=c_0, c_1,\ldots, c_m$ (for some $0 \le m \le n$) form a basis of the $\Qq$-span of $1,c_1,\ldots, c_n$. By Theorem~\ref{th:Log_power}, $\Log f$ is transcendental over $\cC(f,f^{c_1}, \ldots, f^{c_m})$. Since each $c_i$ ($0 \le i \le n$) is a $\Qq$-linear combination of $1,c_1,\ldots, c_m$, each $f^{c_i}$ is algebraic over $\cC(f,f^{c_1},\ldots, f^{c_m})$ and so the corollary follows. \end{proof} The following corollary is a very special case of Corollary~\ref{c:trans}. We refer the reader to~\cite[Section~5]{aids} for its consequences. \begin{cor} \label{c:log_zeta_power} For any complex numbers $c_1,\dots,c_n$, $\log \zeta$ is transcendental over $\Cc(\zeta^{c_1},\dots,\zeta^{c_n})$. In particular, $\log \zeta$ is transcendental over $\Cc(\zeta)$. \end{cor} \begin{proof} By invoking the isomorphism in~\eqref{eq:isods}, it suffices to show that the function $\kappa = \Log \vec{1}$ is transcendental over $\Cc(\vec{1}, \vec{1}^{c_1},\ldots, \vec{1}^{c_n})$, but that follows immediately from Corollary~\ref{c:trans} by taking $\Delta = \{\partial_L\}$ and $f = \vec{1}$.
\end{proof} The central result about algebraic independence of arithmetic functions is the following criterion of Shapiro and Sparer~\cite[Theorem~3.1]{aids}. We refer the reader to~\cite{aids,imaf} and~\cite{obs} for its numerous applications. \begin{thm*} \label{th:Jac} Let $f_1,\ldots, f_n$ be arithmetic functions. Suppose $D_1, \ldots, D_n$ are derivations of $\cA$ such that $\det(D_jf_i) \neq 0$; then $f_1, \ldots, f_n$ are algebraically independent over $\ker\{D_1,\ldots, D_n\}$. \end{thm*} As our second illustration of the power of Ax's Theorem, we use it to strengthen the Jacobian criterion when the derivations involved are continuous. \begin{thm} \label{th:gJac} Let $f_1, \ldots, f_n \in \cA$. Suppose $D_1,\ldots, D_n$ are continuous derivations of $\cA$ such that $\det\left( D_j f_i \right) \neq 0$; then the set of arithmetic functions \begin{align*} \Exp^*\{f_i \colon 1 \le i \le n\} \end{align*} is algebraically independent over $\ker\{D_1,\dots, D_n\}$. \end{thm} \begin{proof} Let $\cC=\ker_{\cF}\{D_1,\ldots, D_n\}$ and let $k_0 \ge 0$ be the largest integer such that for each $1 \le i \le n$, $g_i := \Log^{k_0} f_i$ is defined. It then suffices to show that for any $m \ge 1$, the set of arithmetic functions \begin{align*} \{\Exp^k g_i \colon 0 \le k \le m, 1 \le i \le n\} \end{align*} is algebraically independent over $\cC \supseteq \ker\{D_1,\ldots, D_n\}$. We will prove this by induction on $m$. First, let us argue that $g_1,\ldots, g_n$ are $\Qq$-linearly independent modulo $\cC$. Suppose some $\Qq$-linear combination $\sum r_ig_i$ of the $g_i$'s belongs to $\cC$; then by applying $D_j$ ($1 \le j \le n$) to the linear combination we obtain a system of $n$ linear equations: \begin{align*} \sum_{i=1}^n r_i D_j g_i = 0 \qquad (1 \le j \le n). \end{align*} Since $\det\left( D_j f_i \right) \neq 0$, by Proposition~\ref{p:der_Exp} $\det\left( D_j g_i \right) \neq 0$ as well, and so the $r_i$ ($1 \le i \le n$) must all be zero. This establishes the claim.
Now we can apply Theorem~\ref{th:sa} to $g_1,\ldots, g_n$ and conclude that \begin{align*} \td_{\cC}\cC(g_1,\dots, g_n, \Exp(g_1),\dots, \Exp(g_n)) \ge n + \rk(D_j g_i). \end{align*} Again since $\det(D_jg_i) \neq 0$, the $\cF$-rank of $(D_j g_i)$ is $n$. This establishes the algebraic independence of $g_i, \Exp(g_i)$ ($1 \le i \le n$) over $\cC$, i.e. the case $m=1$ of the theorem. For the induction step, suppose the functions $\Exp^k(g_i)$ $(0 \le k \le m, 1 \le i \le n)$ are algebraically independent over $\cC$ for some $m \ge 1$. In particular, these functions are $\Qq$-linearly independent modulo $\cC$, and we conclude from Theorem~\ref{th:sa} that the transcendence degree of the field \begin{align*} \cE: = \cC&(\Exp^k(g_i) \colon 0 \le k \le m+1, 1 \le i \le n) \end{align*} over $\cC$ is at least $n(m+1) + \rk V$, where $V$ is the set of vectors \begin{align*} \{ (D_j \Exp^k(g_i))_{1 \le j \le n} \colon 0 \le k \le m, 1 \le i \le n\}. \end{align*} Again because $\det\left( D_j g_i \right) \neq 0$, the $\cF$-rank of $V$ is at least (in fact exactly) $n$. Consequently, the transcendence degree of $\cE$ over $\cC$ is $(m+2)n$. This establishes the induction step and hence the theorem. \end{proof} Theorem~\ref{th:gJac}, strictly speaking, is not a generalization of the Jacobian criterion because it requires the derivations involved to be continuous. However, to the best of our knowledge, all existing applications of this criterion involve only the log-derivation and the basic derivations, so Theorem~\ref{th:gJac} is applicable to all of them. In the next two sections, we will generalize a number of results in~\cite{aids, imaf} and~\cite{obs} in various directions. \section{Algebraic Independence} \label{sec:algdep} We begin this section with a very special case of Theorem~\ref{th:gJac} when only a single derivation is involved. \begin{prop} \label{p:keracl} Let $D$ be a continuous derivation of $\cA$ and $f \notin \ker D$.
Then $\Exp^*\{f\}$ is algebraically independent over $\ker D$. In particular, $\ker D$ is algebraically closed in $\cA$. \end{prop} Proposition~\ref{p:keracl} generalizes Proposition~2.1 of~\cite{imaf}. For example, by taking $D = \partial_L$, one sees that $\Cc$ is algebraically closed in $\cA$ and that $\Log(f), f, \Exp(f)$ are algebraically independent over $\Cc$ for $f \in \cA_+\setminus \Cc$. We should point out that the kernel of a derivation of $\cA$, whether continuous or not, is always algebraically closed in $\cA$. As a matter of fact, the argument given in Lemma~2.1 of~\cite{aids} works for any characteristic zero integral domain. From Proposition~\ref{p:keracl}, we can also deduce the following generalization of Theorem~2.1 of~\cite{aids}. \begin{thm} \label{th:fgsa} Let $f \in \cA$ and $(g_i)_{i\in I}$ be a family of arithmetic functions. Suppose \begin{align*} [\supp f] \not\subseteq \bigcup_{i \in I} [\supp g_i]; \end{align*} then $\Exp^*\{f\}$ is algebraically independent over the subalgebra of $\cA$ generated by the $g_i$ $(i \in I)$. \end{thm} \begin{proof} By assumption, there is a prime $p \in [\supp f]$ that is not in the union of the $[\supp g_i]$ ($i \in I$). So by Proposition~\ref{p:keracl}, $\Exp^*\{f\}$ is algebraically independent over $\ker \partial_p$, which contains the subalgebra of $\cA$ generated by the $g_i$ ($i \in I$). \end{proof} We provide a proof of one of the many consequences of~\cite[Theorem~2.1]{aids}. The reader can consult~\cite[p.697-699]{aids} for the others. \begin{cor} \label{c:S_is_acl} $\cS$ is algebraically closed in $\cA$. \end{cor} \begin{proof} Suppose $g_1, \ldots, g_n \in \cS$ and $f \in \cA \setminus \cS$. Then $[\supp f]$ is infinite while the union of the $[\supp g_i]$ ($1 \le i \le n$) is finite. So it follows from Theorem~\ref{th:fgsa} that $\Exp^*\{f\}$, in particular $f$ itself, is algebraically independent over $\Cc[g_1,\ldots, g_n]$.
Since the $g_i \in \cS$ ($1 \le i \le n$) were taken arbitrarily, we conclude that $f$ is algebraically independent over any finitely generated subalgebra of $\cS$ and hence over $\cS$ itself. \end{proof} \begin{exa} \label{ex:1_tran_T} The function $\vec{1}$ is not a member of $\cS$, so by Corollary~\ref{c:S_is_acl} it is transcendental over $\cS$ and hence over $\cT$. In terms of Dirichlet series, that means the Riemann zeta function is transcendental over the subalgebra of {\bf Dirichlet polynomials} (Dirichlet series with only finitely many nonzero terms). \end{exa} In contrast, $\cT$ is not algebraically closed in $\cA$ (in fact, not even in $\cS$). For instance, $\vec{1}_2 = \sum_{k=0}^\infty e_2^k$ is in $\cS\setminus \cT$ but it is algebraic over $\cT$ since its inverse $1-e_2$ is in $\cT$. This shows, in particular, that the algebra of Dirichlet polynomials is not algebraically closed in the algebra of convergent Dirichlet series. \begin{thm} \label{th:tri} Let $f_1,\dots, f_n$ be arithmetic functions. Suppose there exist continuous derivations $D_1,\dots, D_n$ of $\cA$ such that $f_i \notin \ker D_i$ for each $1 \le i \le n$ and \begin{align*} f_i \in \ker D_j \qquad (1 \le i < j \le n). \end{align*} Then the set of arithmetic functions $\Exp^*\{f_1,\dots, f_n\}$ is algebraically independent over $\ker\{D_1,\dots, D_n\}$. \end{thm} \begin{proof} It is an immediate consequence of Theorem~\ref{th:gJac}. This is because the assumption implies $\left( D_j f_i \right)$ is a lower triangular matrix with non-zero entries on its diagonal, hence $\det \left( D_j f_i \right) \neq 0$. \end{proof} \begin{cor} \label{c:tri_supp} Let $f_1,\dots, f_n \in \cA$. Suppose there exist primes $p_1,\ldots, p_n$ such that $p_j \in [\supp f_j]$ for each $1 \le j \le n$ and $p_j \notin [\supp f_i]$ for each $1 \le i < j \le n$, then the set of arithmetic functions $\Exp^*\{f_1,\dots, f_n\}$ is algebraically independent over the kernel of $\{\partial_{p_1},\dots, \partial_{p_n} \}$.
\end{cor} \begin{proof} Take $D_j$ in Theorem~\ref{th:tri} to be $\partial_{p_j}$ ($1 \le j \le n$). \end{proof} \begin{exa} \label{ex:e_p} Let $p_1,\dots, p_n$ be distinct primes. By taking $f_i = e_{p_i}$ ($1 \le i \le n-1$) and $f_n = \vec{1}_\Pp$ in Corollary~\ref{c:tri_supp}, the $\Cc$-algebraic independence of $\Exp^*\{e_{p_1},\ldots, e_{p_{n-1}}, \vec{1}_\Pp\}$ follows. Moreover, since $n$ is arbitrary, that means the set of arithmetic functions \begin{align*} \Exp^*(\{e_p \colon p \in \Pp\} \cup \{\vec{1}_\Pp\}) \end{align*} is algebraically independent over $\Cc$. \end{exa} \begin{exa} \label{ex:two} Corollary~\ref{c:tri_supp} generalizes Lemma~3 of~\cite{obs}: Suppose $f_1,f_2 \in \cA \setminus \Cc$ with $[\supp f_1] \neq [\supp f_2]$. Without loss of generality, there is a prime $p_2 \in [\supp f_2]$ but not in $[\supp f_1]$. Since $f_1$ is not in $\Cc$, there exists a prime $p_1 \in [\supp f_1]$. Thus Corollary~\ref{c:tri_supp} implies $\Exp^*\{f_1,f_2\}$ is algebraically independent over $\Cc$. In particular, if $F(s)= \sum \alpha_n/n^s$ is a non-constant formal Dirichlet series such that $\alpha_n=0$ whenever $n$ is a multiple of some fixed prime $p$, then $F(s)$ and $\zeta(s)$ are algebraically independent over $\Cc$. \end{exa} Knowing that a function is non-vanishing at a particular point certainly implies that it is nonzero. The following proposition is hence a corollary of Theorem~\ref{th:gJac}. We invite the reader to prove it (or see~\cite[Corollary~2.3]{imaf} for a proof) by checking that the left-hand side of Equation~\eqref{eq:gvj} expresses the value of $\det\left( \partial_{p_j} f_i \right)$ at $m$.
\begin{prop} \label{p:gvj} For any $f_1,\ldots, f_n \in \cA$, if there exist distinct primes $p_1,\ldots, p_n$ such that for some $m \in \Nn$, \begin{align} \label{eq:gvj} \sum_{k_1\dotsm k_n =m} \left(\prod_{j=1}^n v_{p_j}(k_jp_j)\right) \det(f_i(k_jp_j)) \neq 0, \end{align} then the set of arithmetic functions $\Exp^*\{f_1,\ldots, f_n\}$ is algebraically independent over $\ker\{\partial_{p_1}, \ldots, \partial_{p_n}\}$. \end{prop} By setting the $m$ in Proposition~\ref{p:gvj} to various values, one obtains strengthened versions of Tests I--IV in~\cite{imaf}. These tests were used to establish algebraic independence of various Fibonacci and Lucas zeta functions~\cite[Propositions~2.5 and 2.6]{imaf}. We state here only the simplest case, i.e. $m=1$. \begin{cor} \label{c:ind_at_primes} Suppose $f_1,\dots, f_n$ are arithmetic functions such that $\det(f_i(p_j)) \neq 0$ for some primes $p_1,\ldots, p_n$. Then the set of functions \begin{align*} \Exp^*\{f_i \colon 1 \le i \le n\} \end{align*} is algebraically independent over $\ker\{\partial_{p_j} \colon 1 \le j \le n\}$. \end{cor} \begin{exa} \label{ex:1_and_1p} For any distinct primes $p_1,\dots, p_{n}$, take $f_i = \vec{1}_{p_i}$ ($1 \le i \le n-1$) and $f_{n} = \vec{1}$; then \begin{align*} \det\left( f_i(p_j) \right) = \det\begin{pmatrix} 1 & 0 &\dots & 0 \\ 0 & 1 &\dots & 0 \\ \vdots & \vdots &\ddots & \vdots \\ 1 & 1 & \dots & 1 \end{pmatrix} = 1. \end{align*} Thus by Corollary~\ref{c:ind_at_primes}, $\Exp^*\{\vec{1}_p, \vec{1} \colon p \in \Pp \}$ is algebraically independent over $\Cc$. \end{exa} \begin{exa} \label{ex:tau*} The function $\tau_*:=(\vec{1}-1)^2$, which counts the number of proper factors of its argument, and $\vec{1}_\Pp$ are algebraically independent over $\Cc$.
This is because $\partial_p \vec{1}_\Pp = 1$ for every prime $p$, so \begin{align*} \det \begin{pmatrix} \partial_2 \tau_* & \partial_3 \tau_* \\ \partial_2 \vec{1}_\Pp & \partial_3 \vec{1}_\Pp \end{pmatrix} & = \partial_2 \tau_* - \partial_3 \tau_* \end{align*} and its value at $4$ is $v_2(8)\tau_*(8) - v_3(12)\tau_*(12) = 2 \neq 0$. Note that Corollary~\ref{c:ind_at_primes} cannot be used to establish this fact since $\tau_*$, or more generally any member of the square of the maximal ideal of $\cA$, vanishes at every prime. \end{exa} For $f_1,\dots, f_n \in \cA$, let $\mu_{d}(\vec{f})$ be the minimum of $\norm{P(f_1,\dots, f_n)}$ taken over all complex polynomials $P$ of total degree $d$. The function $d \mapsto \mu_d(\vec{f})$ can be viewed as a quantitative measure of algebraic independence of $f_1,\dots, f_n$ over $\Cc$. Several results about this measure were proved in~\cite{imaf}. Our method, due to its non-constructive nature, cannot produce those results. However, the non-quantitative part of both Theorem~3.2 and Theorem~3.4 of~\cite{imaf} can be generalized as follows. \begin{thm} \label{th:m_j<=ord} Let $f_1,\ldots, f_n \in \cA$ and $D_1,\ldots, D_n$ be continuous derivations of $\cA$. Suppose $m_1,\ldots, m_n \in \Nn$ are such that $m_j \le v(D_jf_i)$ for all $1 \le i, j \le n$ and that $\det\left( D_jf_i(m_j) \right) \neq 0$; then the set of functions \begin{align*} \Exp^*\{f_i \colon 1 \le i \le n\} \end{align*} is algebraically independent over $\ker\{D_1,\ldots, D_n\}$. \end{thm} \begin{proof} By taking $a_i = 1$ and $b_j = m_j$ $(1 \le i,j \le n)$ in Proposition~\ref{p:det_val}, we conclude that the value of $\det\left( D_j f_i \right)$ at $m_1\cdots m_n$ is $\det\left( D_j f_i(m_j) \right)$, which is assumed to be nonzero. The algebraic independence statement now follows from Theorem~\ref{th:gJac}.
\end{proof} We can arrive at the same conclusion as Theorem~\ref{th:m_j<=ord} if $m_i \le v(D_jf_i)$ for all $1 \le i, j \le n$: the same proof goes through by taking $a_i = m_i$ and $b_j =1$ ($1 \le i,j \le n$). The next lemma is another easy consequence of Proposition~\ref{p:det_val}; it holds, more generally, for generalized Dirichlet series (see~\cite[Lemma~8.8]{aids}). \begin{lem} \label{l:dep_at_primes} Suppose $f_1,\ldots, f_n$ are non-zero arithmetic functions and $p_1,\ldots, p_n$ are $n$ distinct primes such that the Jacobian $\det(\partial_{p_j}f_i)$ is zero; then $\det(v_{p_j}(v f_i)) = 0$. \end{lem} \begin{proof} Let $m_i$ be the order of $f_i$ $(1 \le i \le n)$. Note that for $1 \le i, j \le n$, $0 < m_i/p_j \le v(\partial_{p_j}f_i)$. So by taking $a_i = m_i$ and $b_j = 1/p_j$ ($1 \le i,j \le n$) in Proposition~\ref{p:det_val}, we have \begin{align*} \det\left( \partial_{p_j} f_i \right)\left( \prod_{k=1}^n \frac{m_k}{p_k} \right) &= \det\left( \partial_{p_j}f_i\left( \frac{m_i}{p_j}\right) \right) = \det\left( v_{p_j}(m_i)f_i(m_i) \right)\\ &= \left(\prod_{i=1}^n f_i(m_i) \right)\det\left( v_{p_j}(m_i) \right). \end{align*} The lemma follows since $f_i(m_i)$ is non-zero for each $i$. \end{proof} Lemma~\ref{l:dep_at_primes} was used to prove Theorem~7 in~\cite{obs}, which states that a set of nonzero non-invertible arithmetic functions is algebraically independent over $\Cc$ if the norms of its members are pairwise relatively prime. Essentially the same proof yields a more general result: \begin{thm} \label{th:no-trivial-prod} Suppose $W$ is a set of non-zero arithmetic functions whose orders are multiplicatively independent; then $\Exp^*W$ is algebraically independent over $\Cc$. \end{thm} \begin{proof} Suppose on the contrary that $\Exp^*W$ is algebraically dependent over $\Cc$; then there exist $f_1,\ldots, f_n \in W$ such that \begin{align*} \Exp^*\{f_1,\ldots, f_n\} \end{align*} is algebraically dependent over $\Cc$.
So for any choice of distinct primes $p_1,\ldots, p_n$, $\det\left( \partial_{p_j}f_i \right) = 0$ by Theorem~\ref{th:gJac}, and as a result $\det(v_{p_j}(vf_i)) = 0$ by Lemma~\ref{l:dep_at_primes}. That means the set of vectors \begin{align*} \left\{ \begin{pmatrix} v_p(v f_1) \\ \vdots \\ v_p( vf_n) \end{pmatrix} \colon p \in \Pp \right\} \end{align*} has $\Qq$-rank strictly less than $n$. Since it has the same $\Qq$-rank as the set \begin{align*} \{(v_p(v f_i))_{p \in \Pp} \colon 1 \le i \le n\}, \end{align*} there exist $k_1, \ldots, k_n \in \Zz$ not all zero such that for each prime $p$, \begin{align*} 0 = \sum_{i=1}^n k_iv_p(v f_i) = v_p\left( \prod_{i=1}^n (v f_i)^{k_i} \right). \end{align*} That means $\prod_{i=1}^n (v f_i)^{k_i} = 1$, contradicting the assumption that the orders $v(f_i)$ ($1 \le i \le n$) are multiplicatively independent. \end{proof} \begin{exa} \label{ex:e_n} By Theorem~\ref{th:no-trivial-prod} the set $\Exp^*\{e_{n_1},\dots, e_{n_k}\}$ is algebraically independent over $\Cc$ if the orders $v(e_{n_i}) = n_i$ ($1 \le i \le k$) are multiplicatively independent. The converse is also true, and it follows easily from the fact that $e_m*e_n = e_{mn}$ for any $n,m \in \Nn$. Thus for a set of natural numbers $N$, the necessary and sufficient condition for \begin{align*} \Exp^*\{e_n \colon n \in N\} \end{align*} to be algebraically independent over $\Cc$ is that the elements of $N$ are multiplicatively independent. Note that Theorem~7 in~\cite{obs} alone does not imply this fact since there are multiplicatively independent numbers, such as 2 and 6, that are not relatively prime. \end{exa} \section{$\mf_g$-Transcendence} \label{sec:mg-ind} In this section we will establish some criteria for algebraic independence of images of a single arithmetic function under operators of the form $\mf_g$.
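The operators $\mf_g$ act by pointwise multiplication, $(\mf_g f)(n) = g(n)f(n)$, and their interaction with the convolution structure can be checked numerically on initial segments. The following is a small illustrative sketch of ours (in Python, not part of the argument); it assumes only the definitions of the Dirichlet convolution and of $\mf_g$, and verifies the Leibniz rule for the log-derivation $\partial_L = \mf_{\log}$ together with the multiplicativity of $\mf_g$ for the completely multiplicative test choice $g(n) = n^2$.

```python
import math

def conv(f, g, n):
    """Dirichlet convolution: (f*g)(n) = sum over divisors d of n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def mf(g):
    """The pointwise operator m_g: (m_g f)(n) = g(n) f(n)."""
    return lambda f: (lambda n: g(n) * f(n))

# two arbitrary test functions
f = lambda n: n + 1
h = lambda n: (-1) ** n * n

d_L = mf(math.log)          # g = log is completely additive: m_log is the log-derivation
T2  = mf(lambda n: n ** 2)  # g(n) = n^2 is completely multiplicative

for n in range(1, 50):
    # Leibniz rule: d_L(f*h) = (d_L f)*h + f*(d_L h)
    assert abs(d_L(lambda m: conv(f, h, m))(n)
               - conv(d_L(f), h, n) - conv(f, d_L(h), n)) < 1e-9
    # completely multiplicative g: m_g(f*h) = (m_g f)*(m_g h)
    assert T2(lambda m: conv(f, h, m))(n) == conv(T2(f), T2(h), n)

print("Leibniz and endomorphism checks passed")
```

For completely additive $g$ the identity $g(n) = g(d) + g(n/d)$ for $d \mid n$ is exactly what makes $\mf_g$ a derivation of the convolution algebra, while for completely multiplicative $g$ the identity $g(n) = g(d)g(n/d)$ makes $\mf_g$ an algebra endomorphism.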
Let $\cB$ be a subalgebra of $\cA$. We say that an arithmetic function $f$ is {\bf $\mf_g$-transcendental} over $\cB$ if $\{\mf_g^j f \colon j \in J\}$ is algebraically independent over $\cB$, where $J = \Nn \cup \{0\}$ if $\mf_g$ is not invertible, and $J = \Zz$ otherwise. \begin{thm} \label{th:[supp f]>n} Let $f, g$ be arithmetic functions. Suppose $p_1,\dots, p_n \in [\supp f]$ are such that $g(v(\partial_{p_j} f)p_j)$ $(1 \le j \le n)$ are distinct and nonzero. Then for any $k \ge 0$, the set of functions \begin{align*} \Exp^*\{\mf_g^i f \colon k \le i \le k+n-1\} \end{align*} is algebraically independent over $\ker\{\partial_{p_1},\dots, \partial_{p_n}\}$. Moreover, if $g$ is nowhere vanishing, then the same is true for any integer $k$. \end{thm} \begin{proof} Let $f_i = \mf_g^i f$ ($k \le i \le k+n-1$) and $m_j = v(\partial_{p_j}f)$ $(1 \le j \le n)$. By Lemma~\ref{l:ord_inv}, $m_j \le v(\partial_{p_j} f_i)$ for all $k \le i \le k+n-1$ and $1 \le j \le n$. So by Theorem~\ref{th:m_j<=ord} it suffices to show that \begin{align*} \det\left(\partial_{p_j}f_i(m_j)\right) & = \det\left( v_{p_j}(m_jp_j)(g(m_jp_j))^if(m_jp_j) \right)\\ &= \det\left( (g(m_jp_j))^{i-k} \right)\prod_j \partial_{p_j}f(m_j)(g(m_jp_j))^k \end{align*} does not vanish. This is indeed the case because for each $j$, $m_j$ is the order of $\partial_{p_j}f$, hence $\partial_{p_j}f(m_j) \neq 0$, and $g(m_jp_j) \neq 0$ by our assumption on $g$; moreover, the $g(m_jp_j)$ $(1 \le j \le n)$ are assumed to be distinct, so the last determinant is Vandermonde. Finally, nothing in the argument above prevents $k$ from being negative so long as $\mf_g^k$ is defined, but that precisely requires $g$ to be nowhere vanishing. \end{proof} \begin{exa} \label{ex:1Q} Let $Q$ be a nonempty finite set of primes.
Since for $q \in Q = [\supp \vec{1}_Q]$, \begin{align*} \log(v(\partial_q \vec{1}_Q)q) = \log(q) \end{align*} are all distinct and nonzero, it follows from Theorem~\ref{th:[supp f]>n} (by taking $g$ to be the real logarithm) that $\vec{1}_Q$ does not satisfy any differential algebraic equation with respect to $\partial_L$ of order less than $|Q|$ over the kernel of $\{\partial_q \colon q \in Q\}$. \end{exa} The assumption ``$g(v(\partial_{p_j} f)p_j)$ $(1\le j \le n)$ are distinct'' in Theorem~\ref{th:[supp f]>n} is necessary. Consider, for example, the function $e_n$. It satisfies the following linear differential equation: \begin{align*} \partial_L X - \log(n)X = 0 \end{align*} and its support is generated by the prime divisors of $n$. Therefore, the conclusion of Theorem~\ref{th:[supp f]>n} is false when $n$ has more than one prime factor. Note also that for $f= e_n$ the assumption on $g$ in Theorem~\ref{th:[supp f]>n} cannot be met by any arithmetic function since $v(\partial_p e_n)p = (n/p)p = n$ for all $p \in [\supp e_n]$. This example also shows that the assumption ``$n_ip_i$ are distinct'' is needed for Corollary~3.5 of~\cite{imaf}. The following lemma is a rather simple observation about algebraic independence of arithmetic functions over $\cS$. Since it will be called upon several times, we include it here for the record. For a set of primes $I$, let $\Delta_I$ be the set of basic derivations indexed by $I$, i.e. $\{\partial_p \colon p \in I\}$. We write $\Delta_f$ for $\Delta_{[\supp f]}$. \begin{lem} \label{l:cof2S} Let $I$ be a set of primes. If $E$ is a set of arithmetic functions that is algebraically independent over $\ker \Delta_J$ for any co-finite subset $J$ of $I$, then $E$ is algebraically independent over $\cS$. \end{lem} \begin{proof} It suffices to show that $E$ is algebraically independent over every finitely generated subalgebra of $\cS$. Suppose $\cH$ is a subalgebra of $\cS$ generated by some $h_0,\ldots, h_d \in \cS$. 
Since the sets $[\supp h_i]$ ($0 \le i \le d$) are finite, so is their union $H$. Therefore, $E$, by assumption, is algebraically independent over the kernel of $\Delta_{I\setminus H}$. We can conclude that $E$ is algebraically independent over $\cH$ since each derivation in $\Delta_{I \setminus H}$ kills every $h_i$ ($0 \le i \le d$). \end{proof} \begin{thm} \label{th:m_g-trans} Let $g \in \cA$ be eventually injective and $f \in \cA\setminus \cS$. The set of functions \begin{align*} \Exp^*\{\mf_g^i f \colon i \ge 0\} \end{align*} is algebraically independent over the kernel of any infinite subset of $\Delta_f$, and hence over $\cS$. In addition, if $g$ is nowhere vanishing, then $i$ can range through the integers. \end{thm} \begin{proof} Since $f \notin \cS$, $\Delta_f$ is infinite and so are its co-finite subsets. Let $J$ be an arbitrary infinite subset of $[\supp f]$; once we have established that $E :=\Exp^*\{\mf_g^i f \colon i \ge 0\}$ is algebraically independent over $\ker \Delta_J$, its algebraic independence over $\cS$ follows from Lemma~\ref{l:cof2S}. Since $g$ is eventually injective, there exists $n_0 \in \Nn$ such that $g$ is injective and non-vanishing on $\{n \in \Nn \colon n \ge n_0\}$. We choose an infinite sequence from $J$ inductively as follows: pick $p_1 \in J$ larger than $n_0$ and $p_{j+1} \in J$ such that \begin{align*} p_{j+1} > v(\partial_{p_j} f)p_j\qquad (j \ge 1). \end{align*} Then $v(\partial_{p_j}f)p_j$ $(j \ge 1)$ form a strictly increasing sequence, and so the $g(v(\partial_{p_j} f)p_j)$ are nonzero and distinct. Note that every finite subset of $E$ is contained in $\Exp^*\{\mf_g^i f \colon k \le i \le k+n-1\}$ for some $k \ge 0, n \ge 1$. According to Theorem~\ref{th:[supp f]>n}, the latter set is algebraically independent over $\ker\{\partial_{p_j} \colon 1 \le j \le n\} \supseteq \ker\Delta_J$. So we conclude that $E$ is algebraically independent over $\ker \Delta_J$.
In addition, if $g$ is nowhere vanishing, then Theorem~\ref{th:[supp f]>n} and hence the whole argument goes through for $E:=\Exp^*\{\mf_g^i f \colon i \in \Zz\}$. \end{proof} Rather curiously, for a completely additive function $g$ to be injective means the set of complex numbers $g(\Pp)$ is $\Qq$-linearly independent; and for a completely multiplicative $g$ to be injective means $g(\Pp)$ is multiplicatively independent. In any case, even if one requires $\mf_g$ in Theorem~\ref{th:m_g-trans} to be a derivation or an automorphism of $\cA$, there are still plenty of arithmetic functions that satisfy the requirements. \begin{exa} \label{ex:zeta-hypertrans} By taking the function $g$ in Theorem~\ref{th:m_g-trans} to be the real logarithm, we conclude that $\vec{1}$ is $\partial_L$-transcendental (better known as {\bf hyper-transcendental}) over $\cS$. In particular, that means the Riemann zeta function $\zeta(s)$ is hyper-transcendental over $\Cc$. Lemma~3.1 in~\cite{aids} states that the identity function (of a complex variable $s$) is transcendental over the ring of complex functions (in $s$) defined by Dirichlet series which have a proper right half-plane of convergence. Thus we conclude that $\zeta(s)$ is hyper-transcendental over $\Cc(s)$. We refer the reader to~\cite{fibo} for some historical remarks on this result, which is usually attributed to Hilbert~\cite{hilbert} in the literature. \end{exa} \begin{exa} \label{ex:carlitz} For $k \in \Zz$, let $\mathbf{I}_k$ be the arithmetic function $n \mapsto n^k$. In~\cite{car}, Carlitz showed that $\mathbf{I}_k$ ($k \ge 0$) are algebraically independent over $\Cc$. Shapiro and Sparer generalized this result to the algebraic independence of $\mathbf{I}_k\ (k \in \Zz)$ over the kernel of any infinite set of basic derivations (and hence over $\cS$)~\cite[Theorem~3.2]{aids}.
By taking $g = \vec{I}$, the identity map of $\Nn$, and $f = \vec{1}$ in Theorem~\ref{th:m_g-trans}, we conclude more generally that $\Log \mathbf{I}_k, \mathbf{I}_k$ ($k \in \Zz$) are algebraically independent over the kernel of any infinite set of basic derivations (and hence over $\cS$). \end{exa} By fixing $f$ to $\vec{1}$, one can view Theorem~\ref{th:m_g-trans} as a result about the algebraic independence of $g^{\gen{k}} := \mf_g^k(\vec{1})$ $(k \ge 0)$, the powers of $g$ with respect to the pointwise product. In fact, since $[\supp \vec{1}] = \Pp$ and $v(\partial_p \vec{1}) =1$ for each $p$, an assumption weaker than eventual injectivity of $g$ is enough to guarantee algebraic independence. More precisely, we have: \begin{cor} \label{c:g^<k>} $\Exp^*\{g^{\gen{i}} \colon i \ge 0\}$ is algebraically independent over $\Cc$ if $g(\Pp)$ is infinite, and is algebraically independent over $\cS$ if $g(I)$ is infinite for every infinite set of primes $I$. Moreover, the same is true with $i$ ranging through the integers if $g$ is nowhere vanishing. \end{cor} Corollary~\ref{c:g^<k>}, in particular, implies that if $g(\Pp)$ is infinite then $g$ does not satisfy, in the algebra $(\cA, +, \cdot)$, any nontrivial polynomial equation over $\Cc$. We will have another discussion about this kind of independence in Section~\ref{sec:remarks}. Let $(U_n)$ be a linear integral recurrence of order two; by that we mean $(U_n)$ is a sequence of integers satisfying \begin{align*} U_{n+2} = PU_{n+1} - QU_{n} \qquad (n \ge 1) \end{align*} for some $P,Q \in \Zz$ with $Q \neq 0$. Suppose $\rho$ is a ratio (the other being $1/\rho$) of the two roots of the characteristic polynomial $z^2 -Pz + Q$. Morgan Ward showed in~\cite[Theorem~1]{ward} that the set $\{U_n \colon n \ge 1\}$ has infinitely many prime divisors if either (1) $\rho$ is not a root of unity, or (2) $\rho = 1$. In the first case, the recurrence $(U_n)$ is called {\bf non-degenerate}.
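Non-degeneracy is easy to decide for a given pair $(P,Q)$. The sketch below is a numerical illustration of ours, not part of Ward's argument: since $\rho + \rho^{-1} = (P^2-2Q)/Q$ is rational, $\rho$ has degree at most $2$ over $\Qq$, so if $\rho$ is a root of unity its order $k$ satisfies $\varphi(k) \le 2$, i.e. $k \in \{1,2,3,4,6\}$.

```python
import cmath

def is_degenerate(P, Q):
    """Decide whether U_{n+2} = P*U_{n+1} - Q*U_n (with Q != 0) is degenerate,
    i.e. whether the ratio rho of the roots of z^2 - P*z + Q is a root of unity.
    As rho has degree <= 2 over the rationals, it suffices to test
    rho^k = 1 for k in {1, 2, 3, 4, 6}."""
    disc = cmath.sqrt(P * P - 4 * Q)
    a, b = (P + disc) / 2, (P - disc) / 2   # b != 0 exactly because Q != 0
    rho = a / b
    return any(abs(rho ** k - 1) < 1e-9 for k in (1, 2, 3, 4, 6))

print(is_degenerate(1, -1))  # Fibonacci: rho = -phi^2 is not a root of unity -> False
print(is_degenerate(2, -1))  # Pell: also non-degenerate -> False
print(is_degenerate(1, 1))   # z^2 - z + 1: rho is a primitive cube root of unity -> True
```

The floating-point tolerance is harmless here: for integer $P,Q$ a ratio that is not a root of unity stays bounded away from the finitely many candidate values tested.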
Thus, if $U \subseteq \Nn$ is the set of terms of a non-degenerate second order linear integral recurrence, then $\vec{1}_U \notin \cS$. By Theorem~\ref{th:m_g-trans}, we conclude that $\vec{1}_U$ is $\mf_g$-transcendental over $\cS$ for any $g$ that is eventually injective. \begin{exa} \label{ex:Fibo} The linear recurrence defining the Fibonacci numbers: $F_1 = 1, F_2 =1$ and $F_{n+2} = F_{n+1} + F_n$ is second order and non-degenerate. Thus $\vec{1}_F$, the indicator function of the Fibonacci numbers, is hyper-transcendental over $\Cc$. By an argument similar to the one given in Example~\ref{ex:zeta-hypertrans}, we conclude that the Fibonacci zeta function $\zeta_F(s)$ is hyper-transcendental over $\Cc(s)$. \end{exa} Our next result generalizes both~\cite[Theorem~3.3]{aids} and~\cite[Theorem~3]{obs} by relaxing the assumption that $\supp f$ contains infinitely many primes to the weaker assumption that $\supp f$ is not finitely generated. The proof below is a mixture of those given in~\cite{aids} and~\cite{obs}. Therefore, our sole contribution here is the realization that these proofs remain valid in a more general setting. We also hope our use of the lexicographic ordering on the index set can clarify the presentation. In the following, $T^{\alpha}$ ($\alpha \in \Cc$) stands for the operator $\mf_g$ where $g$ is the function $n \mapsto n^{\alpha}$. \begin{thm} \label{th:diff-diff} For any $f \in \cA \setminus \cS$ and any sequence $(\alpha_i)_{i \ge 0}$ of complex numbers with distinct real parts, the set of arithmetic functions \begin{align*} \Exp^*\{T^{\alpha_i} \partial_L^j f \colon i,j \ge 0\} \end{align*} is algebraically independent over the kernel of any infinite subset of $\Delta_f$ and consequently over $\cS$. \end{thm} \begin{proof} Since $f \in \cA\setminus \cS$, $[\supp f]$ is infinite and so are its co-finite subsets.
So by Lemma~\ref{l:cof2S}, we only need to show that for any $k,m \ge 0$, the set of functions \begin{align*} \Exp^*\{f_{ij} \colon 0 \le i \le k, 0 \le j \le m \}, \end{align*} where $f_{ij} := T^{\alpha_i}\partial_L^j f$, is algebraically independent over the kernel of any infinite subset of $\Delta_f$. Let \begin{align*} L = \{(a,b) \colon 0 \le a \le k, 0 \le b \le m\} \end{align*} be the index set ordered lexicographically. If no confusion arises, we follow the convention of indexing matrix entries by writing the index $(a,b)$ as $ab$. Given $J$ an infinite subset of $[\supp f]$, we are going to choose a sequence of primes $(p_{uv} \colon (u,v) \in L)$ from $J$. Let $m_{uv}$ be the order of $\partial_{p_{uv}} f$. By applying Lemma~\ref{l:ord_inv} twice, we conclude $m_{uv} = v(\partial_{p_{uv}} f_{ij})$ for any $(i,j) \in L$. We claim that the determinant of the $|L| \times |L|$ matrix, \begin{align*} \left( \partial_{p_{uv}}f_{ij}(m_{uv}) \right) &= \left(\prod_{(u,v) \in L} \partial_{p_{uv}}f(m_{uv})\right)\left( (m_{uv}p_{uv})^{\alpha_i}(\log (m_{uv}p_{uv}))^j\right) \end{align*} is non-zero if we impose suitable requirements on the sequence $(p_{uv})$. Once this is achieved, it then follows from Theorem~\ref{th:m_j<=ord} that the set of arithmetic functions $\Exp^*\{f_{ij} \colon (i,j) \in L \}$ is algebraically independent over $\ker\{\partial_{p_{uv}} \colon (u,v) \in L\} \supseteq \ker \Delta_J$. Since $\partial_{p_{uv}} f(m_{uv}) \neq 0$ for each $(u,v) \in L$, it suffices to make the determinant of the matrix \begin{align*} P := \left( (m_{uv}p_{uv})^{\alpha_i}(\log(m_{uv}p_{uv}))^j \right) \end{align*} non-zero. To achieve that, first note that the entries of $P$ are all nonzero and hence each term in the expansion of $\det P$ is nonzero. By re-arranging the $\alpha_i$ ($0 \le i \le k$), if necessary, we can assume their real parts form a strictly increasing sequence. Let $t_{\diag}$ denote the product of the diagonal entries of $P$, i.e.
\begin{align*} t_{\diag} = \prod_{(u,v) \in L} (m_{uv}p_{uv})^{\alpha_u}(\log(m_{uv}p_{uv}))^{v}. \end{align*} The key observation is that the ratio $t/t_{\diag}$, where $t$ is any other term in the expansion of $\det P$, has the form \begin{align*} \prod_{(u,v) \in L} (m_{uv}p_{uv})^{\gamma(u,v)}(\log(m_{uv}p_{uv}))^{d(u,v)} \end{align*} and if $(u',v') \in L$ is the largest index such that $(\gamma(u',v'), d(u',v'))$ is not $(0,0)$, then $(\Re(\gamma(u',v')), d(u',v')) < (0,0)$ lexicographically. Therefore, if we choose $(p_{uv})$ to be an increasing sequence of primes from $J$ such that each $p_{uv}$ is sufficiently large compared to its predecessors, for example, pick $p_{00} \ge 3$ (to ensure $\log p_{uv} > 1$ for all $(u,v) \in L$) and $p_{uv}$ such that \begin{align*} \log p_{uv} > |L|! \prod_{(u',v') < (u,v)} (m_{u'v'}p_{u'v'})^{|\alpha_k|+m}, \end{align*} then $|t/t_{\diag}| < (|L|!)^{-1}$. Thus for such a choice of $(p_{uv})$, \begin{align*} |\det P| \ge |t_{\diag}|\left(1 - \sum_{t \neq t_{\diag}} |t/t_{\diag}|\right) > 0. \end{align*} \end{proof} A couple of remarks about Theorem~\ref{th:diff-diff} are in order. First, arithmetic functions of the form $n^{\alpha_i}(\log n)^j f(n)$ ($j \in \Zz$) were considered in both~\cite{aids} and~\cite{obs}. This is problematic for negative $j$ since these functions are not defined at $1$ and consequently their higher convolution powers are undefined. Second, if Theorem~\ref{th:diff-diff} admits an ``algebraic'' proof, by which we mean a proof similar to that of Theorem~\ref{th:m_g-trans} which does not rely on the growth rate of the functions involved, then one may expect a generalization about operators of the form $\mf_h^i\mf_g^j$. \begin{cor} \label{c:2ndrec} Suppose $U$ is a set of natural numbers with an infinite set of prime divisors; then $\zeta_U(s)$ does not satisfy any nontrivial algebraic differential difference equation over $\Cc(s)$.
\end{cor} \begin{proof} Since $\vec{1}_U \notin \cS$, Theorem~\ref{th:diff-diff} implies the set of arithmetic functions $\{T^{\alpha_i}\partial_L^j\vec{1}_U \colon i,j \ge 0\}$ is algebraically independent over $\Cc$ for any complex sequence $(\alpha_i)$ with distinct real parts. Since $(-1)^jT^{\alpha_i}\partial_L^j \vec{1}_U$ corresponds to $\zeta_U^{(j)}(s - \alpha_i)$ under the isomorphism in~\eqref{eq:isods}, the corollary is true over $\Cc$. Finally, by Lemma~3.1 of~\cite{aids}, it is true over $\Cc(s)$ since the formal Dirichlet series involved are convergent. \end{proof} \begin{exa} \label{ex:ost} Corollary~\ref{c:2ndrec} implies a classical result of Ostrowski \cite{ost}: $\zeta(s)$ does not satisfy any nontrivial algebraic differential difference equation over $\Cc(s)$. That means there is no non-zero polynomial $F(s, z_1,\ldots, z_k)$ over $\Cc$ such that the function \begin{align*} F(s, \zeta^{(m_1)}(s-r_1), \ldots, \zeta^{(m_k)}(s-r_k)), \end{align*} where $(m_i,r_i)$ are distinct pairs of integers and $m_i \ge 0$ for all $1 \le i \le k$, vanishes identically on its domain. \end{exa} \begin{exa} \label{ex:fibo} Recall that if $U \subseteq \Nn$ is the set of terms of a non-degenerate second order linear recurrence, then $\vec{1}_U \notin \cS$. Thus, Corollary~\ref{c:2ndrec} also implies that the Fibonacci zeta function $\zeta_F(s)$ does not satisfy any nontrivial algebraic differential difference equation over $\Cc(s)$. Since it is not known whether the Fibonacci sequence contains infinitely many primes, this statement cannot be deduced, at least for now, from either Theorem~3.3 of~\cite{aids} or Theorem~3 of~\cite{obs}. Many sequences of natural numbers, well-known to number theorists, are in fact non-degenerate second order integral linear recurrences (see~\cite{koshy} for a reference): the Lucas, Pell and Pell-Lucas sequences, to name a few.
Thus their zeta functions do not satisfy any nontrivial algebraic differential difference equations over $\Cc(s)$. More generally, one can replace ``algebraic'' by ``holomorphic'' in the previous statement, if one invokes an analytic result of Reich~\cite[Satz~1]{reich} instead of Theorem~\ref{th:diff-diff}. This is the way in which Steuding~\cite[Theorem~1]{fibo} and Komatsu~\cite[Corollary~1]{komatsu} obtained the corresponding results for the Riemann zeta function and the Lucas zeta function, respectively. In~\cite{fibo}, Steuding made no reference to Ward's paper~\cite{ward} but did mention that his argument can obviously be extended to other Dirichlet series built from linear recurrences. \end{exa} \section{Remarks} \label{sec:remarks} We conclude with a few observations that we made in the course of studying arithmetic functions. The first one is about derivations of $\cA$. As noted, Theorem~\ref{th:gJac} will be an unconditional generalization of Shapiro-Sparer's Jacobian criterion if every derivation of $\cA$ is continuous. Unfortunately, we can neither prove that every derivation of $\cA$ is continuous nor produce one that is not. There is indeed a construction given at the end of Section~4 in~\cite[p.309--312]{conv} which produces nonzero derivations of $\cF$ that vanish on the $e_n$ ($n \in \Nn$) and hence on $\cT$. Since $\cF$ is the field of fractions of $\cA$, any such derivation must also be nonzero on $\cA$; but then it cannot be continuous, since $\cA$ is the closure of $\cT$ in $\cF$. However, it is unclear to us whether any derivation constructed this way actually restricts to a map from $\cA$ to itself. Here we would like to offer a similar but hopefully simpler way of constructing derivations of $\cF$ that do not preserve null sequences of $\cA$: start with a null sequence in $\cA$ that is algebraically independent over $\Cc$, for example $(e_p)_{p \in \Pp}$. Extend it to a transcendence base $B$ of $\cF$ over $\Cc$.
Then the $db$ ($b \in B$) form an $\cF$-basis of $\Omega_{\cF/\Cc}$~\cite[Theorem~26.5]{mat}. The derivation $D$ of $\cF$ obtained by composing $d$ with the $\Cc$-linear map determined by $db \mapsto 1$ ($b \in B$) maps each $e_p$ to $1$ and hence cannot be a continuous derivation of $\cA$ if it does restrict to a map from $\cA$ to itself. The flip side of the coin is the possibility that every derivation of $\cA$ is continuous. This will be true if the topology determined by the norm $\norm{\cdot}$ is equivalent to the $\cI$-adic topology of some ideal $\cI$ of $\cA$. This is because for any $n \ge 1$ and $f \in \cI^n$, the derivative of $f$ with respect to any derivation of $\cA$, according to the Leibniz rule, is in $\cI^{n-1}$, and so any derivation of $\cA$ is continuous with respect to the topology determined by any ideal of $\cA$. We should point out, however, that in the case when $\cI$ is the unique maximal ideal $\cA_0$ these two topologies are inequivalent. For example, none of the terms in the null sequence $(e_p)$ is even in $\cA_0^2$ because members of $\cA_0^2$ vanish on every prime. Our second observation is about linear independence of arithmetic functions over $\Cc$. It was proved in~\cite[Theorems~3.2--3.4]{imaf2} that arithmetic functions $f_1,\ldots, f_n$ are linearly dependent over $\Cc$ if and only if their Wronskian with respect to the log-derivation, i.e. \begin{align*} \wL{f_1}{f_n}{n-1}, \end{align*} vanishes identically. We claim that the same is true, more generally, for elements of $\cF$ and offer a softer proof in the sense that no formula for the values of the Wronskian is needed. We take advantage of a standard result on differential fields~\cite[Theorem~6.3.4]{afg} which asserts that elements of a differential field $(F,D)$ are linearly dependent over the field of constants if and only if their Wronskian with respect to $D$ (or $D$-Wronskian, in short) is zero.
Thus by taking the differential field to be $(\cF, \partial_L)$, all we need to show is that the kernel of the log-derivation in $\cF$ is still $\Cc$. Before proving that statement, it is probably worth pointing out that in general $\ker_{\cF} D$ need not be the fraction field of $\ker D$ in $\cF$: recall that $\Omega$ counts the total number of prime factors with multiplicity of its argument. One checks readily that $\ker \dO = \Cc$ and $\dO e_p = e_p$ for each prime $p$. Thus for distinct primes $p$ and $q$, $e_p/e_q \in \ker_{\cF} \dO \setminus \Cc$. \begin{prop} $\ker_{\cF} \partial_L = \Cc$. \label{p:kernel} \end{prop} \begin{proof} First, $\ker_{\cF} \partial_L \supseteq \ker \partial_L = \Cc$. To establish the reverse inclusion, take any $f,g \in \cA\setminus\{0\}$ such that $\partial_L (f/g) = 0$; then \begin{align} \label{eq:quo} \partial_L f * g = f * \partial_L g. \end{align} If $g$ is invertible in $\cA$, then $f/g \in \cA \cap \ker_{\cF}\partial_L = \Cc$. So let us assume $g$ is not invertible, that is, $g(1) = 0$; it then follows that $\norm{\partial_L g} = \norm{g} (> 0)$. Now by taking norms on both sides of~\eqref{eq:quo}, we see that $\norm{\partial_L f}=\norm{f} (> 0)$. Thus, by evaluating both sides of~\eqref{eq:quo} at $v(f)v(g)$, we conclude that $\log(v(f)) = \log(v(g))$ and hence $v(f) = v(g)$. Let $k$ be this common value and let $h$ be $f-\alpha g$ where $\alpha= f(k)/g(k)$. Then the order of $h$ is strictly greater than $k$ and $h/g = f/g- \alpha \in \ker_{\cF} \partial_L$. Unless $h=0$, i.e. $f/g = \alpha \in \Cc$, the same argument with $f$ replaced by $h$ leads to the contradictory conclusion that $v(h) = v(g) = k$. This completes the proof of the other inclusion.
\end{proof} Viewing the linear independence result of~\cite{imaf2} as one about differential fields frees us from focusing on the log-derivation: if the Wronskian of $f_1,\ldots, f_n$ with respect to any derivation $D$ of $\cF$ is non-zero, then $f_1,\ldots, f_n$ are linearly independent over $\ker_{\cF} D$ and hence over $\Cc$. Let us give an application. Recall that $g^{\gen{k}}$ ($k \ge 0$) denotes the $k$-th power of $g$ with respect to the pointwise product. Consider again the function $\Omega$. The value at 1 of the $\partial_2$-Wronskian of $\vec{1}=\Omega^{\gen{0}}, \Omega^{\gen{1}}, \ldots, \Omega^{\gen{n}}$ is \begin{align*} \det\left( \partial_2^j \Omega^{\gen{i}}(1) \right) &= \det\left( j! \Omega^{\gen{i}}(2^j) \right) =\det\left( j^i \right)\prod_{j=0}^n j! \end{align*} which is nonzero since the last determinant is Vandermonde. We conclude that the $\Omega^{\gen{k}}$ ($k \ge 0$) are linearly independent over $\Cc$. Therefore, the $\partial_L$-Wronskian of $\vec{1}, \Omega^{\gen{1}}, \ldots, \Omega^{\gen{n}}$ must also be nonzero, but this is harder to spot since its value at 1 is 0. This also shows that $\Omega$ does not satisfy any nontrivial polynomial equation over $\Cc$ in the $\Cc$-algebra $(\cA, +, \cdot)$. Note that this fact cannot be deduced from Corollary~\ref{c:g^<k>} since $\Omega(\Pp) = \{1\}$ is finite. Note also that this statement is stronger than asserting that $\Omega$ is transcendental over $\Cc$ in the sense of Bellman and Shapiro~\cite{B-S}. Roughly speaking, since $(\cA,+, \cdot)$ is not an integral domain, the ``right'' definition for algebraic dependence requires not just a nontrivial polynomial but an irreducible one to vanish at the functions involved. Our last few remarks are about Theorem~\ref{th:gJac} and Section~2 of~\cite{imaf2}. In searching for a generalization of the Jacobian criterion, we realized that the derivations in Theorem~\ref{th:gJac} cannot be replaced by differential operators.
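As an aside, the Vandermonde computation above is easy to confirm mechanically. The following Python sketch is purely illustrative (the helper names \texttt{Omega}, \texttt{d2} and \texttt{det} are ours); it implements $\partial_2$ through the formula $(\partial_2 f)(n) = (v_2(n)+1)f(2n)$ and checks that the matrix $\left(\partial_2^j \Omega^{\gen{i}}(1)\right)$ for $0 \le i,j \le 3$ has determinant $\prod_{j=0}^3 j!$ times the Vandermonde determinant $\det(j^i)$:

```python
from math import factorial

def Omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count, n = count + 1, n // d
        d += 1
    return count + (1 if n > 1 else 0)

def v2(n):
    """2-adic valuation of n."""
    k = 0
    while n % 2 == 0:
        k, n = k + 1, n // 2
    return k

def d2(f):
    """The derivation partial_2: (d2 f)(n) = (v_2(n) + 1) * f(2n)."""
    return lambda n: (v2(n) + 1) * f(2 * n)

def det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([r[:c] + r[c + 1:] for r in M[1:]])
               for c in range(len(M)))

N = 3
W = []  # entry (i, j): j-fold partial_2-derivative of Omega^<i>, evaluated at 1
for i in range(N + 1):
    row = []
    for j in range(N + 1):
        g = lambda n, i=i: Omega(n) ** i
        for _ in range(j):
            g = d2(g)
        row.append(g(1))
    W.append(row)

# expected value: prod_j j! times the Vandermonde determinant det(j^i)
vandermonde = 1
for b in range(N + 1):
    for a in range(b):
        vandermonde *= b - a
expected = vandermonde * factorial(0) * factorial(1) * factorial(2) * factorial(3)
print(det(W), expected)  # -> 144 144
```

For $N=3$ the entries are $j!\,j^i$ and both determinants come out to $144$, in line with the linear independence of the pointwise powers of $\Omega$ over $\Cc$.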
More precisely, consider, for each $k \in \Nn$, the differential operator $\partial_k := \prod_{p} \partial_p^{v_p(k)}$ (here the product is composition of functions). One checks that for $f \in \cA$ and $n \in \Nn$, \begin{align*} (\partial_k f)(n) = f(kn) \prod_p\prod_{j =1}^{v_p(k)}\left( v_p(n)+j \right). \end{align*} In particular, $(\partial_k f)(1) = f(k)\prod_p (v_p(k)!)$. Thus, if we normalize $\partial_k$ to \begin{align*} \hat{\partial}_k = \left( \prod_p (v_p(k)!) \right)^{-1}\partial_k, \end{align*} then we will have $\varepsilon_1 \circ \hat{\partial}_k = \varepsilon_k$. To see that Theorem~\ref{th:gJac} fails if we replace the derivations by differential operators, take $f_1$ to be $\vec{1}_2$ and $f_2 = f_1*f_1$. Note that \begin{align*} f_2(n) = \begin{cases} k+1 & \text{if}\ n = 2^k\ \text{for some $k \ge 0$;} \\ 0 & \text{otherwise}. \end{cases} \end{align*} Certainly, $f_1$ and $f_2$ are algebraically dependent over $\Cc$ but \begin{align*} \det \begin{pmatrix} \hat{\partial}_2 f_1 & \hat{\partial}_4 f_1 \\ \hat{\partial}_2 f_2 & \hat{\partial}_4 f_2 \end{pmatrix} (1) = \det \begin{pmatrix} f_1(2) & f_1(4) \\ f_2(2) & f_2(4) \end{pmatrix} = \det \begin{pmatrix} 1 & 1 \\ 2 & 3 \end{pmatrix} = 1. \end{align*} Incidentally, this shows that Theorem~2.2 of~\cite{imaf2} is not true. Moreover, \begin{align*} \partial_L f_1(n) = \begin{cases} k\log(2) & \text{if}\ n = 2^k\ \text{for some $k \ge 0$;} \\ 0 &\text{otherwise}. \end{cases} \end{align*} Thus $f_1$ satisfies the following differential algebraic equation over $\Cc$: \begin{align*} \partial_L X = \log(2)(X^2-X). \end{align*} This falsifies Corollaries~2.3--2.5 of~\cite{imaf2}. In particular, it is not true that a Dirichlet series which is not a Dirichlet polynomial is hyper-transcendental over $\Cc$. Corollaries~2.6 and~2.7 of~\cite{imaf2} are also problematic.
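The computations with $f_1$ and $f_2$ above involve only finitely many values and can be checked mechanically. Here is a minimal, self-contained Python sketch (the helper names are ours, and the Dirichlet convolution is implemented naively); it confirms the formula for $f_2$, the value $1$ of the determinant, and the differential algebraic equation $\partial_L X = \log(2)(X^2-X)$ on an initial segment:

```python
import math

def f1(n):
    """Indicator function of the powers of 2 (including 1 = 2^0)."""
    return 1 if n & (n - 1) == 0 else 0

def conv(f, g, n):
    """Dirichlet convolution (f * g)(n), computed naively over divisors."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

f2 = lambda n: conv(f1, f1, n)

# f2(2^k) = k + 1, and f2 vanishes off the powers of 2
assert all(f2(2 ** k) == k + 1 for k in range(8))
assert f2(3) == f2(6) == f2(12) == 0

# the 2x2 determinant built from hat-partial_2 and hat-partial_4 at 1
M = [[f1(2), f1(4)], [f2(2), f2(4)]]
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(M, d)  # -> [[1, 1], [2, 3]] 1

# f1 satisfies  partial_L X = log(2) (X*X - X),  where (partial_L f)(n) = f(n) log n
for n in range(1, 65):
    assert abs(f1(n) * math.log(n) - math.log(2) * (f2(n) - f1(n))) < 1e-12
```

Here $X^2$ is of course the convolution square $X * X$, in line with the ring structure of $(\cA, +, *)$.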
Again the pair $(f_1,f_2)$ furnishes a counterexample to Corollary~2.7 of~\cite{imaf2} which asserts that arithmetic functions with infinite supports are algebraically independent over $\Cc$. Since $f_1,f_2$ are algebraically dependent over $\Cc$, so are the arithmetic functions $g_1,g_2$ defined by \begin{align*} \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} = \begin{pmatrix} f_1(2) & f_1(4) \\ f_2(2) & f_2(4) \end{pmatrix}^{-1} \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} \phantom{-}3 & -1 \\ -2 & \phantom{-}1 \end{pmatrix} \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}. \end{align*} Consequently, $h_1:=g_1-2$ and $h_2:=g_2+1$ are also algebraically dependent over $\Cc$. Since the first four values of $h_1$ and $h_2$ are $(0,1,0,0)$ and $(0,0,0,1)$, respectively, the pair $(h_1,h_2)$ provides a counterexample to Corollary~2.6 of~\cite{imaf2}. {\bf Acknowledgements.} First of all, I would like to thank George Jennings for a discussion that led me to realize my mistake in generalizing Theorem~\ref{th:gJac}. I would also like to thank Alexandru Buium for discussing with me the existence of non-continuous derivations of $\cA$. Our conversations saved me from going down several dead-end paths. I thank the referee for helpful suggestions on improving the presentation of several results in this article, especially Theorem~\ref{th:Log_power}, Theorem~\ref{th:diff-diff} and Proposition~\ref{p:kernel}. \input{addendum}
\section{Introduction}\label{sec:intro} Modifying an earlier idea by Born~\cite{Born1933}, in 1934 Born and Infeld \cite{BornInfeld1934} suggested a nonlinear modification of vacuum electrodynamics in order to get rid of the infinite self-energies of point particles that occur in the standard Maxwell theory. Their theory can be derived from a Lorentz-invariant Lagrangian. A few years later, Heisenberg and Euler~\cite{HeisenbergEuler1936} derived an effective Lagrangian, again Lorentz-invariant, from quantum electrodynamics. These are the two best known examples within the class of all Lorentz-invariant nonlinear electrodynamical theories. More generally, Pleba{\'n}ski \cite{Plebanski1970} and also Boillat~\cite{Boillat} studied the whole class of nonlinear electrodynamical theories that can be derived from a Lagrangian depending only on the two Lorentz-invariant scalars that are quadratic in the field strength. This class is often referred to as \emph{Pleba{\'n}ski nonlinear electrodynamics}. For a review of the Born-Infeld theory we refer, e.g., to Bia{\l}ynicki-Birula~\cite{Bialynicki}. The physical relevance of these nonlinear vacuum electrodynamical theories is being widely discussed in the literature. It is believed that at a certain field strength the Heisenberg-Euler deviations from standard Maxwell theory should be observable, and the Born-Infeld theory has gained increasing attention since it was realized by Tseytlin \cite{Tseytlin1999} that the Born-Infeld Lagrangian can be derived as an effective Lagrangian from some versions of string theory. Observable effects of (nonlinear) modifications of the vacuum Maxwell equations have been discussed for many years, at least since the Ph.D. thesis of Toll~\cite{Toll:1952rq}. Up to now, the only effect predicted by such modified theories that has already been observed is light-by-light scattering (see \cite{Burkeetal1997}); further experiments are under way, e.g. 
with the Large Hadron Collider at CERN \cite{EnterriaSilveira2013}. There is also an ongoing experiment \cite{Dellavalleetal2013} aiming at verifying the birefringence \textit{in vacuo} as predicted by the Heisenberg-Euler theory. Also, it might be possible to measure the influence of background fields on the propagation speed of light in the laboratory. For the case of the Born-Infeld theory, such experiments have been suggested with the help of wave guides by Ferraro \cite{Ferraro2007} and with homogeneous magnetic background fields by Dereli and Tucker \cite{DereliTucker2010}. In this paper we focus on another method for testing nonlinear electrodynamics and discuss its theoretical foundations in detail. The basic idea is to measure the influence of a (strong) background field on the propagation speed of light with the help of an interferometer. Such an experiment has been suggested and discussed already in five earlier papers~\cite{Boer,Denisov,Doebrich,Zavattini,Grote}. However, all of them restrict the theoretical discussion to the Heisenberg-Euler theory or, in the case of Denisov \textit{et al.} \cite{Denisov}, to the Heisenberg-Euler and the Born-Infeld theory. What is still missing is a comprehensive derivation of the relevant equations that cover the whole Pleba{\'n}ski class. The basic idea of the experiment is simple. In the standard Maxwell vacuum theory, which is linear, the superposition principle holds, so there is no influence of a background field on the propagation of light. In the nonlinear theories, however, the phase velocity of light depends on the strength of the background field and on the propagation direction relative to the background field. This can be tested with a Michelson interferometer: If a strong background field is switched on and off in one interferometer arm, or if the whole interferometer is being rotated in a strong background field, the interference pattern should change. 
A null result would place bounds on the possible deviations from standard Maxwell vacuum theory. It is the purpose of this paper to develop the theoretical foundations for this experimental test for an unspecified nonlinear electrodynamical theory of the Pleba{\'n}ski class. We will then specialize to the Born, Born-Infeld and Heisenberg-Euler theories. Throughout this paper, we consider Minkowski space as the underlying space-time model. We work in inertial coordinates, so the Minkowski metric is $(\eta ^{ik})=\mathrm{diag}(1,1,1,-1)$. We use Einstein's summation convention for Latin indices taking values 1,2,3,4 and for Greek indices taking values 1,2,3. Indices are raised and lowered with the Minkowski metric. We will use Gauss\-ian cgs units throughout, because they are most convenient for our theoretical investigations. In these units, $E$, $B$, $D$ and $H$ are all measured in the same units, $\sqrt{\mathrm{g}}/ \big(\sqrt{\mathrm{cm}} \, \mathrm{s} \big)$. The reader can easily convert the results into SI units with the help of the formulas $E = \sqrt{4 \pi \epsilon _0} E_{\mathrm{SI}}$, $B = \sqrt{4 \pi / \mu _0} B_{\mathrm{SI}}$, $D = \sqrt{4 \pi / \epsilon _0} D_{\mathrm{SI}}$ and $H = \sqrt{4 \pi \mu _0} H_{\mathrm{SI}}$ \cite{Jackson5te}. For example, for a field $X = 10^3 \sqrt{\mathrm{g}} / \big( \sqrt{\mathrm{cm}} \, \mathrm{s} \big)$ in Gaussian cgs units, where $X = E$, $B$, $D$, or $H$, one gets \begin{equation}\label{eq:SI} \begin{split} E_{\mathrm{SI}} = 3 \times 10^7 \, \dfrac{\mathrm{V}}{\mathrm{m}} \, , \qquad B_{\mathrm{SI}} = 100 \, \mathrm{mT} \, , \quad \\ D_{\mathrm{SI}} = 3 \times 10^{-4} \, \dfrac{\mathrm{As}}{\mathrm{m}^2} \, , \qquad H_{\mathrm{SI}} = 8 \times 10^4 \, \dfrac{\mathrm{A}}{\mathrm{m}} \, . \end{split} \end{equation} The paper is organized as follows. In Sec.~\ref{Vorbereitung} we recall the basic equations for the propagation of light rays according to nonlinear electrodynamics.
Then in Sec.~\ref{experiment} the suggested interferometer experiment is described and in Sec.~\ref{konkretes} some particular applications are discussed. \section{Light propagation in nonlinear electrodynamics}\label{Vorbereitung} \subsection{The Pleba{\'n}ski class of nonlinear electrodynamical theories}\label{subsec:Pleb} The nonlinear electrodynamical theories which are at the center of our examination derive from an action \begin{equation} S[A_m]=\frac{1}{4\pi c}\int _M \left(\mathcal{L}(F_{mn})+\frac{4\pi}{c}\,j^mA_m \right)\,\mathrm dV_4 \,. \end{equation} Here $j^m$ is a \emph{given} current density, $A_m$ is the electromagnetic potential, $F_{mn} = \partial _m A_n-\partial _n A_m$ is the electromagnetic field strength and $\mathcal{L}$ is the Lagrangian for the electromagnetic field. Then the homogeneous group of Maxwell's equations is automatically satisfied, \begin{equation}\label{eq:Max1} \partial_{[a} F_{bc]}=0 \, . \end{equation} Variation of the action with respect to the potential $A_m$ leads to the inhomogeneous group of Maxwell's equations, \begin{equation}\label{eq:Max2} \partial_b H^{ab} \, = \, \frac{4\pi}{c}\,j^a\,, \end{equation} where \begin{equation}\label{eq:const} H^{ab}=-\frac{\partial \mathcal{L}}{\partial F_{ab}} \end{equation} is the electromagnetic excitation. It is the constitutive law (\ref{eq:const}) that distinguishes different theories, while the Maxwell equations (\ref{eq:Max1}) and (\ref{eq:Max2}) are always the same. Following Pleba{\'n}ski \cite{Plebanski1970}, we require that the electromagnetic Lagrangian $\mathcal{L}$ depends on the electromagnetic field strength only via the Lorentz invariants \begin{equation}\label{eq:FG} F=\frac{1}{2}\,F_{mn}F^{mn} \quad \text{and} \quad G=-\frac{1}{4}\,F_{mn}\tilde F^{mn} \, . \end{equation} Here and in the following, the tilde denotes the Hodge dual, \begin{equation} \tilde F^{mn}\, = \, \frac{1}{2} \, \varepsilon^{mnab}F_{ab} \, .
\end{equation} As usual, $\varepsilon^{abcd}$ is the totally antisymmetric Levi-Civita tensor with $\varepsilon^{1234}=-1$. Strictly speaking, only $F$ is invariant under \emph{all} Lorentz transformations while $G$ changes sign under a parity transformation. Some authors restrict to Lagrangians that satisfy the equation $\mathcal{L}(F,G)=\mathcal{L}(F,-G)$ to assure invariance under parity transformations. However, for the purpose of this paper there is no need for this restriction. The Pleba{\'n}ski class contains, of course, the standard vacuum Maxwell theory which is given by the Lagrangian \begin{equation}\label{eq:LagMax} \mathcal{L}{}_M (F,G) = \, - \, \dfrac{1}{2} \, F \, . \end{equation} As this theory is well tested for weak fields, many authors restrict their work to theories where the Lagrangian satisfies $\mathcal{L} + F/2 \to 0$ for $F_{mn} \to 0$. Again, for our mathematical considerations there is no need for this restriction. For a theory of the Pleba{\'n}ski class the constitutive law (\ref{eq:const}) can be written, more specifically, as \begin{equation}\label{eq:Hab} H^{ab}= -2\, \mathcal{L}_F \, F^{ab}+\mathcal{L}_G \,\tilde F^{ab} \end{equation} where \begin{equation}\label{eq:LFLG} \mathcal{L}_F = \dfrac{\partial \mathcal{L}}{\partial F} \, , \quad \mathcal{L}_G = \dfrac{\partial \mathcal{L}}{\partial G} \, . \end{equation} Later we will also write \begin{equation}\label{eq:LFLG2} \mathcal{L}_{FF} = \dfrac{\partial ^2 \mathcal{L}}{\partial F ^2} \, , \quad \mathcal{L}_{GG} = \dfrac{\partial ^2 \mathcal{L}}{\partial G ^2} \, , \quad \mathcal{L}_{FG} = \dfrac{\partial ^2 \mathcal{L}}{\partial F \partial G} \, . \end{equation} Additionally we will use a Hamiltonian formulation of the Pleba{\'n}ski electrodynamics. For the special case of the Born-Infeld theory, the Hamiltonian formulation can be found in~\cite{BornInfeld1934,Bialynicki}. Whereas the Lagrangian depends on the field strength, the Hamiltonian depends on the excitation.
Quite generally, for any theory based on a Lagrangian $\mathcal{L}(F_{mn})$, the passage to the Hamiltonian formalism can be performed whenever the constitutive law (\ref{eq:const}) can be solved for $F_{mn}$. The Hamiltonian is then given by a covariant Legendre transformation, \begin{equation}\label{eq:Hamilton} \mathcal{H}(H^{ab})=-\frac{1}{2}H^{mn}F_{mn}-\mathcal{L}(F_{ab}) \end{equation} where, on the right-hand side, $F_{ab}$ has to be expressed in terms of $H^{mn}$ with the help of the constitutive law. For a theory of the Pleba{\'n}ski class (i.e., if the Lagrangian depends only on $F$ and $G$), the Hamiltonian is a function of the two invariants \begin{equation}\label{eq:RS} R=-\frac{1}{2} \,H^{ab}H_{ab} \quad \text{and} \quad S=\frac{1}{4}H_{ab}\tilde H{}^{ab} \, . \end{equation} The relevant equations for the passage from the Lagrangian to the Hamiltonian description are given in the Appendix. There we will also give a criterion that guarantees that the constitutive law (\ref{eq:Hab}) can be solved for the field strength, at least locally. \subsection{Three-dimensional notation of field equations} In the following we will often use three-vector notation. The field strength has the three-dimensional representation \begin{equation}\label{eq:EB} E_{\alpha} = F_{\alpha 4} \, , \quad B^\alpha=\frac12\,\varepsilon^{\alpha\beta\gamma}F_{\beta\gamma} \, = \, - \, \tilde{F}{}^{\alpha 4} \, , \end{equation} where $\varepsilon ^{\alpha \beta \gamma}$ is the totally antisymmetric spatial Levi-Civita tensor, $\varepsilon ^{123}=1$. With $E^2 = \delta ^{\mu \nu} E_{\mu}E_{\nu}$, $B^2 = \delta _{\mu \nu} B^{\mu}B^{\nu}$ and $\mathbf{E} \cdot \mathbf{B} = E_{\mu}B^{\mu}$, the invariants (\ref{eq:FG}) read \begin{equation}\label{eq:EB1} F= B ^2- E ^2 \, , \quad G= \mathbf B \cdot \mathbf E \, . 
\end{equation} Analogously we write for the excitation \begin{equation}\label{eq:DH} D^{\alpha} = - H^{\alpha 4} \, , \quad H_{\alpha}=\frac12\,\varepsilon_{\alpha\beta\gamma}H^{\beta\gamma} \, = \, \tilde{H}{}_{\alpha 4} \, , \end{equation} which implies that the invariants (\ref{eq:RS}) are given by \begin{equation}\label{eq:DH1} R= D^2-H^2 \, , \quad S= \mathbf{D} \cdot \mathbf{H} \, . \end{equation} Then the constitutive law (\ref{eq:Hab}) reads \begin{equation} \begin{split} &D_{\alpha}=-2 \mathcal{L} _F E_{\alpha} \,+ \mathcal{L}_G B_{\alpha} \, ,\\ &H_{\alpha} =-2 \mathcal{L} _F B_{\alpha} \,- \mathcal{L}_G E_{\alpha} \, \,. \end{split} \end{equation} \subsection{Phase velocity and characteristic differential equation for $\mathbf{\mathcal{L}(F,G)}$ theories} The characteristic surfaces determined by a set of partial differential equations can be defined as the hypersurfaces along which the solutions may have discontinuities. As an alternative, the characteristic surfaces can also be defined with the help of approximate plane waves; in this second approach, they come about as the high-frequency limit of the surfaces of constant phase. In view of applications to electrodynamics, the first approach is discussed, e.g., in the book by Hehl and Obukhov \cite{HehlObukhov2003}. The characteristic surfaces are hypersurfaces $\psi = \mathrm{constant}$, where the gradient of $\psi$ has to satisfy, at each point of space-time, a fourth-order equation which is known as the \emph{dispersion relation} or as the \emph{Fresnel equation}. If viewed as a partial differential equation for $\psi$, this equation is usually called the \emph{characteristic equation} or the \emph{eikonal equation}. Using this approach, Obukhov and Rubilar~\cite{ObukhovRubilar2002} have determined the Fresnel equation (i.e., the characteristic equation) for an arbitrary $\mathcal{L}(F,G)$ theory. Earlier, Novello \textit{et al.}~\cite{Novello} had found an equivalent result in a different way. 
Their results show that, with the exception of a few special cases, theories of the Pleba{\'n}ski class predict birefringence \textit{in vacuo}. For background material on birefringence, and bimetricity, we refer to Visser \textit{et al.}~\cite{Visser} and, for the particular case of the Heisenberg-Euler theory, to Dittrich and Gies~\cite{Dittrich} and to Shore~\cite{Shore}. Here we want to briefly sketch how the Fresnel equation of an arbitrary $\mathcal{L} (F,G)$ theory can be derived with the help of an approximate-plane-wave ansatz. This is methodically different from the work of Obukhov and Rubilar~\cite{ObukhovRubilar2002} and Novello \textit{et al.}~\cite{Novello} but it leads to the same result. The general method goes back to Luneburg and is outlined, e.g., for electrodynamics in ordinary media, in the book by Kline and Kay~\cite{KlineKay1965}. For a discussion in a more general context, which includes the case to be considered here, we refer to Perlick~\cite{Perlick2011}. We consider a one-parameter family of electromagnetic fields of the form \begin{equation}\label{eq:pw} \begin{split} &F^{'ab} (x^m)=F^{ab} (x^m) \\ &\quad\quad+ \mathrm{Re} \left\{ e^{-i\psi(x^m)/\lambda} \sum\limits_{N=1}^\infty \left(\lambda^N F_N^{ab} (x^m)\right) \right\} \,, \end{split} \end{equation} where $F^{ab}$ is a given background field. $\lambda$ is a real bookkeeping parameter that is introduced in such a way that the high-frequency limit corresponds to $\lambda \to 0$. The summation sign in (\ref{eq:pw}) is to be understood in the sense of an asymptotic series and \emph{not} in the sense of a convergent series. While the amplitudes $F_N^{ab}$ are in general complex, the \emph{eikonal function} $\psi$ is real. It gives the surfaces of constant phase, $\psi(x^m)=\psi(x^\mu,t)=\text{constant}$. In 3-space, the normal to these surfaces is \begin{equation} n_\alpha=\frac{\partial_\alpha\psi}{\sqrt{\left(\partial_\beta\psi\right) \left(\partial^\beta\psi\right)}}\,.
\end{equation} The phase velocity $v_{\text{P}}^{\alpha}$ can be introduced as the 3-vector that gives the traveling speed of such a surface in the direction of its normal, \begin{equation}\label{eq:vP} v_{\mathrm{P}}^{\alpha} = - \frac{\partial ^{\alpha} \psi } {\left(\partial_\beta\psi\right)\left(\partial^\beta\psi\right)} \, \dfrac{\partial \psi}{\partial t}\,. \end{equation} Feeding the ansatz (\ref{eq:pw}) into Maxwell's equations and comparing equal powers of $\lambda$ gives a hierarchy of equations. In the lowest nontrivial order, which is known as the geometric optics approximation, one gets a first-order partial differential equation for $\psi$ which is the desired characteristic equation. If this program is carried through for an $\mathcal{L}(F,G)$ theory, one finds the following result which is in agreement with Obukhov and Rubilar's~\cite{ObukhovRubilar2002}. The characteristic equation reads \begin{equation}\label{eq:Q} \begin{split} &\mathcal{L}_F \Big\{ M\eta^{ij}\eta^{kl}+N\eta^{ij}F^{km}F^{l}_{\phantom lm} \\ &\quad\qquad +PF^{im}F^{j}_{\phantom jm}F^{kn}F^{l}_{\phantom ln} \Big\} p_ip_jp_kp_l=0 \end{split} \end{equation} where $p_i=\partial_i\psi$ and \begin{equation} \begin{split} &M=\mathcal{L}_F^2+2\,\mathcal{L}_F \mathcal{L}_{FG}\,G -\frac12\,\mathcal{L}_F\mathcal{L}_{GG}\,F \\ &\quad\qquad+\left(\mathcal{L}^2_{FG} -\mathcal{L}_{FF}\mathcal{L}_{GG}\right)G^2 \, , \\ &N=2\,\mathcal{L}_F \mathcal{L}_{FF} +\frac12\,\mathcal{L}_F\mathcal{L}_{GG} \\ &\quad\qquad+\left(\mathcal{L}^2_{FG} -\mathcal{L}_{FF}\mathcal{L}_{GG}\right)F \, , \\ &P=\mathcal{L}_{FF}\mathcal{L}_{GG}-\mathcal{L}^2_{FG}\,. 
\end{split} \end{equation} If $M$ has no zeros, (\ref{eq:Q}) can be factorized as \begin{equation}\label{eq:a1a2} \mathcal{L}_F \, M \, \big( a_1^{ij}p_ip_j \big) \big( a_2^{k \ell} p_k p_{\ell} \big) = 0 \end{equation} where \begin{equation}\label{eq:aA} a_A^{ik} = \eta^{ik}+\sigma_A F^{im}F^k_{\phantom km} \end{equation} for $A=1,2$ and \begin{equation}\label{eq:sigmaA} \sigma_{1/2} = \frac{N}{2M}\pm\sqrt{\frac{N^2}{4M^2}-\frac{P}{M}} \, . \end{equation} In the following we restrict ourselves to Lagrangians such that $M$ and $\mathcal{L}_F$ have no zeros. This excludes some degenerate cases which are hardly of physical interest. Then the characteristic equation is equivalent to \begin{equation}\label{eq:cones} a_A^{ik}p_ip_k=0 \, , \quad A=1,2 \, , \end{equation} which is, up to conformal transformations, in agreement with the results of Novello \textit{et al.}~\cite{Novello}; this condition is sometimes called the ``light-cone condition'' (compare, for example,~\cite{Dittrich}). Generalizing a standard terminology from electrodynamics in media, $a_1^{ik}$ and $a_2^{ik}$ are called the \emph{optical metrics} of the vacuum in the $\mathcal{L}(F,G)$ theory. If the two optical metrics do not coincide, i.e., if $\sigma _1 \neq \sigma _2$, there is birefringence in vacuum. If one considers the next order in the above-mentioned hierarchy of equations, one sees that the case $A=1$ and the case $A=2$ correspond to two different polarization directions. Note that $\sigma_{1}$ and $\sigma_2$ are always real, because \begin{gather}\label{eq:sigmareal} N^2-4MP= \\ \nonumber \Big( 2 \mathcal{L}_F \mathcal{L}_{FF} - \dfrac{1}{2} \mathcal{L}_F \mathcal{L}_{GG} - PF \Big) ^2 + 4 \Big( \mathcal{L}_F \mathcal{L}_{FG}-PG \Big) ^2 \end{gather} is a sum of two squares, and that $\sigma _1$ and $\sigma _2$ depend only on the two field invariants $F$ and $G$.
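The identity behind (\ref{eq:sigmareal}) is purely algebraic and easy to spot-check numerically. The following Python sketch (our own scaffolding, not tied to any particular Lagrangian) samples random values for $\mathcal{L}_F$, $\mathcal{L}_{FF}$, $\mathcal{L}_{GG}$, $\mathcal{L}_{FG}$, $F$ and $G$ and verifies both that $N^2-4MP$ equals the stated sum of two squares and that it is nonnegative, so that $\sigma_1$ and $\sigma_2$ are indeed real:

```python
import random

def coeffs(LF, LFF, LGG, LFG, F, G):
    """The coefficients M, N, P of the Fresnel equation for an L(F,G) theory."""
    Q = LFG ** 2 - LFF * LGG          # note Q = -P
    M = LF ** 2 + 2 * LF * LFG * G - 0.5 * LF * LGG * F + Q * G ** 2
    N = 2 * LF * LFF + 0.5 * LF * LGG + Q * F
    P = -Q
    return M, N, P

random.seed(1)
for _ in range(1000):
    vals = [random.uniform(-2.0, 2.0) for _ in range(6)]
    LF, LFF, LGG, LFG, F, G = vals
    M, N, P = coeffs(*vals)
    discriminant = N ** 2 - 4 * M * P
    sum_of_squares = ((2 * LF * LFF - 0.5 * LF * LGG - P * F) ** 2
                      + 4 * (LF * LFG - P * G) ** 2)
    assert abs(discriminant - sum_of_squares) < 1e-8
    assert discriminant >= -1e-8   # hence sigma_1, sigma_2 are real
print("identity confirmed at 1000 random points")
```

A symbolic expansion by hand (or with a computer algebra system) confirms the same cancellation exactly, not just at sampled points.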
In the standard vacuum Maxwell theory we have $\sigma _1 = \sigma _2 =0$, so these two functions characterize the deviation of our $\mathcal{L}(F,G)$ theory from the standard theory at the level of geometric optics. If the Lagrangian is of the special form $\mathcal{L}(F,G)=\mathcal{L}(\alpha F+\beta G)$ with some constant factors $\alpha$ and $\beta$, one has $P= 0$ and therefore $\sigma_{1}=0$, i.e., one polarization mode behaves as in the standard Maxwell vacuum theory. This is true, in particular, if the Lagrangian is independent of $G$. (It is also true if the Lagrangian is independent of $F$, but this case was excluded by our assumption $\mathcal{L}_F \neq 0$.) Also, it is interesting to remark that two Lagrangians $\mathcal{L}$ and $\mathcal{L}+\beta G$ give the same characteristic equation, i.e., the two cases are not distinguishable at the level of geometrical optics. Of course, if one restricts to parity-invariant Lagrangians, adding a term of the form $\beta G$ is forbidden. For some of our applications it will be desirable to write the optical metrics in terms of the excitation, rather than in terms of the field strength. It is then advisable to start from a Hamiltonian formulation. It was mentioned already at the end of Sec.~\ref{subsec:Pleb} that the Pleba{\'n}ski class of theories can be written in terms of a Hamiltonian $\mathcal{H}(R,S)$ rather than in terms of a Lagrangian $\mathcal{L}(F,G)$. In the Appendix we derive replacement rules showing how the relevant Hamiltonian expressions can be found from the Lagrangian expressions.
By applying these replacement rules, we find that the optical metrics can be rewritten as \begin{equation}\label{eq:aAH} a{}_A^{ik} = \eta^{ik}+\hat\sigma_A \tilde{H}{}^{im} \tilde{H}{}^k_{\phantom km} \end{equation} where \begin{equation}\label{eq:sigmaHam} \hat{\sigma}{}_A= \frac{\hat N}{2\hat M}\pm\sqrt{\frac{\hat N^2}{4\hat M^2}-\frac{\hat P}{\hat M}} \end{equation} with the abbreviations \begin{equation} \begin{split} &\hat M=\mathcal{H}_R^2+2\, \mathcal{H} _R \mathcal{H}_{RS} \,S-\frac12\,\mathcal{H}_R \mathcal{H}_{SS}\,R \\ & \quad\qquad+ \left(\mathcal{H}^2_{RS}-\mathcal{H}_{RR} \mathcal{H}_{SS}\right)S^2 \, , \\ &\hat N=2\,\mathcal{H}_R \mathcal{H}_{RR} +\frac12\,\mathcal{H}_R \mathcal{H}_{SS} \\ &\quad\qquad + \left(\mathcal{H}^2_{RS}-\mathcal{H}_{RR}\mathcal{H}_{SS}\right)R \, ,\\ &\hat P=\mathcal{H}_{RR} \mathcal{H}_{SS}- \mathcal{H}^2_{RS}\,. \end{split} \end{equation} In the following it will be convenient to use the three-vector notation of (\ref{eq:EB}) and (\ref{eq:DH}). We decompose each of the 3-vectors $\boldsymbol{E}$, $\boldsymbol{B}$, $\boldsymbol{D}$, and $\boldsymbol{H}$ into amplitude and direction, \begin{equation}\label{eq:vw} \begin{split} &\boldsymbol B (x^\mu,t) =B(x^\mu,t)\,\boldsymbol v(x^\mu,t)\, ,\\[0.1cm] &\boldsymbol E (x^\mu,t) =E(x^\mu,t)\,\boldsymbol w(x^\mu,t)\, ,\\[0.1cm] &\boldsymbol H (x^\mu,t) =H(x^\mu,t)\,\boldsymbol r(x^\mu,t)\, ,\\[0.1cm] &\boldsymbol D (x^\mu,t) =D(x^\mu,t)\,\boldsymbol s(x^\mu,t) \, , \end{split} \end{equation} where $|\boldsymbol v|=|\boldsymbol w|=|\boldsymbol r|=|\boldsymbol s|=1$. 
The spatial and temporal parts of $F^{im}F^k_{\phantom km}p_ip_k$, which enter into (\ref{eq:a1a2}), can then be written as \begin{gather} \nonumber F^{\alpha m}F^\beta_{\phantom \beta m}p_\alpha p_\beta = B^2\left[\boldsymbol p \cdot \boldsymbol p-\left(\boldsymbol v \cdot \boldsymbol p\right)^2\right]-E^2\left(\boldsymbol w \cdot \boldsymbol p\right)^2 \\ \label{eq:FF} F^{4 m}F^\beta_{\phantom \beta m}p_\beta = B\,E\, \boldsymbol{p} \cdot (\boldsymbol{w}\times\boldsymbol{v}) \\[0.1cm] \nonumber F^{4 m}F^4_{\phantom 4 m} = E^2 \, . \end{gather} Similarly, \begin{gather} \nonumber \tilde H{}^{\alpha m}\tilde H{}^{\beta}_{\phantom \beta m}p_\alpha p_\beta= D^2\left[\boldsymbol p \cdot \boldsymbol p-\left(\boldsymbol s \cdot \boldsymbol p\right)^2\right]-H^2\left(\boldsymbol r \cdot \boldsymbol p\right)^2 \\ \label{eq:FFt} \tilde H{}^{4 m}\tilde H{}^{\beta}_{\phantom \beta m}p_\beta = D\,H\,\boldsymbol{p} \cdot (\boldsymbol{s}\times\boldsymbol{r}) \\[0.1cm] \nonumber \tilde H{}^{4 m}\tilde H{}^{4}_{\phantom 4 m} = H^2 \end{gather} which will be used later. \subsection{Ray velocity and Hamilton equations for the rays} Interpreting $Q_A = a_A^{ik}p_ip_k$ as a Hamiltonian, the characteristic partial differential equation $a_A^{ik}\partial _i \psi \partial _k \psi =0$ can be viewed as a Hamilton-Jacobi equation. The corresponding set of Hamilton equations, or canonical equations, determines the \emph{bicharacteristic curves} or \emph{rays}. For background material on the notions of characteristics and bicharacteristics we refer to Courant and Hilbert \cite{CourantHilbert1962}. The rays are defined with respect to $Q_1$ and $Q_2$ separately, i.e., they depend on the polarization. The canonical equations read \begin{equation} \frac{\mathrm dx^a}{\mathrm ds}=\frac{\partial Q_A}{\partial p_a}\, , \quad \frac{\mathrm dp_a}{\mathrm ds}=-\frac{\partial Q_A}{\partial x^a} \, . \end{equation} Here $s$ is a parameter along the rays which has no obvious physical meaning. 
In the following it will be convenient to reparametrize the rays by the time coordinate $t$, cf.~\cite{CourantHilbert1962}. In order to do this, we have to assume that the rays of the Hamiltonian $Q_A$ are causal (i.e., timelike or lightlike) with respect to the Minkowski background metric. It was shown by Obukhov and Rubilar~\cite{ObukhovRubilar2002} that the optical metrics are always of Lorentzian signature, provided that we exclude the pathological cases where they degenerate. However, no convenient criterion on the Lagrangian $\mathcal{L}(F,G)$ seems to be known that guarantees causality of the rays with respect to the background metric. We will investigate this question in a separate paper; here we just restrict our discussion, from now on, to Lagrangians where the rays of the optical metrics are causal with respect to the background Minkowski metric. Then it is guaranteed that $a_A^{44} < 0$ and we may write the optical metrics as \begin{equation}\label{eq:fp} a_A^{ik}p_ip_k = \frac{a_A^{44}}{c^2} (c\,p_4+H_A^+)(c\,p_4+H_A^-) \end{equation} where \begin{equation}\label{eq:HA} H_A^{\pm} = \, c \, \left( \frac{a_A^{\alpha 4}p_\alpha}{a_A^{44}}\pm \sqrt{\left(\frac{a_A^{\alpha 4}p_\alpha}{a_A^{44}}\right)^2- \frac{a_A^{\alpha\beta}p_\alpha p_\beta}{a_A^{44}}} \, \right)\, . \end{equation} Equation (\ref{eq:fp}) corresponds to splitting the null cone of the optical metric $a_A^{ik}$ into a future and a past cone. 
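The factorization (\ref{eq:fp}) is an algebraic identity in $p_i$. A small numerical check (units with $c=1$; a symmetric sample metric with $a^{44}<0$ chosen purely for illustration; Python indices $0,1,2$ for the spatial components and $3$ for the index $4$) might look as follows:

```python
import math

c = 1.0  # units with c = 1 for simplicity

# sample symmetric "optical metric" of Lorentzian signature, a[3][3] = a^{44} < 0
a = [[1.0, 0.1, 0.0, 0.2],
     [0.1, 1.2, 0.0, 0.0],
     [0.0, 0.0, 0.9, 0.1],
     [0.2, 0.0, 0.1, -1.0]]
p = [0.3, -0.5, 0.7, 0.4]   # arbitrary covector p_i (index 3 <-> p_4)

quad = sum(a[i][k] * p[i] * p[k] for i in range(4) for k in range(4))

# Hamiltonians H_A^{+-} of Eq. (HA), alpha running over the spatial indices
lin = sum(a[al][3] * p[al] for al in range(3)) / a[3][3]
sp  = sum(a[al][be] * p[al] * p[be] for al in range(3) for be in range(3)) / a[3][3]
Hp = c * (lin + math.sqrt(lin**2 - sp))
Hm = c * (lin - math.sqrt(lin**2 - sp))

# factorization (fp): a^{ik} p_i p_k = (a^{44}/c^2)(c p_4 + H^+)(c p_4 + H^-)
assert abs(quad - a[3][3] / c**2 * (c * p[3] + Hp) * (c * p[3] + Hm)) < 1e-12
```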
If we restrict our work here to future-oriented rays, we can write the characteristic equation as \begin{equation}\label{eq:HAp} c \, p_4 \, + \, H_A^+ \, = \, 0 \end{equation} and the canonical equations read \begin{equation}\label{eq:Halpha} \frac{\mathrm dx^\alpha}{\mathrm dt}\, = \, \frac{\partial H^+_A}{\partial p_\alpha} \, , \quad \frac{\mathrm dp_\alpha}{\mathrm dt} \, = \, - \, \frac{\partial H^+_A}{\partial x^\alpha} \, , \end{equation} \begin{equation}\label{eq:H4} \frac{\mathrm dx^4}{\mathrm dt}=c \, , \quad \frac{\mathrm dp_4}{\mathrm dt}=- \,\frac{\partial H_A^+}{\partial x^4} \, . \end{equation} \noindent If $a_A^{ik}$ is known, integration of (\ref{eq:Halpha}) gives the spatial paths of the rays. The first equation of (\ref{eq:H4}) says that the new parameter $t$ coincides with the coordinate time, while the second equation gives the change of the frequency of light. The ray velocity can be read from (\ref{eq:Halpha}), \begin{equation} v_{\mathrm{S}}^\alpha:=\frac{\mathrm dx^\alpha}{\mathrm dt}=\frac{\partial H^+_A}{\partial p_\alpha}\,. \end{equation} \noindent The phase velocity (\ref{eq:vP}) can be rewritten in terms of the Hamiltonian as \begin{equation}\label{eq:vp2} v_{\mathrm{P}}^\beta =-\frac{c\,p_4}{{p_\alpha p^\alpha}}\,p^{\beta}= \frac{H^+_A}{{p_\alpha p^\alpha}}\,p^{\beta}\,. \end{equation} Phase and ray velocity coincide if and only if \begin{equation} \frac{H_A^{+}}{p_\alpha p^\alpha}\,p^\beta=\frac{\partial H_A^+}{\partial p_\beta} \end{equation} which is true if and only if $H^+_A$ is of the form \begin{equation}\label{GeschBeding} H_A^+ \, = \, f(x^\mu,ct) \, \sqrt{p_{\alpha} p^{\alpha}} \end{equation} where $f$ is any function of the space-time coordinates. Equation (\ref{GeschBeding}) is satisfied in the usual vacuum theory of Maxwell but not in general in other $\mathcal{L}(F,G)$ theories. 
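As an illustration of (\ref{GeschBeding}), consider a Hamiltonian of the form $H=f\,|\boldsymbol p|$ with constant $f$ (a special case of (\ref{GeschBeding}); with a Euclidean spatial metric, $p^\alpha=p_\alpha$). A finite-difference gradient then confirms $\partial H/\partial p_\alpha=(H/p_\beta p^\beta)\,p_\alpha$, i.e., the equality of ray and phase velocity; the numerical values are arbitrary:

```python
import math

def H(p, f=1.3):
    """Hamiltonian of the form (GeschBeding): H = f * |p|, f constant here."""
    return f * math.sqrt(sum(q * q for q in p))

p = [0.4, -0.2, 0.5]   # arbitrary spatial covector; Euclidean metric, p^a = p_a
eps = 1e-6

# ray velocity dH/dp_alpha by central finite differences
ray = [(H(p[:al] + [p[al] + eps] + p[al + 1:]) -
        H(p[:al] + [p[al] - eps] + p[al + 1:])) / (2 * eps) for al in range(3)]
# phase velocity (H / p.p) * p
pp = sum(q * q for q in p)
phase = [H(p) / pp * q for q in p]

assert max(abs(r - v) for r, v in zip(ray, phase)) < 1e-6
```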
Note that (\ref{GeschBeding}) implies \begin{equation} \frac{\mathrm dx^\alpha}{\mathrm dt}=\frac{\partial H_A^+}{\partial p_\alpha} =\frac{f(x^{\mu},ct) \, p^\alpha}{\sqrt{p^\beta p_\beta}}\,, \end{equation} i.e., the condition $v^\alpha_{\mathrm{S}}=v^\alpha_{\mathrm{P}}$ can hold only if $dx^{\alpha}/dt$ and $p^{\alpha}$ are parallel. \subsection{Parallel electric and magnetic fields}\label{subsec:par} We consider now the special case that $\boldsymbol{E}$ and $\boldsymbol{B}$ are parallel, i.e., that $\boldsymbol{v} = \boldsymbol{w}$ in the notation of (\ref{eq:vw}). This case covers, of course, in particular the situation that one of the two field strengths, $\boldsymbol{E}$ or $\boldsymbol{B}$, is zero. With the aid of the transformation (\ref{DualRotTab}), described in the Appendix, we will then discuss, at the end of this section, the case that the excitations $\boldsymbol{D}$ and $\boldsymbol{H}$ are parallel. If we specialize (\ref{eq:FF}) to the case $\boldsymbol{v} = \boldsymbol{w}$ and insert the result into (\ref{eq:aA}), the optical metrics read \begin{equation}\label{eq:QApar} \begin{split} &a_A^{ik} p_i p_k = - (1- \sigma_A E^2) p_4^2 +(1 + \sigma_A B^2) \boldsymbol{p}^2 \\ &\qquad\qquad\qquad\qquad\quad\qquad -\sigma _A (B^2+E^2)(\boldsymbol{w} \cdot \boldsymbol{p})^2 \, . \end{split} \end{equation} Hence, the Hamiltonian $H_A^+$ from (\ref{eq:HA}) simplifies to \begin{equation}\label{eq:HApar} H_A^+ = c \, \sqrt{ \dfrac{(1+\sigma _A B^2)}{(1-\sigma _A E^2)} \Big( | \boldsymbol{p} | ^2 - ( \boldsymbol{w} \cdot \boldsymbol{p})^2 \Big) + ( \boldsymbol{w} \cdot \boldsymbol{p} )^2 \;} \end{equation} and the phase velocity (\ref{eq:vp2}) reads \begin{equation}\label{eq:vppar} v_{\mathrm{P}} = c \, \sqrt{ \dfrac{(1+\sigma _A B^2)}{(1-\sigma _A E^2)}\, \left( 1 - \dfrac{( \boldsymbol{w} \cdot \boldsymbol{p})^2}{ | \boldsymbol{p} |^2 }\right) + \dfrac{( \boldsymbol{w} \cdot \boldsymbol{p})^2}{ | \boldsymbol{p} |^2 }} \, . 
\end{equation} If we assume, in addition, that the unit vector of the background field is homogeneous, $\partial_\alpha w^\beta=0$, and that the amplitudes of the field strengths change only in the direction of $\boldsymbol p$, $\text{grad}\,B \propto \boldsymbol p$ as well as $\text{grad}\,E \propto \boldsymbol p$, the canonical equations (\ref{eq:Halpha}) reduce to \begin{equation}\label{eq:Halphapar} \begin{split} &\frac{\mathrm dx^\alpha}{\mathrm dt}=\frac{c}{H^+_A} \left\{ \dfrac{(1+\sigma _A B^2)}{(1-\sigma _A E^2)} \Big( p^{\alpha} - ( \boldsymbol{w} \cdot \boldsymbol{p} )\, w^{\alpha} \Big) \right. \\ &\qquad\qquad\qquad\quad+\left.( \boldsymbol{w} \cdot \boldsymbol{p} ) \, w^{\alpha}\right\} \,;\qquad\frac{\mathrm dp_\alpha}{\mathrm dt}\propto p_\alpha\,. \end{split} \end{equation} The last equation implies that the direction of $p_\alpha$ is preserved along the ray. If additionally the background fields are static, $\partial \boldsymbol{E} / \partial t = \boldsymbol{0}$ and $\partial \boldsymbol{B} / \partial t = \boldsymbol{0}$, the second equation of (\ref{eq:H4}) reduces to \begin{equation}\label{eq:H4stat} \frac{\mathrm dp_4}{\mathrm dt}=0 \end{equation} which means that, in this case, the background fields do not change the frequency of light. We are now interested in the special case that (\ref{GeschBeding}) holds which guarantees that phase velocity and ray velocity are equal and that $dx^{\alpha}/dt$ is parallel to $p^{\alpha}$. As the direction of $p^{\alpha}$ is preserved, the ray must then be a straight line. There are two main cases where the Hamiltonian takes the form of (\ref{GeschBeding}). First, if $\boldsymbol{p} \, || \, \boldsymbol{w}$, we find from (\ref{eq:HApar}), (\ref{eq:vppar}) and (\ref{eq:Halphapar}) that $H_A^+=c \, | \boldsymbol{p}|$, $v_{\mathrm{P}} = c$, and $\mathrm{d}x^{\alpha}/ \mathrm{d} t = c \, p^{\alpha} / | \boldsymbol{p}|$, i.e., in this case the background fields have no effect. 
Second, if $\boldsymbol{p} \cdot \boldsymbol{w} =0$, one gets \begin{eqnarray} &&\label{eq:vpmax1}H_A^+ \, = \, c \, | \boldsymbol{p} | \, \sqrt{ \dfrac{1+\sigma _A B^2}{1-\sigma _A E^2}\,} \, , \\[0.1cm] &&\label{eq:vpmax3} v_{\mathrm{P}} \, = \, c \, \sqrt{ \dfrac{1+\sigma _A B^2}{1-\sigma _A E^2}} \, , \\[0.1cm] &&\label{eq:vpmax2}\frac{\mathrm dx^\alpha}{\mathrm dt}= c \, \sqrt{ \dfrac{1+\sigma _A B^2}{1-\sigma _A E^2}} \, \dfrac{p^{\alpha}}{|\boldsymbol{p}|} \, . \end{eqnarray} This is the case most appropriate for the proposed experiment, because it achieves two goals: the rays do not deviate from a straight line, but the phase velocity does change in comparison with the standard Maxwell vacuum theory. Now we go over to the case that $\boldsymbol{D}$ and $\boldsymbol{H}$ are parallel, i.e., that $\boldsymbol r=\boldsymbol s$. With the help of (\ref{DualRotTab}) from the Appendix we find that in this case (\ref{eq:HApar}) and (\ref{eq:vppar}) have to be replaced with \begin{equation} H_A^+ = c \, \sqrt{ \dfrac{(1+ \hat{\sigma}{}_A D^2)}{(1- \hat{\sigma}{}_A H^2)} \Big( | \boldsymbol{p} | ^2 - ( \boldsymbol{s} \cdot \boldsymbol{p})^2 \Big) + ( \boldsymbol{s} \cdot \boldsymbol{p} )^2 \; } \end{equation} \begin{equation} v_{\mathrm{P}} = c \, \sqrt{ \dfrac{(1+\hat{\sigma} _A D^2)}{(1-\hat{\sigma} _A H^2)} \,\Big( 1 - \dfrac{( \boldsymbol{s} \cdot \boldsymbol{p})^2}{| \boldsymbol{p}|^2 } \Big) +\dfrac{( \boldsymbol{s} \cdot \boldsymbol{p})^2}{| \boldsymbol{p}|^2 } } \, . 
\end{equation} As above one gets for homogeneous and time-independent excitations \begin{equation} \begin{split} &\frac{\mathrm dx^\alpha}{\mathrm dt}=\frac{c}{H^+_A} \left\{ \dfrac{(1+\hat{\sigma} _A D^2)}{(1-\hat{\sigma} _A H^2)} \right.\\ &\qquad\qquad\qquad\quad\left.\times\Big( p^{\alpha} - (\boldsymbol{s} \cdot \boldsymbol{p} ) s^{\alpha} \Big) + ( \boldsymbol{s} \cdot \boldsymbol{p} ) \, s^{\alpha} \, \right\} \, ,\\ &\frac{\mathrm dp_\alpha}{\mathrm dt} \propto p_{\alpha} \, , \quad \frac{\mathrm dp_4}{\mathrm dt}=0 \,. \end{split} \end{equation} Again, the case that $\boldsymbol p$ is parallel to $\boldsymbol s$ leads to $v_{\mathrm{P}}=c$, so this case is of no interest for us. If, however, $\boldsymbol p\cdot\boldsymbol s=0$, we get \begin{eqnarray} &&H_A^+ \, = \, c \, | \boldsymbol{p} | \, \sqrt{ \dfrac{1+ \hat{\sigma}{}_A D^2}{1- \hat{\sigma}{}_A H^2}\,} \, , \\[0.1cm] &&\label{eq:vpmax4} v_{\mathrm{P}} \, = \, c \, \sqrt{ \dfrac{1+ \hat{\sigma}{}_A D^2}{1- \hat{\sigma}{}_A H^2}} \, , \\[0.1cm] &&\frac{\mathrm dx^\alpha}{\mathrm dt}= c \, \sqrt{ \dfrac{1+ \hat{\sigma}{}_A D^2}{1- \hat{\sigma}{}_A H^2}} \, \dfrac{p^{\alpha}}{|\boldsymbol{p}|} \end{eqnarray} and there is no deviation of a light ray from a straight line. \section{An interferometric experiment for testing nonlinear electrodynamics}\label{experiment} There are two ways in which Michelson interferometry can be used for testing nonlinear electrodynamics. First, a strong background field could be applied to the light beam in one arm of the interferometer. One would compare the situation where the background field is switched on with the situation where it is switched off, cf.~\cite{Boer}. Second, one could place the whole interferometer in a strong background field. One would then search for changes in the interference pattern if the interferometer is being rotated. The first possibility is reasonable if one thinks of a large interferometer, with an arm length of several meters at least. 
The second possibility is reasonable if one thinks of a tabletop interferometer. As an alternative to using a traditional Michelson interferometer, one could also use a pair of optical resonators as they have been used for high-precision Michelson-Morley experiments in recent years. As these resonators have a typical size of only a few centimeters, one would do the experiment with the whole instrument placed in a background field. With the resonators oriented perpendicularly to each other, one would then compare the situation where the field is switched on with the situation where it is switched off, or one would rotate the whole instrument while keeping the field switched on. In the following we first discuss the setup of the experiment where a traditional Michelson interferometer is used and the field is placed in one arm. This is the variant which brings out the basic idea of the experiment most clearly. Later in this section we discuss the other variants. Figure~\ref{MichelsonBild} shows the interferometer with the background field in the region denoted $BF$. The ray leaves the source $S$ and is divided at the semipermeable mirror $SPM$. After reflection at the mirrors $M_1$ and $M_2$, respectively, both parts interfere at $D$. If the background field is switched off, both parts always travel with the standard vacuum phase velocity $c$. If the background field is switched on, the part which travels along $l_2$ crosses the region $BF$ with a different phase velocity, according to nonlinear electrodynamics. This would lead to a change of the interference pattern. We consider the background field to be static, with one of the four fields $\boldsymbol{E}$, $\boldsymbol{B}$, $\boldsymbol{D}$, or $\boldsymbol{H}$ vanishing. Each of these four cases is covered by the calculations of the preceding section. We assume that the background field is perpendicular to the propagation direction of the light. 
We have seen that in this situation the ray does not deviate from a straight line. \begin{figure} \begin{center} \includegraphics[width=9cm]{Michelson_en_neu.pdf} \end{center} \caption{Experimental setup} \label{MichelsonBild} \end{figure} \vspace{0.2cm} Obviously the travel times of the ray along the different sections are given by \begin{equation} ct_1=l_1\,,\quad ct'_2=l'_2\,,\quad ct''_2=l''_2\,,\quad v_{\mathrm{P}} \, t_{BF}=l_{BF}\,. \end{equation} Without background field the phase velocity is equal to $c$ everywhere, including the region $BF$. The time delay $\Delta t_I$ of the two arms is therefore given by \begin{equation} \Delta t_I= 2(t_1-t'_2-t''_2-t_{BF})=\frac{2}{c}(l_1-l'_2-l''_2-l_{BF})\,. \end{equation} With background field the phase velocity in the region $BF$ is $v_{\mathrm{P}}$ which is, in general, different from $c$. The time delay $\Delta t_{II}$ of the two arms is therefore given by \begin{equation} \begin{split} \Delta t_{II}&= 2(t_1-t'_2-t''_2-t_{BF})\\ &=\frac{2}{c}(l_1-l'_2-l''_2-\frac{c}{v_{\mathrm{P}}}l_{BF})\,. \end{split} \end{equation} The change of the interference pattern is given by the time difference \begin{equation} \Delta t= \Delta t_{II}-\Delta t_I=\frac{2\,l_{BF}}{c}\left(1-\frac{c}{v_{\mathrm{P}}}\right)\,. \end{equation} This leads to a line shift of \begin{equation}\label{eq:Delta} \Delta= \frac{\omega \, \Delta t}{2 \, \pi} =\frac{\omega\,l_{BF}}{\pi \, c}\left(1-\frac{c}{v_{\mathrm{P}}}\right)\,. \end{equation} Here $\omega$ denotes the frequency of the light. Note that $\omega$ is a constant because the background field is assumed static. We evaluate the general result for each of the four cases $\boldsymbol{E=0}$, $\boldsymbol{B=0}$, $\boldsymbol{D=0}$ and $\boldsymbol{H=0}$. Note that in general $\boldsymbol{E=0}$ is \emph{not} equivalent to $\boldsymbol{D=0}$ and $\boldsymbol{B=0}$ is \emph{not} equivalent to $\boldsymbol{H=0}$. 
\begin{description} \item[a) Magnetostatic field strength $\boldsymbol{(E=0)}$] \hfill \parindent=-0.4cm From (\ref{eq:vpmax3}) we find that \begin{equation} \begin{split} v_{\mathrm{P}} &=c\,\sqrt{1+\sigma_{1/2}B^2}\\[0.1cm] &=c\left(1+\sigma_{1/2}(0)\frac{B^2}{2} + \, \dots \, \right) \end{split} \end{equation} \parindent=-0.4cm and hence, by (\ref{eq:Delta}), \begin{equation} \begin{split} \Delta &=\frac{\omega\,l_{BF}}{\pi \, c} \left(1-\frac{1}{\sqrt{1+\sigma_{1/2}B^2}}\right) \\[0.1cm] &=\frac{\omega \, l_{BF}\sigma_{1/2}(0)B^2}{2 \, \pi \, c} + \, \dots \, \end{split} \end{equation} \item[b) Electrostatic field strength $\boldsymbol{(B=0)}$] \hfill \parindent=-0.4cm From (\ref{eq:vpmax3}) we find that \begin{equation} \begin{split} v_{\mathrm{P}} &=c\,\frac{1}{\sqrt{1-\sigma_{1/2}E^2}}\\[0.1cm] &=c\left(1+\sigma_{1/2}(0)\frac{E^2}{2} + \, \dots \, \right ) \end{split} \end{equation} \parindent=-0.4cm and hence, by (\ref{eq:Delta}), \begin{equation} \begin{split} \Delta&=\frac{\omega\,l_{BF}}{c\,\pi}\left(1-\sqrt{1-\sigma_{1/2}E^2}\right)\\[0.1cm] &=\frac{\omega \, l_{BF}\sigma_{1/2}(0)E^2}{2 \, \pi \, c} + \, \dots \end{split} \end{equation} \item[c) Magnetostatic excitation $\boldsymbol{(D=0)}$] \hfill \parindent=-0.4cm From (\ref{eq:vpmax4}) we find that \begin{equation} \begin{split} v_{\mathrm{P}} &=c\,\frac{1}{\sqrt{1-\hat\sigma_{1/2}H^2}}\\[0.1cm] &=c\left(1+\hat\sigma_{1/2}(0)\frac{H^2}{2} + \, \dots \, \right) \end{split} \end{equation} \parindent=-0.4cm and hence, by (\ref{eq:Delta}), \begin{equation} \begin{split} \Delta&=\frac{\omega\,l_{BF}}{c\,\pi}\left(1-\sqrt{1-\hat\sigma_{1/2}H^2}\right)\\[0.1cm] &=\frac{\omega \, l_{BF}\hat\sigma_{1/2}(0)H^2}{2 \, \pi \, c} + \, \dots \end{split} \end{equation} \item[d) Electrostatic excitation $\boldsymbol{(H=0)}$] \hfill \parindent=-0.4cm From (\ref{eq:vpmax4}) we find that \begin{equation} \begin{split} v_{\mathrm{P}} &=c\,\sqrt{1+\hat\sigma_{1/2}D^2}\\[0.1cm] &=c\left(1+\hat\sigma_{1/2}(0)\frac{D^2}{2} + 
\, \dots \, \right) \end{split} \end{equation} \parindent=-0.4cm and hence, by (\ref{eq:Delta}), \begin{equation} \begin{split} \Delta&=\frac{\omega\,l_{BF}}{c\,\pi} \left(1-\frac{1}{\sqrt{1+\hat \sigma_{1/2}D^2}}\right) \\[0.1cm] &=\frac{\omega \, l_{BF}\hat\sigma_{1/2}(0)D^2}{2 \, \pi \, c} +\, \dots \end{split} \end{equation} \end{description} If one writes $X$ for $E$, $B$, $D$, or $H$, one can combine all results up to first order in the form \begin{equation}\label{Allg1OrdPhasenG} v_{\mathrm{P}}=c\left(1+\overset{X}\sigma_{1/2}(0) \frac{X^2}{2} + \, \dots \, \right) \end{equation} and \begin{equation}\label{Allg1OrdDelta} \Delta=\frac{\omega \, l_{BF} \overset{X}\sigma_{1/2}(0)X^2}{2 \, \pi \, c}+ \, \dots \end{equation} Here $\overset{X}\sigma_{1/2}(0)$ denotes either $\sigma_{1/2}(0)$ or $\hat\sigma_{1/2}(0)$, depending on whether $X$ is a field strength or an excitation. Note that $2\pi c/\omega$ is the wavelength in the case of a vanishing background field. According to nonlinear electrodynamics, the wavelength changes when the ray travels through the background field $BF$. This means that, if one replaces the angular frequency $\omega$ with the wavelength $\lambda=2\pi c/\omega$, \begin{equation}\label{Allg1OrdDelta_lambda} \Delta=\frac{l_{BF}\overset{X}\sigma_{1/2}(0)X^2}{\lambda}+ \, \dots \, \,, \end{equation} one has to keep in mind that $\lambda$ is not the wavelength of the light when passing through the background field but the wavelength of the light as emitted by the source. The results of this section can also be applied to the case where the whole interferometer is inside the background field. Here one does not switch the background field on and off but rotates the interferometer by $90^\circ$ so that in the initial position the first arm is orthogonal to the field and the second arm is parallel to the field, while in the end position it is the other way around. 
Then one gets instead of the preceding formulas the following ones: \begin{equation} \Delta t=\frac{2\left(l_1+l_2\right)}{c}\left(1-\frac{c}{v_{\mathrm{P}}}\right) \, , \end{equation} \begin{equation} \Delta=\frac{\omega \, \left(l_1+l_2\right) \overset{X}\sigma_{1/2}(0)X^2}{2 \, \pi \, c}+ \, \dots \end{equation} This means that one has to replace $l_{BF}$ by $l_1+l_2$ in all formulas to go from the first setup to the second one. As an alternative to using a traditional Michelson interferometer with two arms, we will now discuss a setup with optical resonators as has been used frequently in recent years for high-precision Michelson interferometry; see e.g. \cite{SH} and the references therein. Here one uses a laser which is stabilized to the eigenfrequency $\nu _{\mathrm{eigen}} = N v_{\mathrm{P}}/(2L)$ of an optical resonator, where $N$ is the mode number, $v_{\mathrm{P}}$ is the phase velocity of light and $L$ is the length of the resonator. The quality of a resonator is determined by its finesse $F$, typically $F = 100 \, 000$. In a figurative way, a resonator may be viewed as equivalent to a traditional interferometer whose arm length is folded $F$ times. If $v_{\mathrm{P}}$ and $L$ undergo a change, the eigenfrequency of the resonator and therefore the frequency of the stabilized laser changes as \begin{equation} \frac{\delta\nu}{\nu}=\frac{\delta v_{\mathrm{P}}}{v_{\mathrm{P}}} -\frac{\delta L}{L}\,. \end{equation} If the resonator is put into a homogeneous and static $\boldsymbol{E}$, $\boldsymbol{B}$, $\boldsymbol{D}$, or $\boldsymbol{H}$ field, with its axis perpendicular to the field, the phase velocity of light changes according to \begin{equation} \frac{\delta v_{\mathrm{P}}}{v_{\mathrm{P}}} \approx \overset{X}\sigma_{1/2}(0)\frac{X^2}{2}\,. \end{equation} if we use the approximations of (\ref{Allg1OrdPhasenG}). 
As a direct measurement of $\delta \nu$ is not possible, one superimposes to the first laser a second reference laser stabilized to the eigenfrequency $\nu _{\mathrm{ref}}$ of a resonator with (ideally) the same physical characteristics as the first one. Then the difference of the frequencies $\Delta \nu := \nu _{\mathrm{eigen}} - \nu _{\mathrm{ref}}$ appears as the carrier frequency of the resulting beat. If the second resonator is oriented parallel to the background field and thus not influenced by it, this means that $\Delta \nu = \delta \nu$. For a theoretical discussion of the effect, we assume that $L$ is not changed if the background field is applied. (Of course, for a practical realization of the experiment one has to take into account that the material of the resonator is influenced, e.g., by magnetostriction, but we ignore this here.) Then \begin{equation} \frac{\delta \nu}{\nu}= \frac{\delta v_{\mathrm{P}}}{v_{\mathrm{P}}} \approx \overset{X}\sigma_{1/2}(0)\frac{X^2}{2}\,. \end{equation} In \cite{SH}, by averaging over many measurements it was possible to determine $\delta \nu / \nu$ with an accuracy of $10^{-17}$. If we assume that the same accuracy can be reached in the experiment proposed here, a measurable effect requires that $X$ satisfies \begin{equation}\label{eq:nuacc} \frac{\delta v_{\mathrm{P}}}{v_{\mathrm{P}}} \approx \overset{X}\sigma_{1/2}(0)\frac{X^2}{2} \approx 10^{-17} \, . \end{equation} In the next section we discuss the perspectives of performing such an experiment as a test of particular theories of the Pleba{\'n}ski class. We compare the setup with the background field placed in one arm of a big Michelson interferometer with the setup using optical resonators. In the following, we refer to the first one as the ``large-scale experiment'' and to the second one as the ``small-scale experiment''. 
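Relation (\ref{eq:nuacc}) directly yields the smallest background field that produces a resolvable frequency shift. A short numerical sketch (with a purely illustrative value of $|\overset{X}\sigma_{1/2}(0)|$, not taken from any specific theory):

```python
import math

acc = 1.0e-17        # fractional frequency accuracy reported in [SH]
sigma0 = 1.0e-24     # illustrative |sigma(0)|, inverse field-strength squared

def dnu_over_nu(X):
    """Fractional frequency shift for a perpendicular background field X."""
    return sigma0 * X**2 / 2

# smallest detectable field, from |sigma(0)| X^2 / 2 ~ acc
X_min = math.sqrt(2 * acc / sigma0)

assert abs(dnu_over_nu(X_min) - acc) < 1e-3 * acc
assert dnu_over_nu(0.5 * X_min) < acc   # weaker fields stay below the accuracy
```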
\section{Application to special theories of the Pleba{\'n}ski class}\label{konkretes} \subsection{Born-Infeld theory}\label{subsec:BI} In the case of the Born-Infeld theory, Lagrangian and Hamiltonian are given by~\cite{BornInfeld1934} \begin{equation}\label{BILagrangeFkt1} \mathcal{L}(F,G)=-b_0^2\sqrt{1+\frac{F}{b_0^2}-\frac{G^2}{b_0^4}}+b_0^2 \, , \end{equation} \begin{equation}\label{BILagrangeFkt2} \mathcal{H}(R,S)=b_0^2\sqrt{1+\frac{R}{b_0^2}-\frac{S^2}{b_0^4}}-b_0^2 \, , \end{equation} where $b_0$ is a new constant of Nature with the dimension of a field strength. The constitutive law reads \begin{equation}\label{BIFeldstaerke1} H^{ab}=\frac{F^{ab}- \frac{G}{b_0^2} \tilde F{}^{ab}}{\sqrt{1+\frac{F}{b_0^2}-\frac{G^2}{b_0^4}}} \end{equation} which can be solved for the field strength, \begin{equation}\label{BIFeldstaerke2} F^{mn}=\frac{H^{mn}+\frac{S}{b_0^2}\,\tilde H{}^{mn}}{\sqrt{1+\frac{R}{b_0^2}-\frac{S^2}{b_0^4}}}\,. \end{equation} The invariants $R$ and $S$ are given in terms of $F$ and $G$ by \begin{gather} \frac{R}{b_0^2}=\frac{-\frac{F}{b_0^2}+4\frac{G^2}{b_0^4} +\frac{F}{b_0^2}\frac{G^2}{b_0^4}}{1+\frac{F}{b_0^2}-\frac{G^2}{b_0^4}} \, ,\\[0.1cm] S=-G \, , \end{gather} which implies \begin{equation} \frac{1+\frac{F}{b_0^2}-\frac{G^2}{b_0^4}}{1+\frac{G^2}{b_0^4}} =\frac{1+\frac{S^2}{b_0^4}}{1+\frac{R}{b_0^2}-\frac{S^2}{b_0^4}}\, . \end{equation} This leads to \begin{eqnarray} \nonumber&&\mathcal{L}_{FF}=-\frac{2\,\mathcal{L}_F^3}{b_0^2}\,, \quad \mathcal{L}_{FG}=\frac{4\,\mathcal{L}_F^3G}{b_0^4}\,,\\ &&\mathcal{L}_{GG}=-\frac{2\,\mathcal{L}_F}{b_0^2}-\frac{8\,\mathcal{L}_F^3G^2}{b_0^6}\, ,\\ \nonumber \quad&&\mathcal{H}_{RR}=-\frac{2\,\mathcal{H}_R{}^3}{b_0^2}\,, \quad \mathcal{H}_{RS}=\frac{4\,\mathcal{H}_R{}^3S}{b_0^4}\,,\\ &&\mathcal{H}_{SS}=-\frac{2\,\mathcal{H}_R}{b_0^2}-\frac{8\,\mathcal{H}_R{}^3 S^2}{b_0^6}\, . 
\end{eqnarray} Therefore one gets for the functions $\sigma_{1/2}$ and $\hat\sigma_{1/2}$, which give the deviation from the standard Maxwell vacuum theory, \begin{equation}\label{eq:sigmaBI} \begin{split} &\sigma_1=\sigma_2=-\frac{1}{b_0^2+F}=-\frac{1}{b_0^2}+\cdots\\ \quad&\hat \sigma_1=\hat \sigma_2=-\frac{1}{b_0^2+R}=-\frac{1}{b_0^2}+\, \dots \end{split} \end{equation} It is worth noting that $\sigma_1=\sigma_2$ holds not only in the Born-Infeld theory and in the standard vacuum Maxwell theory but also in any other theory whose Lagrangian differs from them only by a term linear in $G$. (Such theories, however, are often excluded because they are not invariant under parity transformations.) Additionally, one can calculate the phase velocity and the line shift for the four static cases: \begin{description} \item[Cases a) with $\mathbf{E=0}$ and d) with $\mathbf{H=0}$] \quad\\ Here we discuss the case of a magnetostatic field strength and the case of an electrostatic excitation together. If we use the abbreviation $Y=B,D$, we find \begin{equation} \overset{Y}\sigma_1=\overset{Y}\sigma_2=\frac{-1}{b_0^2+Y^2} \, , \end{equation} hence \begin{equation} v_{\mathrm{S}}=v_{\mathrm{P}}= \dfrac{c}{\sqrt{1+\frac{Y^2}{b_0^2}}} = c\left(1-\frac{Y^2}{2b_0^2}+\cdots\right) . \!\!\!\!\!\!\! \end{equation} The limit $Y\rightarrow0$ yields $v_{\mathrm{S}}=v_{\mathrm{P}}\rightarrow c$ as it has to. By contrast, the limit $Y\rightarrow \infty$ yields $v_{\mathrm{S}}=v_{\mathrm{P}}\rightarrow 0$, so one may say that the background field $Y$ slows down the light ray. There is no upper bound for $Y=B,D$. 
The line shift is given by \begin{equation} \begin{split} \Delta&=\frac{2l_{BF}}{\lambda}\left(1-\sqrt{1+\frac{Y^2}{b_0^2}} \right)\\ &=-\frac{l_{BF}}{\lambda}\frac{Y^2}{b_0^2}+\, \dots \end{split} \end{equation} \item[Cases b) with $\mathbf{B=0}$ and c) with $\mathbf{D=0}$] \quad\\ Here we discuss the case of an electrostatic field strength and the case of a magnetostatic excitation together. If we use the abbreviation $Z=E,H$, we find \begin{equation} \overset{Z}\sigma_1=\overset{Z}\sigma_2=\frac{1}{Z^2-b_0^2} \, , \end{equation} hence \begin{equation} v_{\mathrm{S}}=v_{\mathrm{P}}= c \, \sqrt{1 - \frac{Z^2}{b_0^2}} =c\left(1-\frac{Z^2}{2b_0^2}+\cdots\right). \!\!\!\!\!\!\!\!\!\! \end{equation} Again, the limit $Z\rightarrow0$ yields $v_{\mathrm{S}}=v_{\mathrm{P}} \rightarrow c$. In contrast to the cases above, this one leads to an upper bound for $Z$. This is obvious because for $Z\rightarrow b_0$ one gets $v_{\mathrm{S}}=v_{\mathrm{P}} \rightarrow 0$. In analogy to the other cases, the background field slows down the light ray. For background fields $Z>b_0$ one would get an imaginary phase velocity, so one has to conclude that $Z\leq b_0$. The line shift is given by \begin{equation} \begin{split} \Delta&=\frac{2l_{BF}}{\lambda} \left(1- \dfrac{1}{\sqrt{1 - \frac{Z^2}{b_0^2}}} \right) \\ &=-\frac{l_{BF}}{\lambda}\frac{Z^2}{b_0^2}+\, \dots \end{split} \end{equation} \end{description} We may combine the first-order approximations of all preceding cases into two formulas, one for the velocity and one for the line shift: \begin{equation} v_{\mathrm{S}}=v_{\mathrm{P}} \approx c\left(1-\frac{X^2}{2b_0^2}\right)\quad\text{and}\quad\Delta\approx-\frac{l_{BF}}{\lambda}\frac{X^2}{b_0^2}\,. \end{equation} To give an example, we calculate the line shift for the large-scale experiment for some specific values. We assume an accuracy of about $10^{-6}$ line shifts. 
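Before turning to specific numbers, the combined first-order formulas can be checked against the exact Born-Infeld expressions; the parameter values below are illustrative (units with $c=1$):

```python
import math

b0 = 1.0
X = 1.0e-3 * b0      # weak background field compared to the Born-Infeld constant
c, l_BF, lam = 1.0, 1.0e2, 1.0e-6   # units with c = 1; illustrative lengths

# exact Born-Infeld phase velocity for Y = B, D (perpendicular field)
v_exact = c / math.sqrt(1 + X**2 / b0**2)
# first-order approximation v ~ c (1 - X^2 / 2 b0^2)
v_approx = c * (1 - X**2 / (2 * b0**2))
assert abs(v_exact - v_approx) < c * X**4 / b0**4

# exact and first-order line shift
Delta_exact = 2 * l_BF / lam * (1 - math.sqrt(1 + X**2 / b0**2))
Delta_approx = -l_BF / lam * X**2 / b0**2
assert abs(Delta_exact - Delta_approx) < abs(Delta_approx) * X**2 / b0**2
```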
For $l_{BF}=100\,\mathrm{m}$ and $\lambda = 1000\,\mathrm{nm}$ one gets: \begin{equation} \Delta=-\frac{l_{BF}}{\lambda}\frac{X^2}{b_0^2}\approx-10^8 \,\frac{X^2}{b_0^2}\,. \end{equation} With these values one sees an effect if \begin{equation}\label{eq:Xest} X \gtrsim 10^{-7} \, b_0 \, . \end{equation} Born and Infeld conjectured that \begin{equation} b_0=\frac{e}{r_e^2} \approx 6 \times 10^{15}\,\frac{\sqrt{\mathrm g}}{\sqrt{\mathrm{cm}}\,\mathrm s} \end{equation} where $e$ is the electron charge and $r_e$ is the classical electron radius. Although this is only of historical interest, we remark that the corresponding line shift would be \begin{equation} \Delta\approx - \frac{10^{-22}}{36}\,X^2\,\frac{\mathrm{cm}\,\mathrm{s}^2}{\mathrm{g}}\,. \end{equation} If this were true, we would need a field strength or an excitation of \begin{equation} X \gtrsim 6 \times 10^{8}\,\frac{\sqrt{\mathrm{g}}}{\sqrt{\mathrm{cm}}\,\mathrm{s}} \end{equation} to see an effect. For a magnetic field strength, $X = B$, this would correspond to $B_{\mathrm{SI}} \gtrsim 6 \times 10^4\,\mathrm{T}$ in SI units, see (\ref{eq:SI}). Clearly, this is not achievable in the foreseeable future. It is more interesting to see what lower bound on $b_0$ one could get from an experiment. Let us assume that \begin{equation}\label{eq:Xest2} X\approx 10^{4}\,\frac{\sqrt{\mathrm{g}}}{\sqrt{\mathrm{cm}}\,\mathrm{s}} \end{equation} which corresponds to $B_{\mathrm{SI}}= 1\, \mathrm{T}$ according to (\ref{eq:SI}). This is not an unrealistic value for a magnetic field to be produced in a laboratory. Then a null result of our large-scale experiment would imply, according to (\ref{eq:Xest}), that \begin{equation} b_0 \gtrsim 1 \times 10^{11}\,\frac{\sqrt{\mathrm{g}}}{\sqrt{\mathrm{cm}}\,\mathrm{s}}\,. \end{equation} For the small-scale experiment we suggest a lower field strength of $300\,\mathrm{mT}$ to prevent magnetostriction. 
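These estimates follow from simple arithmetic in Gaussian units; as a sketch, the large-scale bound from (\ref{eq:Xest}) and the small-scale bound from (\ref{eq:nuacc}) with $\sigma_{1/2}(0)=-1/b_0^2$ can be reproduced as follows:

```python
import math

# --- large-scale setup: l_BF = 100 m, lambda = 1000 nm, accuracy ~1e-6 line shifts ---
l_BF, lam = 1.0e4, 1.0e-4                      # cm
assert abs(l_BF / lam - 1.0e8) < 1.0           # prefactor in Delta = -1e8 X^2/b0^2

X_over_b0 = math.sqrt(1.0e-6 / (l_BF / lam))   # detection threshold, Eq. (Xest)
assert abs(X_over_b0 - 1.0e-7) < 1e-15

X = 1.0e4                      # Gaussian units; corresponds to B_SI = 1 T
b0_large = X / X_over_b0       # lower bound on b0 implied by a null result
assert abs(b0_large - 1.0e11) < 1.0

# --- small-scale setup: X = 300 mT = 3e3 Gaussian units, accuracy 1e-17 in dv/v ---
X_small = 3.0e3
b0_small = X_small / math.sqrt(2.0e-17)        # from X^2 / (2 b0^2) ~ 1e-17
assert 6.5e11 < b0_small < 7.0e11              # i.e., about 7e11
```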
Then we find from (\ref{eq:nuacc}) and (\ref{eq:sigmaBI}) that \begin{equation} b_0 \gtrsim 7 \times 10^{11} \frac{\sqrt{\mathrm{g}}}{\sqrt{\mathrm{cm}}\,\mathrm{s}}\, \end{equation} which is almost 2 orders of magnitude better than the large-scale experiment. \subsection{Born's theory} Born's Lagrangian~\cite{Born1933} differs from the Born-Infeld Lagrangian by omitting the $G^2$ term, \begin{equation} \mathcal{L}(F)=-b_0^2\sqrt{1+\frac{F}{b_0^2}}+b_0^2\,. \end{equation} This leads to an electrodynamical theory with birefringence, \begin{eqnarray} \sigma_1=0 \, , \qquad \sigma_2=\frac{-1}{b_0^2+F}\,. \end{eqnarray} From the viewpoint of geometrical optics, Born's theory is a hybrid. One polarization mode behaves according to the standard vacuum Maxwell theory and the other one according to the Born-Infeld theory. This means that, if one filters the $\sigma_2$ rays out with a polarization filter, then one sees no difference from Maxwell, and if one filters the $\sigma_1$ rays out, then one sees no difference from Born-Infeld. As a consequence, the results of Sec.~\ref{subsec:BI} are also valid for the $\sigma_2$ rays in Born's theory. \subsection{Series expansions for electrodynamics with arbitrary Lagrangian}\label{ArbLag} If we are interested only in first-order deviations from Maxwell's theory, we may express the Lagrangian in terms of a series expansion with respect to $F/A$ and $G/A$ up to second order, where $A$ is a constant with the dimension of a field strength squared. Introducing $A$ is necessary because only for dimensionless terms is it meaningful to say that they are small without referring to a particular system of units. In the Born-Infeld theory, e.g., we choose $A=b_0^2$.
The series expansion of the Lagrangian reads \begin{equation}\label{AllemLagFkt} \begin{split} &\mathcal{L}=\alpha+\underset{1}\beta\,\frac{F}{A}+\underset{2}\beta\,\frac{G}{A}+\\ &\qquad\qquad+\underset{1}\gamma\left(\frac{F}{A}\right)^2 +\underset{2}\gamma \frac{F\,G}{A^2}+\underset{3}\gamma \left(\frac{G}{A}\right)^2+\, \dots \end{split} \end{equation} Note that $\underset{2}\beta$ and $\underset{2}\gamma$ are zero if the theory is invariant under parity transformations. One can assume the validity of the following bookkeeping system for the smallness of terms, where $\sim$ means that terms are of the same order. \begin{itemize} \item $\alpha\sim\underset{i}\beta\sim\underset{i}\gamma\cdots$ as well as $F^{mn} \sim \tilde{F}{}^{mn}$, hence $F\sim G$. \item $F/A$ and $G/A$ are dimensionless with $F/A\sim G/A$. \item From the first order of $R=R(F,G)$ and $S=S(F,G)$ [cf. (\ref{HamiltonBez_1a}) to (\ref{HamiltonBez_2d})] one gets $\underset{i}\beta F/A\sim AR/\underset{i}\beta$. \end{itemize} With the help of (\ref{HamiltonBez_1a}) to (\ref{HamiltonBez_2d}) one can now calculate $R$ and $S$ as series in $F$ and $G$. Additionally one can then calculate the inverted series, i.e., $F$ and $G$ as series in $R$ and $S$. This then allows us to calculate the Hamiltonian as a function of $R$ and $S$.
The result of this calculation is \begin{equation} \begin{split} &\mathcal{H}=-\alpha+\underset{1}B\,\frac{R}{\hat A}+\underset{2}B\,\frac{S}{\hat A}+ \\ &\qquad\qquad+\underset{1}C\, \left(\frac{R}{\hat A}\right)^2 +\underset{2}C\,\frac{R\,S}{\hat A^2} +\underset{3}C\,\left(\frac{S}{\hat A}\right)^2+\cdots\,, \end{split} \end{equation} with the following coefficients: \begin{eqnarray} \nonumber&&\hat A^{-1}:=\frac{A}{\underset{2}{\beta }^2+4 \underset{1}{\beta }^2}\,;\quad\underset{1}B:=-\underset{1}{\beta }\,;\quad\underset{2}B:=\underset{2}{\beta }\,;\\ \nonumber&&\underset{1}C:=\left(-\underset{2}{\beta }^4 \underset{1}{\gamma }+2 \underset{1}{\beta } \underset{2}{\beta }^3 \underset{2}{\gamma }-4 \underset{1}{\beta }^2 \underset{2}{\beta }^2 \underset{3}{\gamma }+8 \underset{1}{\beta }^2 \underset{2}{\beta }^2 \underset{1}{\gamma }-\right.\\ \nonumber&&\qquad\qquad\left.-8 \underset{1}{\beta }^3 \underset{2}{\beta } \underset{2}{\gamma }-16 \underset{1}{\beta }^4 \underset{1}{\gamma }\right)\Big/\left(\underset{2}{\beta }^2+4 \underset{1}{\beta }^2 \right)^2\,;\\ &&\underset{2}C:=\left(-\underset{2}{\beta }^4 \underset{2}{\gamma }+4 \underset{1}{\beta } \underset{2}{\beta }^3 \underset{3}{\gamma }-16 \underset{1}{\beta } \underset{2}{\beta }^3 \underset{1}{\gamma }+24 \underset{1}{\beta }^2 \underset{2}{\beta }^2 \underset{2}{\gamma }-\right.\\ \nonumber&&\qquad\qquad\left.-16 \underset{1}{\beta }^3 \underset{2}{\beta } \underset{3}{\gamma }+64 \underset{1}{\beta }^3 \underset{2}{\beta } \underset{1}{\gamma }-16 \underset{1}{\beta }^4 \underset{2}{\gamma }\right)\Big/\left( \underset{2}{\beta }^2+4 \underset{1}{\beta }^2 \right)^2\,;\\ \nonumber&&\underset{3}C:=\left(-\underset{2}{\beta }^4 \underset{3}{\gamma }-8 \underset{1}{\beta } \underset{2}{\beta }^3 \underset{2}{\gamma }+8 \underset{1}{\beta }^2 \underset{2}{\beta }^2 \underset{3}{\gamma }-64 \underset{1}{\beta }^2 \underset{2}{\beta }^2 \underset{1}{\gamma }+\right.\\ \nonumber&&\qquad\qquad\left.+32 
\underset{1}{\beta }^3 \underset{2}{\beta } \underset{2}{\gamma }-16 \underset{1}{\beta }^4 \underset{3}{\gamma }\right)\Big/\left( \underset{2}{\beta }^2+4 \underset{1}{\beta }^2 \right)^2\,. \end{eqnarray} This leads to some additions to the bookkeeping system: \begin{itemize} \item From $\underset{i}\beta F/A\sim AR/\underset{i}\beta$ one gets $F/A\sim R/\hat A$ as well as $G/A\sim S/\hat A$. \item It is easy to see that $\alpha\sim\underset{i}\beta\sim\underset{i}\gamma\cdots\sim\underset{i}B\sim\underset{i}C\cdots$ holds. \end{itemize} As a result of this bookkeeping system one sees that, if one can neglect in the Lagrangian terms of a certain order in $F/A$ and $G/A$, then one can neglect in the Hamiltonian terms of the same order in $R/\hat{A}$ and $S/\hat{A}$. For the case of a parity-invariant Lagrangian, $\mathcal{L}(F,G)=\mathcal{L}(F,-G)$, the coefficients of the Hamiltonian become very simple: \begin{equation} \begin{split} &\underset{1}B=-\underset{1}\beta\,;\quad \underset{2}B=-\underset{2}\beta=0\,;\quad\\ &\underset{1}C=-\underset{1}\gamma\,;\quad\underset{2}C=-\underset{2}\gamma=0\,;\quad \underset{3}C=-\underset{3}\gamma\,,\\ &\hat A=\frac{4\underset{1}\beta^2}{A}\,. \end{split} \end{equation} If, in addition, the first-order approximation of the Lagrangian coincides with the standard Maxwell one (\ref{eq:LagMax}) --- which is true if $\underset{1}\beta=- A/2$ --- one gets \begin{equation} \hat A=A\,. \end{equation} This is, in particular, the case for the theories of Born, Born-Infeld, and Heisenberg-Euler. 
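The parity-invariant simplification can be checked by substituting $\underset{2}\beta=\underset{2}\gamma=0$ into the general coefficient formulas above. A symbolic sketch (sympy; the variable names are ours):

```python
import sympy as sp

b1, g1, g3, A = sp.symbols('beta1 gamma1 gamma3 A', positive=True)
b2, g2 = 0, 0  # parity invariance kills the G-odd coefficients

# general coefficients of the Hamiltonian series, as given in the text
den = (b2**2 + 4*b1**2)**2
C1 = (-b2**4*g1 + 2*b1*b2**3*g2 - 4*b1**2*b2**2*g3 + 8*b1**2*b2**2*g1
      - 8*b1**3*b2*g2 - 16*b1**4*g1) / den
C2 = (-b2**4*g2 + 4*b1*b2**3*g3 - 16*b1*b2**3*g1 + 24*b1**2*b2**2*g2
      - 16*b1**3*b2*g3 + 64*b1**3*b2*g1 - 16*b1**4*g2) / den
C3 = (-b2**4*g3 - 8*b1*b2**3*g2 + 8*b1**2*b2**2*g3 - 64*b1**2*b2**2*g1
      + 32*b1**3*b2*g2 - 16*b1**4*g3) / den
Ahat = (b2**2 + 4*b1**2)/A   # from  Ahat^{-1} = A/(beta2^2 + 4 beta1^2)

assert sp.simplify(C1 + g1) == 0            # C_1 = -gamma_1
assert sp.simplify(C2) == 0                 # C_2 = -gamma_2 = 0
assert sp.simplify(C3 + g3) == 0            # C_3 = -gamma_3
assert sp.simplify(Ahat - 4*b1**2/A) == 0   # Ahat = 4 beta1^2 / A
```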
From the Lagrangian (or the Hamiltonian, respectively) one gets the ``deviation coefficients'' $\sigma_{1/2}$ in the zeroth order of approximation with respect to $F$ and $G$ (or $R$ and $S$, respectively): \begin{eqnarray} \nonumber\sigma_{1/2}(0)\,&&=\frac{2 \underset{1}{\gamma }}{A \underset{1}{\beta }}+\frac{ \underset{3}{\gamma }}{2 A \underset{1}{\beta }}\\ \label{sigma1OrdFl}&&\qquad\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{\gamma }^2+4 \underset{2}{\gamma }^2-8 \underset{1}{\gamma } \underset{3}{\gamma }+ \underset{3}{\gamma }^2}{A^2 \underset{1}{\beta }^2}}\\ \nonumber\hat\sigma_{1/2}(0)\,&&=\frac{2 \underset{1}{C}}{\hat A \underset{1}{B}}+\frac{\underset{3}{C}}{2 \hat A\underset{1}{B}}\\ \label{sigma1OrdEr}&&\qquad\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{C}^2+4 \underset{2}{C}^2-8 \underset{1}{C} \underset{3}{C}+ \underset{3}{C}^2}{\hat A{}^2 \underset{1}{B}^2}}\,. \end{eqnarray} Obviously, the zeroth-order approximation of $\sigma_{1/2}$ gives the first-order approximation of the optical metric for the deviation from Maxwell's theory and therefore also of the phase velocity. The Born-Infeld theory, e.g., yields $\sigma_1(0)=\sigma_2(0)=\hat\sigma_1(0)= \hat\sigma_2(0)=-1/b_0^2$, so one recovers the values calculated above. Additionally one sees that the approximation procedure does not destroy the absence of birefringence in the given order of approximation.
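As a cross-check of (\ref{sigma1OrdFl}), one can insert the second-order expansion coefficients of the Born-Infeld Lagrangian. Assuming that, in the conventions used here, this Lagrangian reads $\mathcal{L}=-b_0^2\sqrt{1+F/b_0^2-G^2/b_0^4}+b_0^2$ (Born's Lagrangian above with the $G^2$ term restored), its expansion with $A=b_0^2$ gives $\underset{1}\beta=-b_0^2/2$, $\underset{1}\gamma=b_0^2/8$, $\underset{2}\gamma=0$ and $\underset{3}\gamma=b_0^2/2$. A symbolic sketch (sympy; our variable names):

```python
import sympy as sp

b0 = sp.symbols('b0', positive=True)
A = b0**2
beta1 = -A/2                                        # Maxwellian first order
gamma1, gamma2, gamma3 = A/8, sp.Integer(0), A/2    # Born-Infeld (assumed expansion)

# zeroth-order deviation coefficients from the general formula in the text
mean = 2*gamma1/(A*beta1) + gamma3/(2*A*beta1)
disc = sp.sqrt((16*gamma1**2 + 4*gamma2**2 - 8*gamma1*gamma3 + gamma3**2)
               / (A**2*beta1**2))
sigma1, sigma2 = mean + disc/2, mean - disc/2

assert sp.simplify(sigma1 + 1/b0**2) == 0   # sigma_1(0) = -1/b0^2
assert sp.simplify(sigma2 + 1/b0**2) == 0   # sigma_2(0) = -1/b0^2: no birefringence
```

This reproduces the values $\sigma_1(0)=\sigma_2(0)=-1/b_0^2$ quoted above.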
One gets the results for the four cases described in Sec.~\ref{experiment} if one feeds (\ref{sigma1OrdEr}) and (\ref{sigma1OrdFl}) into (\ref{Allg1OrdPhasenG}) and (\ref{Allg1OrdDelta}): \begin{description} \item[Magnetostatic field strength case ($E=0$)] \begin{equation} \begin{split} v_{\mathrm{P}} =c&+c\,\frac{B^2}{2}\left(\frac{2 \underset{1}{\gamma }}{A \underset{1}{\beta }}+\frac{ \underset{3}{\gamma }}{2 A \underset{1}{\beta }}\right.\\ &\qquad\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{\gamma }^2+4 \underset{2}{\gamma }^2-8 \underset{1}{\gamma } \underset{3}{\gamma }+ \underset{3}{\gamma }^2}{A^2 \underset{1}{\beta }^2}}\right)+\, \dots \end{split} \end{equation} \begin{equation} \begin{split} \Delta=&\frac{l_{BF}\,B^2}{\lambda}\left(\frac{2 \underset{1}{\gamma }}{A \underset{1}{\beta }}+\frac{ \underset{3}{\gamma }}{2 A \underset{1}{\beta }}\right.\\ &\qquad\quad\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{\gamma }^2+4 \underset{2}{\gamma }^2-8 \underset{1}{\gamma } \underset{3}{\gamma }+ \underset{3}{\gamma }^2}{A^2 \underset{1}{\beta }^2}}\right)+\, \dots \end{split} \end{equation} \item[Electrostatic field strength case ($B=0$)] \begin{equation} \begin{split} v_{\mathrm{P}}=c&+c\,\frac{E^2}{2}\left(\frac{2 \underset{1}{\gamma }}{A \underset{1}{\beta }}+\frac{ \underset{3}{\gamma }}{2 A \underset{1}{\beta }}\right.\\ &\qquad\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{\gamma }^2+4 \underset{2}{\gamma }^2-8 \underset{1}{\gamma } \underset{3}{\gamma }+ \underset{3}{\gamma }^2}{A^2 \underset{1}{\beta }^2}}\right)+\, \dots \end{split} \end{equation} \begin{equation} \begin{split} \Delta=&\frac{l_{BF}\,E^2}{\lambda}\left(\frac{2 \underset{1}{\gamma }}{A \underset{1}{\beta }}+\frac{ \underset{3}{\gamma }}{2 A \underset{1}{\beta }}\right.\\ &\qquad\quad\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{\gamma }^2+4 \underset{2}{\gamma }^2-8 \underset{1}{\gamma } \underset{3}{\gamma }+ \underset{3}{\gamma }^2}{A^2 \underset{1}{\beta }^2}}\right)+\, 
\dots \end{split} \end{equation} \item[Magnetostatic excitation case ($D=0$)] \begin{equation} \begin{split} v_{\mathrm{P}} =c&+c\,\frac{H^2}{2}\left(\frac{2 \underset{1}{C}}{\hat A \underset{1}{B}}+\frac{\underset{3}{C}}{2 \hat A\underset{1}{B}}\right.\\ &\,\,\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{C}^2+4 \underset{2}{C}^2-8 \underset{1}{C} \underset{3}{C}+ \underset{3}{C}^2}{\hat A{}^2 \underset{1}{B}^2}}\right)+\, \dots \end{split} \end{equation} \begin{equation} \begin{split} \Delta=&\frac{l_{BF}\,H^2}{\lambda}\left(\frac{2 \underset{1}{C}}{\hat A \underset{1}{B}}+\frac{\underset{3}{C}}{2 \hat A\underset{1}{B}}\right.\\ &\quad\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{C}^2+4 \underset{2}{C}^2-8 \underset{1}{C} \underset{3}{C}+ \underset{3}{C}^2}{\hat A{}^2 \underset{1}{B}^2}}\right)+\, \dots \end{split} \end{equation} \item[Electrostatic excitation case ($H=0$)] \begin{equation} \begin{split} v_{\mathrm{P}} =c&+c\,\frac{D^2}{2}\left(\frac{2 \underset{1}{C}}{\hat A \underset{1}{B}}+\frac{\underset{3}{C}}{2 \hat A\underset{1}{B}}\right.\\ &\,\,\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{C}^2+4 \underset{2}{C}^2-8 \underset{1}{C} \underset{3}{C}+ \underset{3}{C}^2}{\hat A{}^2 \underset{1}{B}^2}}\right)+\, \dots \end{split} \end{equation} \begin{equation} \begin{split} \Delta=&\frac{l_{BF}\,D^2}{\lambda}\left(\frac{2 \underset{1}{C}}{\hat A \underset{1}{B}}+\frac{\underset{3}{C}}{2 \hat A\underset{1}{B}}\right.\\ &\quad\left.\pm\frac{1}{2} \sqrt{\frac{16 \underset{1}{C}^2+4 \underset{2}{C}^2-8 \underset{1}{C} \underset{3}{C}+ \underset{3}{C}^2}{\hat A{}^2 \underset{1}{B}^2}}\right)+\, \dots \end{split} \end{equation} \end{description} In principle it is easy to obtain further orders of approximation, but the resulting terms are expected to be very small and will not be worked out here. \subsection{The Heisenberg-Euler theory} Here we give an example of the procedure described in the preceding section.
For small values of the field strength the Heisenberg-Euler theory can be described by the following Lagrangian~\cite{GD,HeisenbergEuler1936} which results from a series expansion with respect to $F$ and $G$: \begin{equation}\label{eq:HEL} \begin{split} &\mathcal{L}=E_0^2\left\{-\frac12\,\frac{F}{E_0^2}+ \Lambda\left(\frac{F^2}{E^4_0}+7\,\frac{G^2}{E^4_0}\right)\right\} \end{split} \end{equation} where \begin{gather} \Lambda=\frac{\hbar c}{90\pi e^2} = 0.7363 \\ E_0=\frac{m^2 c^4}{e^3} = 6.048 \times 10^{15}\,\frac{\sqrt{\mathrm g}}{\sqrt{\mathrm{cm}}\,\mathrm s} \, . \end{gather} Here $e$ is the electron charge, $m$ is the electron mass, $c$ is the speed of light and $\hbar$ is the reduced Planck constant. So the coefficients in (\ref{AllemLagFkt}) are \begin{equation} \begin{split} &\alpha=0\,,\quad\underset{1}\beta=-\underset{1}B=-\frac{E_0^2}{2}\,,\quad\underset{2}\beta=0\,,\quad\\ &\underset{1}\gamma=-\underset{1}C=\Lambda\,E_0^2\,,\quad\underset{2}\gamma=-\underset{2}C=0\,,\quad\\ &\underset{3}\gamma=-\underset{3}C=7\Lambda\,E_0^2\,, \end{split} \end{equation} and \begin{equation} \hat A=A=E_0^2\, . \end{equation} Hence \begin{equation} \begin{split} &\sigma_1(0)=\hat\sigma_1(0)=-\frac{14\Lambda}{E_0^2}\,,\\ &\sigma_2(0)=\hat\sigma_2(0)=-\frac{8\Lambda}{E_0^2} \, .
\end{split} \end{equation} For the four possibilities for the background field described in Sec.~\ref{experiment} we get, using again the abbreviation $X=E,D,B,H$, \begin{equation} \begin{split} v_{\mathrm{P}} (\sigma_1)&=c\left(1-\frac{7\Lambda\,X^2}{E_0^2}\right)+\cdots\\ v_{\mathrm{P}} (\sigma_2)&=c\left(1-\frac{4\Lambda\,X^2}{E_0^2}\right)+\cdots\,\\ \Delta(\sigma_1)&=-\frac{14 \,\Lambda\,l_{BF}X^2}{\lambda\,E_0^2}+\cdots\\ \Delta(\sigma_2)&=-\frac{8 \,\Lambda\,l_{BF}X^2}{\lambda\,E_0^2}+\, \dots \end{split} \end{equation} Using the same setup as before for the large-scale experiment, with $l_{BF}=100\,\mathrm{m}$ and $\lambda = 1000 \,\mathrm{nm}$ one gets \begin{equation} \begin{split} \Delta(\sigma_1)\approx - 2 \times 10^{-23}\,X^2\,\frac{\mathrm{cm}\,\mathrm{s}^2}{\mathrm{g}}\,, \\ \Delta(\sigma_2)\approx - 1\times 10^{-23}\,X^2\,\frac{\mathrm{cm}\,\mathrm{s}^2}{\mathrm{g}}\,. \end{split} \end{equation} Therefore one needs a field strength or an excitation of \begin{equation} X \gtrsim 3 \times 10^{8}\, \frac{\sqrt{\mathrm g}}{\sqrt{\mathrm{cm}}\,\mathrm s} \end{equation} to see any effect. This is clearly not achievable with present or near-future instruments. A similar calculation shows that for the small-scale experiment a field about 2 orders of magnitude smaller would be sufficient. However, even in this case one would need a field of more than $10^{6}\, \frac{\sqrt{\mathrm g}}{\sqrt{\mathrm{cm}}\,\mathrm s}\hat=100\,\mathrm{T}$ to see an effect. \section{Conclusions and Discussion} Since Born and Infeld created their ``new field theory'' of electromagnetism \cite{BornInfeld1934}, different nonlinear modifications of vacuum electrodynamics on the basis of a Lagrangian $\mathcal{L} (F,G)$ have been discussed, where usually one considers only those theories that reproduce the standard vacuum Maxwell theory in sufficiently weak fields. 
All these new electrodynamical theories have in common that they predict that light travels along the null cones of two optical metrics, one for each polarization state, where at least one of them differs from the vacuum Maxwell light-cone. At the same time they introduce at least one new dimensionful constant of Nature. While in the standard vacuum Maxwell theory the superposition principle holds, this is no longer true in other $\mathcal{L}(F,G)$ theories. As a consequence, an electromagnetic background field would have an effect on the propagation of electromagnetic waves and thus, in particular, on the phase velocity of light. This is reflected by the fact that the optical metrics depend on the background field. The best technique for measuring small changes in the phase velocity of light with high accuracy is interferometry. In this paper we worked out the mathematical details for using interferometry as a test of $\mathcal{L}(F,G)$ theories. In cases where the constants of Nature that enter into the theory are known, as, e.g., in the Heisenberg-Euler theory, an interferometric experiment could be used for confirming the theory by verifying the prediction. If instead the constants of Nature that enter into the theory are not known, as, e.g., in the Born-Infeld theory, a null result of the experiment would give bounds on these constants. Our estimates demonstrate that, with realistic (magnetic) fields, an interferometric experiment could place significant bounds on the Born-Infeld constant $b_0$. Unfortunately, in the case of the Heisenberg-Euler theory our estimates seem to indicate that a confirmation of the theory is not realizable with electromagnetic fields that can be achieved in present-day experiments. However, it might be possible to considerably enhance the sensitivity by using time-dependent background fields, rather than the static fields we have considered for our numerical estimates.
For the case of testing the Heisenberg-Euler theory with an interferometer of the size of a gravitational wave detector, this possibility was discussed in detail recently by Grote \cite{Grote}. The idea is to change the background field periodically with a frequency $\omega$, e.g. by rotating a permanent magnet. As long as $\omega$ is small in comparison to the frequency of the laser light used in the interferometer, our equations could still be used for this situation in the sense of an adiabatic approximation. If the laser light is polarized, rotating the background field would lead to a periodically varying signal according to any theory that predicts birefringence \textit{in vacuo}. (Unfortunately, this excludes the Born-Infeld theory.) By choosing long integration times --- Grote suggests running the experiment for a year --- one could improve the statistics in such a way that it might be possible to reach the sensitivity for testing the Heisenberg-Euler theory. A similar analysis has not been carried through for the small-scale experiment so far. We will leave this for other authors, as it goes beyond the scope of the present paper, which was to lay the theoretical foundations of the experiment in the context of an arbitrary $\mathcal{L} (F,G)$ theory. Finally, we add a remark on pulsed background fields. Pulsed magnetic fields and also laser pulses (pulsed null fields) can be produced with considerably higher field strengths than static or slowly varying fields. For example, pulsed magnetic fields of $\approx 100\,\mathrm{T}$ have already been produced in the laboratory. However, these fields persist only for short times, so the adiabatic approximation would not be valid, which makes the theory considerably more difficult. Moreover, there are several technical obstacles. For example, we see major experimental difficulties towards a realization of the small-scale experiment with (pulsed) magnetic fields of $\approx 100 \, \text{T}$ because of magnetostriction.
Also, for the experiment with a pulsed null field as a background one would wish to have the pulse traveling in the same direction as the laser beam in the interferometer, to make sure that the latter does not deviate from a straight line. This cannot be done without changing the geometry of the interferometer, neither for the small-scale nor for the large-scale experiment. For these reasons, we have restricted our specific calculations to time-independent background fields (which includes the case of slowly varying fields in the sense of an adiabatic approximation). \section*{Acknowledgments} G.S. wishes to thank Evangelisches Studienwerk Villigst for supporting him with a Ph.D. stipend during the course of this work. V.P. is grateful to Deut\-sche Forschungsgemeinschaft for financial support under Grant No. LA 905/14-1. Moreover, we gratefully acknowledge support from the Deutsche Forschungsgemeinschaft within the Research Training Group 1620 ``Models of Gravity''. We also thank Sven Herrmann for helpful discussions on the experimental aspects of the subject and an anonymous referee for directing our attention to some important references. \section*{Appendix: Hamiltonian formalism in terms of the excitation} First we give a necessary and sufficient condition for the constitutive law (\ref{eq:Hab}) to be locally solvable for $F^{ab}$. By the implicit function theorem, this is true if the Jacobian of the map from the field strength 6-vector to the excitation 6-vector is nonzero. After dividing by the factor $\left(4 \mathcal{L}_F^2 +\mathcal{L}_G^2\right){}^2$, which is nonzero unless the Lagrangian is constant and thus trivial, we find that this condition reads \begin{gather}\label{eq:Jac} 4 \mathcal{L}_F^2 \! +\mathcal{L}_G^2- 4 \left(F^2 \! +4 G^2\right) \mathcal{L}_{FF} \mathcal{L}_{GG} +4 \left(F^2 \!
+4 G^2\right) \mathcal{L}_{FG}^2 \nonumber \\ +8 F \mathcal{L}_F \mathcal{L}_{FF} +16 G \mathcal{L}_F \mathcal{L}_{FG} +4 F \mathcal{L}_{GG} \mathcal{L}_G -2 F \mathcal{L}_F \mathcal{L}_{GG} \nonumber \\ -8 G \mathcal{L}_{FF} \mathcal{L}_G +2 G \mathcal{L}_G \mathcal{L}_{GG} \neq0 \, . \end{gather} It is easy to see that this condition is satisfied, for all field configurations, in the Born theory and also in the Born-Infeld theory. For the Heisenberg-Euler Lagrangian (\ref{eq:HEL}) it is true as well, where we have to observe that this second-order theory is valid only as long as the magnitude of the field strength is small in comparison to $E_0$. Whenever the constitutive law (\ref{eq:const}) can be solved for $F_{mn}$, we can pass to a Hamiltonian description by a Legendre transformation (\ref{eq:Hamilton}). In this appendix we derive some relevant equations of the Hamiltonian formalism that will be used in the body of the paper, based on an analogous formalism that was already developed by Born and Infeld~\cite{BornInfeld1934} for their special theory. From (\ref{eq:Hamilton}) and (\ref{eq:const}) we find \begin{equation}\label{eq:constH} \dfrac{\partial \mathcal{H}}{\partial H^{ij}} = - F_{ij} \end{equation} which is the Hamiltonian version of the constitutive law. In the case of vanishing sources, $j^m = 0$, the Maxwell equations read \begin{equation}\label{eq:Lvac} \partial_nH^{mn}=0\quad\text{and}\quad\partial_{[a} F_{bc]}=0 \, . \end{equation} These two equations can be equivalently rewritten as \begin{equation}\label{eq:Hvac} \partial_{[a}\tilde H_{bc]}=0\quad\text{and}\quad\partial_n\tilde F{}^{mn}=0 \, . \end{equation} Comparison of (\ref{eq:const}) and (\ref{eq:Lvac}) on one side and (\ref{eq:constH}) and (\ref{eq:Hvac}) on the other side demonstrates that the source-free theory is invariant under a duality rotation \begin{equation}\label{eq:duality} F^{mn} \hookrightarrow \tilde H{}^{mn} \, , \quad \mathcal{L} \hookrightarrow \mathcal{H} \, .
\end{equation} In 3-vector notation, $F^{mn} \hookrightarrow \tilde H{}^{mn}$ means $E_\alpha\hookrightarrow H_\alpha$ and $B_\alpha\hookrightarrow -D_\alpha$. Clearly, $F^{mn} \hookrightarrow \tilde H{}^{mn}$ implies \begin{equation}\label{DualRotTab} \begin{split} \tilde F{}^{mn}\hookrightarrow -H{}^{mn} \, , \quad F\hookrightarrow R \, , \quad G\hookrightarrow S\,. \end{split} \end{equation} If we start from the Lagrangian $\mathcal{L}(F_{mn})$ and work out all relevant equations of the theory in terms of the field strength, we get the relevant equations in terms of the excitation simply by applying the replacements (\ref{eq:duality}) and (\ref{DualRotTab}). Note that this method works only in the case of vanishing sources, $j^m=0$, but for any Lagrangian $\mathcal{L} (F_{mn})$ for which the constitutive law (\ref{eq:const}) can be solved for $F_{mn}$. We now specialize to a Lagrangian of the Pleba{\'n}ski class. We recall that in this case the constitutive law reads \begin{equation}\label{HamiltonBez_1a} H^{ab}=-2\,\mathcal{L}_F\,F^{ab}+\mathcal{L}_G\,\tilde F^{ab}\, . \end{equation} Similarly, (\ref{eq:constH}) specializes to \begin{equation}\label{HamiltonBez_1b} F_{ab}=2\,\mathcal{H}_R\,H_{ab}-\mathcal{H}_S\,\tilde H_{ab} \, . \end{equation} Inserting (\ref{HamiltonBez_1a}) into (\ref{eq:Hamilton}) yields \begin{equation}\label{HamiltonBez_2a} \mathcal{H}(R,S)=2\mathcal{L}_F F +2 \mathcal{L}_G G-\mathcal{L}(F,G) \, \end{equation} while inserting (\ref{HamiltonBez_1b}) into (\ref{eq:Hamilton}) yields \begin{equation}\label{HamiltonBez_2b} \mathcal{H}(R,S)=2 \mathcal{H}_R R +2 \mathcal{H}_SS- \mathcal{L}(F,G) \, . \end{equation} From these two equations we read off that \begin{equation}\label{HamiltonBez_1c} \mathcal{L}_FF+\mathcal{L}_GG=\mathcal{H}_RR+\mathcal{H}_SS \, .
\end{equation} Also, from (\ref{HamiltonBez_1a}) we find immediately that \begin{equation}\label{HamiltonBez_2c} \begin{split} R=\left(-4\mathcal{L}_F^2+\mathcal{L}_G^2\right)F -8\mathcal{L}_F\mathcal{L}_GG \, , \\ S=\left(-4\mathcal{L}_F^2+\mathcal{L}_G^2\right)G +2\mathcal{L}_F\mathcal{L}_GF \, . \end{split} \end{equation} Similarly, from (\ref{HamiltonBez_1b}) we find that \begin{equation}\label{HamiltonBez_2d} \begin{split} F=\left(-4\mathcal{H}_R^2+\mathcal{H}_S^2\right)R- 8\mathcal{H}_R\mathcal{H}_SS\, , \\ G=\left(-4\mathcal{H}_R^2+\mathcal{H}_S^2\right)S +2\mathcal{H}_R\mathcal{H}_SR\,. \end{split} \end{equation} In Sec.~\ref{ArbLag} the equations (\ref{HamiltonBez_1a}) to (\ref{HamiltonBez_2d}) are used for calculating series expansions of the Lagrangian and the Hamiltonian theory up to second order in $F$ and $G$. This enables one to calculate the first post-Maxwellian results of the discussed experiment for an arbitrary Lagrangian of the Pleba{\'n}ski class.
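The solvability condition (\ref{eq:Jac}) stated at the beginning of this appendix can be verified symbolically for a concrete Lagrangian. A sketch for Born's theory (sympy; the variable names are ours):

```python
import sympy as sp

F, G, b0 = sp.symbols('F G b0', positive=True)
L = -b0**2*sp.sqrt(1 + F/b0**2) + b0**2   # Born's Lagrangian (independent of G)

LF, LG = sp.diff(L, F), sp.diff(L, G)
LFF, LGG, LFG = sp.diff(L, F, 2), sp.diff(L, G, 2), sp.diff(LF, G)

# left-hand side of the solvability condition quoted in the appendix
cond = (4*LF**2 + LG**2 - 4*(F**2 + 4*G**2)*LFF*LGG + 4*(F**2 + 4*G**2)*LFG**2
        + 8*F*LF*LFF + 16*G*LF*LFG + 4*F*LGG*LG - 2*F*LF*LGG
        - 8*G*LFF*LG + 2*G*LG*LGG)

# for Born's theory the condition collapses to (1 + F/b0^2)^(-2),
# which is manifestly nonzero for all admissible field configurations
assert sp.simplify(cond - (1 + F/b0**2)**(-2)) == 0
```

So the constitutive law of Born's theory is indeed solvable for all field configurations, as claimed.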
\section{Introduction} Finding new examples of compact manifolds admitting Riemannian metrics of positive sectional curvature is one of the central problems in Riemannian geometry. In the homogeneous setting, the problem is to classify positively curved Riemannian homogeneous spaces, and this has been achieved in several classical works in this field; see \cite{Ber61}, \cite{Wallach1972}, \cite{AW75} and \cite{BB76}. Notice that in \cite{Ber61}, M. Berger missed one space in his classification of positively curved normal homogeneous spaces, as pointed out by B. Wilking in \cite{Wi1999}. In the classification of odd dimensional positively curved Riemannian homogeneous spaces by L. B\'{e}rard Bergery \cite{BB76}, a gap was recently found by the first-named author of this paper and J.A. Wolf, and it has been corrected by B. Wilking; see \cite{XW2015}. Based on some methods developed in \cite{Wi2006}, B. Wilking and W. Ziller provided an alternative and modern proof of the classification in \cite{BB76} in their recent preprint \cite{WZ2015}. In homogeneous Finsler geometry, the following problem is of great significance: \begin{problem}\label{main-problem} Classify the smooth coset spaces $G/H$ admitting a $G$-invariant Finsler metric of positive flag curvature. \end{problem} For simplicity, we will call a homogeneous space {\it positively curved} when it admits an invariant Finsler metric of positive flag curvature, or if it has been endowed with such a metric. By the Bonnet-Myers Theorem for Finsler spaces, a positively curved homogeneous space must be compact. Problem \ref{main-problem} was first studied by S. Deng and Z. Hu in \cite{HD11}, where they classified homogeneous Randers metrics with positive flag curvature and vanishing S-curvature. Note that their classification is also valid for homogeneous $(\alpha,\beta)$-spaces with positive flag curvature and vanishing S-curvature \cite{XD2015}.
Recently, significant progress has been made on the classification in greater generality. In \cite{XD2014}, the authors of this paper classified positively curved normal homogeneous Finsler spaces, generalizing the classical results of \cite{Ber61}. In the joint work of the authors with L. Huang and Z. Hu \cite{XDHH2014}, we classified even dimensional positively curved homogeneous Finsler spaces, generalizing the results of \cite{Wallach1972}. It should be noted that a very useful flag curvature formula for homogeneous Finsler spaces has been established in \cite{XDHH2014} (see Theorem \ref{flag-curvature-formula-thm} below). In this paper, we will apply this formula to the classification of odd dimensional positively curved homogeneous Finsler spaces. The general theme for the classification has been set up in \cite{XD2014}. Recall that for a positively curved homogeneous Finsler space $(G/H,F)$ with a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ of the compact Lie algebra $\mathfrak{g}$, and a fundamental Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ (i.e., $\mathfrak{t}\cap\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{h}$), we divide our discussion into the following three cases: \begin{description} \item{\bf Case I.} Each root plane of $\mathfrak{h}$ is a root plane of $\mathfrak{g}$. \item{\bf Case II.} There exist two roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from different simple factors, such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. \item{\bf Case III.} There exists a linearly independent pair of roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from the same simple factor, such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. \end{description} The classification is only up to local isometry.
So we introduce the definition of equivalence (see Subsection 2.5) for coset spaces to specify some typical procedures which result in local isometries, such as changing $G$ to its covering group, changing $H$ while fixing its identity component, cancelling common product factors from $G$ and $H$, replacing $H$ with $\sigma(H)$, where $\sigma$ is an isomorphism of $G$, and so on. This method greatly reduces the complexity of the statement and the proofs of the classification. In this paper we shall consider the classification of odd dimensional reversible homogeneous Finsler spaces with positive flag curvature. Our motivation to consider reversible metrics is twofold. On one hand, in our practical use of the flag curvature formula in Theorem \ref{flag-curvature-formula-thm}, we have found that the discussion will be much simpler under the assumption that the metric is reversible. On the other hand, restriction to reversible Finsler metrics does not lose much generality, since this class contains all the Riemannian ones. In particular, with some more detailed discussion according to the remark in Subsection 6.3, the classification result in this paper can cover that in the Riemannian case given by L. B\'{e}rard-Bergery. The main results of this paper are Theorems \ref{mainthm-part-1}, \ref{mainthm-part-2} and \ref{mainthm-part-3}, and these theorems can be summarized as follows. \begin{theorem} \label{mainthm} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space. Then we have the following: \begin{description} \item{\rm (1)} If it belongs to Case I, then up to equivalence, either $G$ is a compact simple Lie group or $G/H$ is one of the homogeneous spheres $S^{2n-1}=\mathrm{U}(n)/\mathrm{U}(n-1)$ and $S^{4n-1}=\mathrm{Sp}(n)\mathrm{U}(1)/\mathrm{Sp}(n-1)\mathrm{U}(1)$, $n>1$, or the $\mathrm{U}(3)$-homogeneous Aloff-Wallach's spaces.
\item{\rm (2)} If it belongs to Case II, then up to equivalence, $G/H$ is one of the homogeneous spheres $S^3=\mathrm{SO}(4)/\mathrm{SO}(3)$ and $S^{4n-1}=\mathrm{Sp}(n)\mathrm{Sp}(1)/\mathrm{Sp}(n-1)\mathrm{Sp}(1), n>1$, or Wilking's space $\mathrm{SU}(3)\times\mathrm{SO}(3)/\mathrm{U}(2)$. \item{\rm (3)} If it belongs to Case III, then up to equivalence, $G/H$ is one of the homogeneous spheres $S^{2n-1}=\mathrm{SO}(2n)/\mathrm{SO}(2n-1), n>2$, $S^7=\mathrm{Spin}(7)/\mathrm{G}_2$, and $S^{15}=\mathrm{Spin}(9)/\mathrm{Spin}(7)$, or one of the Berger's spaces $\mathrm{SU}(5)/\mathrm{Sp}(2)\mathrm{U}(1)$ and $\mathrm{Sp}(2)/\mathrm{SU}(2)$. \end{description} \end{theorem} Notice that any invariant Finsler metric on the coset space $S^{2n-1}=\mathrm{SO}(2n)/\mathrm{SO}(2n-1)$ or $S^7=\mathrm{Spin}(7)/\mathrm{G}_2$ must be the standard Riemannian metric of positive constant curvature. On the other hand, as pointed out in \cite{HD11} and \cite{XD2015}, the Aloff-Wallach's spaces admit non-Riemannian homogeneous Randers metrics or $(\alpha,\beta)$-metrics with positive flag curvature and vanishing S-curvature. Moreover, any of the other coset spaces listed in Theorem \ref{mainthm} admits a non-Riemannian positively curved normal homogeneous Finsler metric; see \cite{XD2014}. Though it is not clearly stated in the literature, the reversibility can be easily fulfilled for each of the non-Riemannian cases. Theorems \ref{mainthm-part-1}, \ref{mainthm-part-2} and \ref{mainthm-part-3} will be proved separately in the sequel. These results cover most classification results in \cite{BB76}. However, the classification of this paper is not complete for Case I when $G$ is compact simple, where the homogeneous spheres $\mathrm{SU}(n)/\mathrm{SU}(n-1)$, $\mathrm{Sp}(n)/\mathrm{Sp}(n-1)$, and the $\mathrm{SU}(3)$-homogeneous Aloff-Wallach's spaces, as well as some other possible candidates, are known to have invariant positively curved Finsler metrics (\cite{HD11}).
One reason that our classification cannot be complete in this case is the following. The method in this paper originates from the most traditional algebraic one, namely, to prove that a homogeneous space cannot be positively curved, we try to find a linearly independent commutative pair $u$ and $v$ in $\mathfrak{m}$ such that the sectional (or flag) curvature of the plane spanned by them (the flag pole needs to be specified for the flag curvature) vanishes. But this method is not valid for some rare cases in Case I; see \cite{XW2015}. In Section 2, we give a brief summary of basic notions in Finsler geometry and homogeneous Finsler geometry, and define the notion of equivalence which will be used throughout this paper. In Section 3, we present the general theme for the classification of odd dimensional positively curved homogeneous Finsler spaces, including the flag curvature formula, the rank equality, and some useful lemmas. In Sections 4 and 5, we discuss the classification of odd dimensional positively curved reversible homogeneous Finsler spaces in Case III. In Section 6, we discuss the classification of odd dimensional positively curved reversible homogeneous Finsler spaces in Cases II and I. Section 7 is an appendix where we summarize the presentation of the root systems of compact simple Lie algebras used in this paper. We are grateful to J.A. Wolf, W. Ziller, B. Wilking and L. Huang for helpful discussions. The first author thanks the Department of Mathematics at the University of California, Berkeley, for its hospitality during the preparation of this paper. \section{Preliminaries} In this section, we summarize some definitions and fundamental results in Finsler geometry; see \cite{CS04} and \cite{DE12} for more details. In this paper, we will only consider connected smooth manifolds and connected Lie groups.
\subsection{Minkowski norm and Finsler metric} A {\it Minkowski norm} on a real vector space $\mathbf{V}$, $\dim\mathbf{V}=n$, is a continuous real-valued function $F:\mathbf{V}\rightarrow[0,+\infty)$ satisfying the following conditions: \begin{description} \item{\rm (1)}\quad $F$ is positive and smooth on $\mathbf{V}\backslash\{0\}$; \item{\rm (2)}\quad $F(\lambda y)=\lambda F(y)$ for any $\lambda >0$ and any $y\in\mathbf{V}$; \item{\rm (3)}\quad With respect to any linear coordinates $y=y^i e_i$, the Hessian matrix \begin{equation} (g_{ij}(y))=\left(\frac12[F^2]_{y^i y^j}\right) \end{equation} is positive definite at any nonzero $y$. \end{description} The Hessian matrix $(g_{ij}(y))$ and its inverse $(g^{ij}(y))$ can be used to raise and lower indices of relevant tensors in Finsler geometry. Given a nonzero vector $y$, the Hessian matrix $(g_{ij}(y))$ defines an inner product $\langle\cdot,\cdot\rangle_y$ on $\mathbf{V}$ by \begin{equation*} \langle u,v\rangle_y=g_{ij}(y)u^i v^j, \end{equation*} where $u=u^i e_i$ and $v=v^i e_i$. In the literature, the above inner product is also denoted as $\langle\cdot,\cdot\rangle_y^F$ to specify the norm. Sometimes it is shortened as $g_y$ or $g_y^F$. It is obvious that the inner product can also be expressed as \begin{equation} \langle u,v\rangle_y=\frac12\frac{\partial^2}{\partial s\partial t} [F^2(y+su+tv)]|_{s=t=0}. \end{equation} It is easy to check that the above definition is independent of the choice of linear coordinates. Let $M$ be a smooth manifold of dimension $n$. A {\it Finsler metric} $F$ on $M$ is a continuous function $F:TM\rightarrow[0,+\infty)$ which is positive and smooth on the slit tangent bundle $TM\backslash 0$, and whose restriction to each tangent space is a Minkowski norm. Generally, $(M,F)$ is called a {\it Finsler manifold} or a {\it Finsler space}. Here are some important examples. Riemannian metrics are the special class of Finsler metrics for which the Hessian matrix depends only on $x\in M$.
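As the simplest illustration of these definitions (a standard computation, included here only for orientation), the Euclidean norm shows how the Riemannian case fits into the general framework:

```latex
% Standard example: the Euclidean norm on V = R^n.
\[
F(y)=\Bigl(\sum_{i=1}^n (y^i)^2\Bigr)^{1/2},
\qquad
g_{ij}(y)=\frac12\,[F^2]_{y^i y^j}=\delta_{ij},
\]
% so $\langle u,v\rangle_y=\sum_i u^i v^i$ for every $y\neq 0$: the
% fundamental tensor does not depend on the reference vector $y$,
% which is exactly the defining property of the Riemannian case
% restricted to a single tangent space.
```

Any genuinely non-Riemannian Minkowski norm, by contrast, has $g_{ij}(y)$ varying with the direction $y$.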
For a Riemannian manifold, the metric is often identified with the global smooth section $g_{ij}dx^i dx^j$ of $\mathrm{Sym}^2(T^* M)$. Unless otherwise stated, we mainly deal with non-Riemannian metrics in this paper. Randers metrics are the simplest and the most important class of non-Riemannian metrics in Finsler geometry. A Randers metric can be written as $F=\alpha+\beta$, where $\alpha$ is a Riemannian metric and $\beta$ is a 1-form. The notion of Randers metrics can be naturally generalized to $(\alpha,\beta)$-metrics. An $(\alpha,\beta)$-metric is a Finsler metric of the form $F=\alpha\phi(\beta/\alpha)$, where $\phi$ is a positive smooth real function, $\alpha$ is a Riemannian metric and $\beta$ is a 1-form. In recent years, there has been much research concerning $(\alpha,\beta)$-metrics as well as Randers metrics. Recently, we have defined and studied $(\alpha_1,\alpha_2)$-metrics and introduced the more general class of $(\alpha_1,\alpha_2,\ldots,\alpha_k)$-metrics; see \cite{DX2014} and \cite{XDHH2014}. Such metrics appear naturally in the study of homogeneous Finsler geometry. A Minkowski norm or a Finsler metric is called {\it reversible} if $F(y)=F(-y)$ for any $y\in\mathbf{V}$ or $F(x,y)=F(x,-y)$ for any $x\in M$ and $y\in T_xM$, respectively. Obviously, a Riemannian metric is reversible, and a non-Riemannian Randers metric must be non-reversible. Note that a non-Riemannian $(\alpha,\beta)$-metric is reversible if the function $\phi$ is an even function, and there exist many non-reversible $(\alpha,\beta)$-metrics. \subsection{Geodesic spray and geodesic} Let $(M,F)$ be a Finsler space. A local coordinate system $\{ x=(x^i)\in M; y=y^j\partial_{x^j}\in T_x M\}$ on $TM$ is called a {\it standard local coordinate system}. The geodesic spray is a vector field $\mathbf{G}$ globally defined on $TM\backslash 0$.
In a standard local coordinate system, it can be expressed as \begin{equation} \mathbf{G}=y^i\partial_{x^i}-2\mathbf{G}^i\partial_{y^i}, \end{equation} in which \begin{equation} \mathbf{G}^i=\frac14 g^{il}([F^2]_{x^k y^l}y^k-[F^2]_{x^l}). \end{equation} A non-constant curve $c(t)$ on $M$ is called a geodesic if $(c(t),\dot{c}(t))$ is an integral curve of $\mathbf{G}$, in which the tangent field $\dot{c}(t)=\frac{d}{dt}c(t)$ along the curve gives the speed. In a standard local coordinate system, a geodesic $c(t)=(c^i(t))$ can be characterized by the equations \begin{equation} \ddot{c}^i(t)+2\mathbf{G}^i(c(t),\dot{c}(t))=0. \end{equation} It is well known that $F(c(t),\dot{c}(t))$ is a constant function, or in other words, a geodesic defined by the above equations must be of nonzero constant speed. \subsection{Riemann curvature and flag curvature} In Finsler geometry, there is a notion of curvature similar to that in the Riemannian case, which is called the Riemann curvature. It can be defined either by Jacobi fields or by the structure equation for the curvature of the Chern connection. In a standard local coordinate system, the Riemann curvature is a linear map ${R}_y=R_k^i(y)\partial_{x^i}\otimes dx^k: T_x M\rightarrow T_xM$, defined by \begin{equation}\label{local-coordinate-formula-rieman-curvature} R_k^i(y)=2\partial_{x^k}{G}^i-y^j\partial^2_{x^j y^k}{G}^i +2{G}^j\partial_{y^j y^k}^2{G}^i-\partial_{y^j}{G}^i \partial_{y^k}{G}^j. \end{equation} When the metric needs to be specified, the Riemann curvature is denoted as ${R^F}_y=({R^F})_k^i(y)\partial_{x^i}\otimes dx^k$. From Proposition 6.2.2 of \cite{Sh2001}, it is easily seen that the Riemann curvature $R_y$ is self-adjoint with respect to the inner product $\langle\cdot,\cdot\rangle_y$. Using the Riemann curvature, we can generalize the notion of sectional curvature to Finsler geometry; the generalization is called the flag curvature.
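Before turning to the flag curvature, it may be useful to record how the objects above reduce in the Riemannian case (standard facts, stated only for orientation; the index placement in the curvature tensor follows one common convention):

```latex
% For a Riemannian metric F(x,y) = ( g_{jk}(x) y^j y^k )^{1/2}:
\[
\mathbf{G}^i=\frac12\,\Gamma^i_{jk}(x)\,y^j y^k,
\qquad
R^i_{\;k}(y)=R_j{}^i{}_{kl}(x)\,y^j y^l,
\]
% where \Gamma^i_{jk} are the Christoffel symbols of g.  The geodesic
% equation \ddot{c}^i + 2 G^i(c,\dot{c}) = 0 becomes the classical
% \ddot{c}^i + \Gamma^i_{jk} \dot{c}^j \dot{c}^k = 0, and the flag
% curvature defined below is then independent of the flag pole y and
% reduces to the sectional curvature of the flag plane.
```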
Let $y\in T_xM$ be a nonzero tangent vector and $\mathbf{P}$ a tangent plane in $T_xM$ containing $y$, and suppose it is linearly spanned by $y$ and $v$. Then the flag curvature of the pair $(y,\mathbf{P})$ is defined by \begin{equation}\label{def-flag-curv} K(x,y,y\wedge v)=K(x,y,\mathbf{P})= \frac{\langle R_y v,v\rangle_y}{\langle y,y\rangle_y\langle v,v\rangle_y -\langle y,v\rangle_y^2}. \end{equation} Obviously, the flag curvature in (\ref{def-flag-curv}) does not depend on the choice of $v$ but only on $y$ and $\mathbf{P}$. Sometimes we also write the flag curvature of a Finsler metric $F$ as $K^F(x,y,y\wedge v)$ or $K^F(x,y,\mathbf{P})$ to indicate the metric explicitly. \subsection{Totally geodesic submanifold} A submanifold $N$ of a Finsler space $(M,F)$ can be naturally endowed with a submanifold Finsler metric, denoted as $F|_N$. At each point $x\in N$, the Minkowski norm $F|_N(x,\cdot)$ is just the restriction of the Minkowski norm $F(x,\cdot)$ to $T_x N$. We say that $(N,F|_N)$ is a {\it Finsler submanifold} or a {\it Finsler subspace}. A Finsler subspace $(N,F|_N)$ of $(M,F)$ is called {\it totally geodesic} if any geodesic of $(N,F|_N)$ is also a geodesic of $(M,F)$. On a standard local coordinate system $(x^i,y^j)$ such that $N$ is locally defined by $x^{k+1}=\cdots=x^n=0$, the totally geodesic condition can be expressed as \begin{equation*} \mathbf{G}^i(x,y)=0, \quad k<i\leq n, x\in N, y\in T_x N. \end{equation*} A direct calculation shows that in this case the Riemann curvature $R_y^{F|_N}:T_x N\rightarrow T_x N$ of $(N,F|_N)$ is just the restriction of the Riemann curvature $R_y^F$ of $(M,F)$, where $y$ is a nonzero tangent vector of $N$ at $x\in N$. Therefore we have \begin{proposition}\label{prop-2-2} Let $(N,F|_N)$ be a totally geodesic submanifold of $(M,F)$. Then for any $x\in N$, $y\in T_x N\backslash 0$, and a tangent plane $\mathbf{P}\subset T_x N$ containing $y$, we have \begin{equation} K^{F|_N}(x,y,\mathbf{P})=K^F(x,y,\mathbf{P}). 
\end{equation} \end{proposition} As in Riemannian geometry, the local properties of exponential maps imply that any connected component $N$ of the set of common fixed points of a set of isometries $\{\rho_a,a\in\mathcal{A}\}$ of $(M,F)$ is a totally geodesic submanifold of $(M,F)$. To be more precise, for each point $x\in N$, $$T_x N=\{y\in T_x M|{\rho_a}_*y=y,\forall a\in \mathcal{A}\}$$ and $N$ contains a small neighborhood of $x$ in $\exp_x T_x N$. \subsection{Homogeneous Finsler geometry} Let $(M,F)$ be a connected Finsler manifold. If the full group $I(M,F)$ of isometries of $(M, F)$ (or equivalently, the identity component $I_0(M,F)$ of $I(M,F)$) acts transitively on $M$, then we say that $(M, F)$ is a {\it homogeneous Finsler space}, or $F$ is a {\it homogeneous Finsler metric}. It is shown in \cite{DH2004} that $I(M,F)$ (hence $G=I_0(M,F)$) is a Lie transformation group on $M$. Let $H$ be the compact isotropy subgroup of $G$ at a point $o\in M$. Then $M$ is diffeomorphic to the smooth coset space $G/H$, associated with a canonical smooth projection map $\pi:G\rightarrow M=G/H$ such that $\pi(e)=o$. The tangent space $T_o M$ can be naturally identified with $\mathfrak{m}=\mathfrak{g}/\mathfrak{h}$, in which $\mathfrak{g}$ and $\mathfrak{h}$ are the Lie algebras of $G$ and $H$, respectively. The isotropy action of $H$ on $T_oM$ coincides with the induced $\mathrm{Ad}(H)$-action on $\mathfrak{m}$. In the cases we will consider in this paper, $\mathfrak{m}$ can be realized as a complementary subspace of $\mathfrak{h}$ in $\mathfrak{g}$ which is preserved by the $\mathrm{Ad}(H)$-action. Then we have an {\it $\mathrm{Ad}(H)$-invariant decomposition} $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ satisfying the reductive condition $[\mathfrak{h},\mathfrak{m}]\subset\mathfrak{m}$. If $(M,F)$ is positively curved, then by the Bonnet-Myers Theorem, $M$ must be compact, hence $G=I_0(M,F)$ is also compact. Fix a bi-invariant inner product on $\mathfrak{g}$.
Then we can realize $\mathfrak{m}$ as the bi-invariant orthogonal complement of $\mathfrak{h}$. In this case, the decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ is called a {\it bi-invariant orthogonal decomposition} for the homogeneous space $G/H$. Notice that for any closed connected subgroup $G$ of $I_0(M,F)$ which acts transitively on $M$, we have a corresponding representation $M=G/H$. The most typical examples are the nine classes of homogeneous spheres; see \cite{Bo1940}. For convenience, we will consider a slightly more general situation, namely, for a positively curved homogeneous Finsler space $M=G/H$, we only require that the Lie algebra $\mathfrak{g}$ of $G$ is compact (i.e., $G$ is quasi-compact). The notion of bi-invariant orthogonal decomposition is still valid in this case. To simplify the discussion and avoid unnecessary repetition in the classification, we will not distinguish homogeneous Finsler spaces which are locally isometric to each other. In particular, we will call $(G_1/H_1,F_1)$ and $(G_2/H_2,F_2)$ (with corresponding bi-invariant orthogonal decompositions for the compact Lie algebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively) {\it equivalent} if one of the following conditions is satisfied: \begin{description} \item{\rm (1)}\quad $G_1$ is a covering group of $G_2$, $H_1$ has the same identity component as $H_2$, and $F_1$ is naturally induced from $F_2$, up to a positive scalar; \item{\rm (2)}\quad $G_1=G_2\times G'$, $H_1=H_2\times G'$, and $F_1$ and $F_2$ are induced from the same Minkowski norm, when $\mathfrak{m}_1$ and $\mathfrak{m}_2$ are naturally identified as the same vector space; \item{\rm (3)}\quad There exists a group isomorphism from $G_1$ to $G_2$, which maps $H_1$ onto $H_2$ and induces an isometry from $F_1$ to $F_2$. \end{description} The above notion actually defines an {\it equivalence relation} on the set of compact homogeneous Finsler spaces $G/H$ with $\mathfrak{g}=\mathrm{Lie}(G)$ compact.
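To make these conditions concrete, here are two sample instances (our own illustrations; they rest on standard facts about coverings and product factors, not on the classification arguments themselves):

```latex
% Condition (1): Spin(2n) is a (double) covering group of SO(2n),
% and the preimage of SO(2n-1) is the connected group Spin(2n-1);
% lifting an invariant metric identifies the two presentations
%   Spin(2n)/Spin(2n-1)   and   SO(2n)/SO(2n-1)
% of the sphere S^{2n-1}, so they are equivalent.
%
% Condition (2): taking G' = U(1),
%   ( Sp(n) x U(1) ) / ( Sp(n-1) x U(1) )   and   Sp(n)/Sp(n-1)
% have naturally identified tangent spaces m_1 = m_2, so the common
% product factor U(1) can be cancelled.
```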
In the following, compact homogeneous Finsler spaces in the same equivalence class will not be distinguished. Thus our classification will be local, or in other words, on the Lie algebra level. \section{The general theme for the classification} In this section, we establish the theme for our classification. \subsection{A flag curvature formula for homogeneous Finsler spaces} In \cite{XDHH2014}, we proved the following theorem: \begin{theorem} \label{flag-curvature-formula-thm} Let $(G/H,F)$ be a connected homogeneous Finsler space, and $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ be an $\mathrm{Ad}(H)$-invariant decomposition for $G/H$. Then for any linearly independent commutative pair $u$ and $v$ in $\mathfrak{m}$ satisfying $ \langle[u,\mathfrak{m}],u\rangle^F_u=0 $, we have \begin{equation*} K^F(o,u,u\wedge v)=\frac{\langle U(u,v),U(u,v)\rangle_u^F} {\langle u,u\rangle_u^F \langle v,v\rangle_u^F- \left(\langle u,v\rangle_u^F\right)^2}, \end{equation*} where $U$ is a bilinear map from $\mathfrak{m}\times \mathfrak{m}$ to $\mathfrak{m}$ defined by \begin{equation*} \langle U(u,v),w\rangle_u^F=\frac{1}{2}(\langle[w,u]_\mathfrak{m},v\rangle_u^F +\langle[w,v]_\mathfrak{m},u\rangle_u^F), \mbox{ for any }w\in\mathfrak{m}, \end{equation*} where $[\cdot,\cdot]_\mathfrak{m}=\mathrm{pr}_\mathfrak{m}\circ[\cdot,\cdot]$ and $\mathrm{pr}_\mathfrak{m}$ is the projection with respect to the given $\mathrm{Ad}(H)$-invariant decomposition. \end{theorem} The flag curvature formula in Theorem \ref{flag-curvature-formula-thm} only deals with some special flags spanned by commutative pairs, but it is very convenient for this paper. In \cite{XDHH2014}, we provided a proof of this theorem by the submersion technique. To make this work more self-contained, we quote here another, shorter proof by L. Huang, which can also be found in \cite{XDHH2014}.
In \cite{Huang2013}, L.~Huang obtained a general flag curvature formula for homogeneous Finsler spaces, using the technique of invariant frames. To introduce his formula, we first define the {\it spray vector field} $\eta:\mathfrak{m}\backslash\{0\}\rightarrow\mathfrak{m}$ and the {\it connection operator} $N:(\mathfrak{m}\backslash\{0\})\times\mathfrak{m}\rightarrow\mathfrak{m}$. For any $u\in\mathfrak{m}\backslash\{0\}$, $\eta(u)$ is defined by \begin{equation*} \langle\eta(u), w\rangle_u^F=\langle u,[w,u]_{\mathfrak{m}}\rangle_u^F, \quad \forall w\in\mathfrak{m}, \end{equation*} and $N(u,\cdot)$ is a linear operator on $\mathfrak{m}$ determined by \begin{equation*} \begin{aligned} 2\langle N(u,w_1),w_2\rangle_u^F=\langle [w_2,w_1]_{\mathfrak{m}}, u\rangle_u^F & + \langle [w_2,u]_{\mathfrak{m}}, w_1\rangle_u^F+\langle [w_1,u]_{\mathfrak{m}},w_2\rangle_u^F\\ & - 2C^F_u(w_1,w_2,\eta(u)),\quad \forall w_1,w_2\in\mathfrak{m}. \end{aligned} \end{equation*} Using these two notions, L.~Huang proved the following formula for the Riemann curvature $R_u: T_o(G/H)\rightarrow T_o(G/H)$: \begin{equation}\label{6000} \langle R_u(w),w\rangle_u^F =\langle [[w,u]_{\mathfrak{h}},w],u\rangle_u^F +\langle \tilde{R}(u)w,w\rangle_u^F,\quad \forall w\in\mathfrak{m}, \end{equation} where the linear operator $\tilde{R}(u):\mathfrak{m}\rightarrow\mathfrak{m}$ is given by $$\tilde{R}(u)w=D_{\eta(u)}N(u,w)-N(u,N(u,w))+ N(u,[u,w]_\mathfrak{m})-[u,N(u,w)]_\mathfrak{m},$$ where $D_{\eta(u)}N(u,w)$ is the derivative of $N(\cdot,w)$ at $u\in\mathfrak{m}\backslash\{0\}$ in the direction of $\eta(u)$. In particular, if $\eta(u)=0$, then $D_{\eta(u)}N(u,w)=0$. Now suppose $u\in\mathfrak{m}\backslash\{0\}$ satisfies $\langle[u,\mathfrak{m}],u\rangle_u^F=0$, i.e., $\eta(u)=0$. Then for any $v\in\mathfrak{m}$ commuting with $u$, we have $N(u,v)=U(u,v)$.
Thus \begin{eqnarray*} \langle R_u (v),v\rangle_u^F &=& -\langle N(u,N(u,v)),v\rangle_u^F -\langle [u,N(u,v)],v\rangle_u^F\\ &=& -\frac12(\langle [v,N(u,v)]_\mathfrak{m},u\rangle_u^F+ \langle[N(u,v),u]_\mathfrak{m},v\rangle_u^F) +\langle [N(u,v),u],v\rangle_u^F\\ &=& \frac12(\langle[N(u,v),v],u\rangle_u^F+ \langle[N(u,v),u],v\rangle_u^F)\\ &=&\langle U(u,v),N(u,v)\rangle_u^F=\langle U(u,v),U(u,v)\rangle_u^F. \end{eqnarray*} From this the flag curvature formula in Theorem \ref{flag-curvature-formula-thm} follows immediately. \subsection{The totally geodesic technique and the rank equality} Assume that $(G/H,F)$ is a positively curved homogeneous Finsler space, with a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}$. Let $\mathfrak{t}$ be a Cartan subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{h}$. For simplicity, we just call $\mathfrak{t}$ a {\it fundamental Cartan subalgebra}. Fix a subalgebra $\mathfrak{t}'$ of $\mathfrak{t}\cap\mathfrak{h}$, and denote the identity component of $C_G(\mathfrak{t}')$ as $G'$. Let $H'=G'\cap H$. Then $(G'/H',F|_{G'/H'})$ is a homogeneous submanifold of $(G/H,F)$. We first prove the following useful lemma. \begin{lemma} \label{totally-geodesic-lemma} Keep all the above notation. Then $(G'/H', F|_{G'/H'})$ is totally geodesic in $(G/H,F)$. In particular, if $G/H$ admits positively curved homogeneous Finsler metrics and $\dim G'/H'>1$, then $G'/H'$ also admits positively curved homogeneous Finsler metrics. \end{lemma} \begin{proof} We will present two proofs of the lemma. The first proof uses Corollary II.5.7 of \cite{Br}, which asserts that the set of common fixed points of the torus $T'$ (with $\mathrm{Lie}(T')=\mathfrak{t}'$) is a disjoint union of finitely many orbits of $N_G(T')=\{g\in G|g^{-1}T'g=T'\}$.
Thus the connected component of $N_G(T')\cdot o$ containing $o=eH$, which coincides with $G'/H'$, is a totally geodesic submanifold of $(G/H,F)$. Therefore, if $(G/H,F)$ is positively curved and $\dim G'/H' >1$, then the homogeneous Finsler space $(G'/H',F|_{G'/H'})$ has positive flag curvature. The second proof will be completed through a direct calculation, using the geodesic spray formula for homogeneous Finsler spaces in \cite{DX2014-2}. Note that the Lie algebra $\mathfrak{g}'$ of $G'$ is the centralizer $C_{\mathfrak{g}}(\mathfrak{t}')$ of $\mathfrak{t}'$ in $\mathfrak{g}$. Since $\mathfrak{t}'\subset\mathfrak{h}$, we also have the decomposition $\mathfrak{g}'=(\mathfrak{g}'\cap\mathfrak{h})+(\mathfrak{g}'\cap\mathfrak{m})$, where $\mathfrak{g}'\cap\mathfrak{h}=C_\mathfrak{h}(\mathfrak{t}')$ is the Lie algebra of $H'$. Since the bi-invariant orthogonal complement ${\mathfrak{g}'}^{\perp}$ is equal to $[\mathfrak{t}',\mathfrak{g}]$, we also have $$[\mathfrak{t}',\mathfrak{g}]=[\mathfrak{t}',\mathfrak{h}]+ [\mathfrak{t}',\mathfrak{m}]=({\mathfrak{g}'}^{\perp}\cap\mathfrak{h}) +({\mathfrak{g}'}^{\perp}\cap\mathfrak{m}).$$ Let $v_1$, $\ldots$, $v_m$, $v_{m+1}$, $\ldots$, $v_{m+n}$ be an orthogonal basis of $\mathfrak{m}$ with respect to the bi-invariant inner product, such that $v_i\in \mathfrak{g}'\cap\mathfrak{m}$ for $1\leq i\leq m$, and denote the Killing vector field on $M=G/H$ generated by $v_j$ as $X_j$, for $1\leq j\leq m+n$. Then the restriction of $X_i$ to $M'=G'/H'$ is a Killing vector field of $(M',F|_{M'})$, $1\leq i\leq m$. It is easily seen that there is a neighborhood $U$ of the origin $o$ such that $y=y^iX_i$ defines a linear coordinate system for $y\in TU$. We now consider the geodesic spray $\mathbf{G}(o,y)$ of $(M,F)$ at $o$ when $y$ lies in the linear span of $v_1$, $\ldots$, $v_m$.
In \cite{XD2014-2}, we have proven that \begin{equation}\label{geodesic-formula} \mathbf{G}(o,y)=y^i\tilde{X}_i+ g^{il}c^k_{jl}g_{kh}y^h y^j\partial_{y^i}, \end{equation} where $\tilde{X}_i$ is the tangent vector field on $T(TM\backslash\{0\})$ naturally induced by $X_i$, and the coefficients $c^k_{ij}$ are defined by $[v_i,v_j]_\mathfrak{m}=c^k_{ij}v_k$. Since $[C_{\mathfrak{g}}(\mathfrak{t}'),[\mathfrak{t}',\mathfrak{g}]] \subset[\mathfrak{t}',\mathfrak{g}]$, we have $[\mathfrak{g}'\cap\mathfrak{m},[\mathfrak{t}',\mathfrak{g}]\cap\mathfrak{m}]_\mathfrak{m} \subset[\mathfrak{t}',\mathfrak{g}]\cap\mathfrak{m}$, hence $c^k_{ij}=0$ for $i\leq m$, $j>m$ and $k\leq m$. On the other hand, since $F$ is $\mathrm{Ad}(H)$-invariant, by \cite{DH2004}, we have $$\langle [h,v],w\rangle_y^{F}+\langle v,[h,w]\rangle_y^F= -2C_y([h,y],v,w),\quad\forall h\in\mathfrak{h},v\in\mathfrak{g}'\cap\mathfrak{m}, w\in\mathfrak{m}.$$ In particular, for $h\in\mathfrak{t}'$, we have $[h,v]=[h,y]=0$. Then we have \begin{equation}\label{yk} \langle \mathfrak{g}'\cap\mathfrak{m},[\mathfrak{t}', \mathfrak{m}]\rangle_y^F= \langle \mathfrak{g}'\cap\mathfrak{m},[\mathfrak{t}', \mathfrak{g}]\cap\mathfrak{m}\rangle_y^F=0. \end{equation} We now suppose that $y^k=0$ for any $k>m$. Then (\ref{yk}) implies that $g^{ij}=g_{ij}=0$ for $1\leq i\leq m<j\leq m+n$. Hence in this case, the only nonzero terms on the right-hand side of (\ref{geodesic-formula}) are $y^i\tilde{X}_i$ with $i\leq m$, and $g^{il}c^k_{jl}g_{kh}y^h y^j\partial_{y^i}$ with $i,j,k,h,l\leq m$. Consequently, in this case $\mathbf{G}(o,y)$ is equal to the geodesic spray of $(G'/H',F|_{G'/H'})$ at $(o,y)$. By homogeneity, the above assertion is valid for any $g\in G'$. Therefore $(G'/H',F|_{G'/H'})$ is totally geodesic in $(G/H,F)$. \end{proof} From the first proof, we see that the lemma is still valid with $T'$ changed to other subgroups in $H$.
This will be convenient when it is difficult to calculate directly with $G'$, but, up to equivalence, $G'$ and $H'$ contain a common product factor $T'$ which can be cancelled. We now give an immediate application of Lemma \ref{totally-geodesic-lemma}. Assume that $\mathfrak{t}'=\mathfrak{t}\cap\mathfrak{h}$, and $F'=F|_{G'/H'}$ induces a left invariant Finsler metric $F''$ on the compact Lie group $G''$ with $\mathrm{Lie}(G'')= C_\mathfrak{g}(\mathfrak{t}\cap\mathfrak{h})\cap\mathfrak{m}$. Then the above lemma implies that if $\dim G''>1$, then $F''$ is positively curved. Thus by Theorem 5.1 of \cite{DH2013}, we have $G''=\mathrm{U}(1)$, $\mathrm{SU}(2)$ or $\mathrm{SO}(3)$. This proves the following rank equality, which is a special case of Theorem 5.2 in \cite{XDHH2014}. \begin{corollary} \label{rank-equality-corollary} Let $(G/H,F)$ be an odd dimensional positively curved homogeneous Finsler space with compact $\mathfrak{g}=\mathrm{Lie}(G)$. Then $\mathrm{rk}\,\mathfrak{g}=\mathrm{rk}\,\mathfrak{h}+1$. \end{corollary} \subsection{Some notation for Lie algebras and root systems} We now set some notation for the relevant Lie algebras and root systems. Let $(G/H,F)$ be an odd dimensional positively curved homogeneous Finsler space with a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$. The orthogonal projections to the $\mathfrak{h}$-factor and the $\mathfrak{m}$-factor are denoted as $\mathrm{pr}_\mathfrak{h}$ and $\mathrm{pr}_\mathfrak{m}$, respectively. Fix a fundamental Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ (i.e., $\mathfrak{t}\cap\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{h}$). From now on, root systems, root planes, etc., for $\mathfrak{g}$ will be taken with respect to $\mathfrak{t}$, and those for $\mathfrak{h}$ will be taken with respect to $\mathfrak{t}\cap\mathfrak{h}$.
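As a quick sanity check of the rank equality in Corollary \ref{rank-equality-corollary}, one can verify it on the spheres appearing in Theorem \ref{mainthm} (standard rank computations, included only for illustration):

```latex
% S^{2n-1} = SO(2n)/SO(2n-1):
%   rk so(2n) = n,   rk so(2n-1) = n-1,   so rk g = rk h + 1.
% S^{15}  = Spin(9)/Spin(7):
%   rk so(9) = 4,    rk so(7) = 3.
% S^{7}   = Spin(7)/G_2:
%   rk so(7) = 3,    rk g_2  = 2.
% In contrast, the even dimensional space S^2 = SO(3)/SO(2) has
% rk g = rk h = 1, consistent with the corollary being a statement
% about odd dimensional spaces only.
```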
It is easy to see that $\mathfrak{t}$ is a splitting Cartan subalgebra, that is, $$\mathfrak{t}=(\mathfrak{t}\cap\mathfrak{h})+(\mathfrak{t}\cap\mathfrak{m}).$$ By Corollary \ref{rank-equality-corollary}, we have $\dim (\mathfrak{t}\cap\mathfrak{m})=1$. The maximal torus of $G$ (resp. $H$) corresponding to $\mathfrak{t}$ (resp. $\mathfrak{t}\cap\mathfrak{h}$) will be denoted as $T$ (resp. $T_H$). We now have the following decomposition of $\mathfrak{g}$ with respect to the $\mathrm{Ad}(T)$-action: \begin{equation} \mathfrak{g}=\mathfrak{t}+\sum_{\alpha\in\Delta_{\mathfrak{g}}}\mathfrak{g}_{\pm\alpha}, \end{equation} where $\Delta_\mathfrak{g}\subset\mathfrak{t}$ is the root system of $\mathfrak{g}$. For $\alpha\in\Delta_\mathfrak{g}$, $\mathfrak{g}_{\pm\alpha}$ is a two-dimensional irreducible representation of the $\mathrm{Ad}(T)$-action, called a {\it root plane}. Through the bi-invariant inner product, we will regard a root as a vector in $\mathfrak{t}$ rather than a vector in $\mathfrak{t}^*$. For the compact Lie algebra $\mathfrak{h}=\mathrm{Lie}(H)$, we have a similar decomposition with respect to the $\mathrm{Ad}(T_H)$-action. The root planes of $\mathfrak{h}$ are denoted as $\mathfrak{h}_{\pm\alpha'}$, where $\alpha' \in\mathfrak{t}\cap\mathfrak{h}$ are roots of $\mathfrak{h}$ in the root system $\Delta_\mathfrak{h}\subset\mathfrak{t}\cap\mathfrak{h}$.
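The smallest instance of this root-plane decomposition is $\mathfrak{g}=\mathfrak{su}(2)$ (a standard computation, included only to fix notation):

```latex
% g = su(2), with Cartan subalgebra t = R H_0, where
%   H_0 = diag(i, -i).
% The single pair of roots \pm\alpha corresponds to the root plane
%   g_{\pm\alpha} = span_R { X, Y },
%   X = [[0,1],[-1,0]],   Y = [[0,i],[i,0]].
% One checks [H_0, X] = 2Y and [H_0, Y] = -2X, so Ad(exp(t H_0))
% rotates the plane span{X,Y} by the angle 2t, and
%   g = t + g_{\pm\alpha}.
```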
There is another decomposition of $\mathfrak{g}$ with respect to the $\mathrm{Ad}(T_H)$-action, namely, \begin{equation}\label{decomposition-compact-lie-alg-2} \mathfrak{g}=(\mathfrak{t}\cap\mathfrak{h})+\sum_{\alpha'\in\mathfrak{t}\cap\mathfrak{h}} \hat{\mathfrak{g}}_{\pm\alpha'}, \end{equation} where \begin{equation*} \hat{\mathfrak{g}}_{\pm\alpha'}=\sum_{\mathrm{pr}_\mathfrak{h}(\alpha)=\alpha'} \mathfrak{g}_{\pm\alpha},\quad \mbox{if}\,\, \alpha'\neq 0, \end{equation*} $\hat{\mathfrak{g}}_0=(\mathfrak{t}\cap\mathfrak{m})+\mathfrak{g}_{\pm\alpha}$ if there is a root $\alpha$ of $\mathfrak{g}$ contained in $\mathfrak{t}\cap\mathfrak{m}$, and $\hat{\mathfrak{g}}_0=\mathfrak{t}\cap\mathfrak{m}$ otherwise. This {\it $\mathrm{Ad}(T_H)$-invariant decomposition} is compatible with the bi-invariant orthogonal decomposition in the sense that $$\hat{\mathfrak{g}}_{\pm\alpha'}=(\hat{\mathfrak{g}}_{\pm\alpha'}\cap\mathfrak{h}) +(\hat{\mathfrak{g}}_{\pm\alpha'}\cap\mathfrak{m}).$$ To be more precise, we have the following easy lemma, which will be repeatedly used in the sequel. \begin{lemma} Let $\alpha'$ be a vector of $\mathfrak{t}\cap\mathfrak{h}$. Then we have the following: \begin{description} \item{\rm (1)} if $\alpha'\in\Delta_\mathfrak{h}$, then we have $\hat{\mathfrak{g}}_{\pm\alpha'}=(\hat{\mathfrak{g}}_{\pm\alpha'} \cap\mathfrak{h})+ (\hat{\mathfrak{g}}_{\pm\alpha'}\cap\mathfrak{m}),$ where $\hat{\mathfrak{g}}_{\pm\alpha'} \cap\mathfrak{h}=\mathfrak{h}_{\pm\alpha'}$; \item{\rm (2)} if $\alpha'\notin\Delta_\mathfrak{h}$, then we have $\hat{\mathfrak{g}}_{\pm\alpha'}\subset\mathfrak{m}$. In particular, $\hat{\mathfrak{g}}_0\subset\mathfrak{m}$, and $\mathfrak{g}_{\pm\alpha}\subset\mathfrak{m}$ if $\mathrm{pr}_\mathfrak{h}(\alpha)\notin\Delta_\mathfrak{h}$.
\end{description} \end{lemma} For the bracket relations between root planes, we have the following well-known formula: \begin{equation} [\mathfrak{g}_{\pm\alpha},\mathfrak{g}_{\pm\beta}]\subseteq \mathfrak{g}_{\pm(\alpha+\beta)}+\mathfrak{g}_{\pm(\alpha-\beta)}, \end{equation} where $\mathfrak{g}_{\pm\alpha}$ and $\mathfrak{g}_{\pm\beta}$ are different root planes, i.e., $\alpha\neq\pm\beta$, and each term on the right side can be $0$ when the corresponding vector is not a root of $\mathfrak{g}$. In fact, this is just a special case of the following lemma. \begin{lemma}\label{trick-lemma-0} Keep all the above notation. We have \begin{description} \item{\rm (1)} For any root $\alpha$ of $\mathfrak{g}$, $[\mathfrak{g}_{\pm\alpha}, \mathfrak{g}_{\pm\alpha}]=\mathbb{R}\alpha$. \item{\rm (2)} Let $\alpha$ and $\beta$ be two linearly independent roots of $\mathfrak{g}$. If neither of $\alpha\pm\beta$ is a root of $\mathfrak{g}$, then $[\mathfrak{g}_{\pm\alpha},\mathfrak{g}_{\pm\beta}]=0$; if one of $\alpha\pm\beta$, say $\gamma$, is a root, and the other is not, then $[\mathfrak{g}_{\pm\alpha},\mathfrak{g}_{\pm\beta}]= \mathfrak{g}_{\pm\gamma}$; if both $\alpha\pm\beta$ are roots of $\mathfrak{g}$, then $[\mathfrak{g}_{\pm\alpha},\mathfrak{g}_{\pm\beta}]$ is a cone in $\mathfrak{g}_{\pm(\alpha+\beta)}+\mathfrak{g}_{\pm(\alpha-\beta)}$. \item{\rm (3)} In the second case of (2), for any nonzero vector $v\in\mathfrak{g}_{\pm\alpha}$, the linear map $\mathrm{ad}(v)$ is an isomorphism from $\mathfrak{g}_{\pm\beta}$ onto $[\mathfrak{g}_{\pm\alpha},\mathfrak{g}_{\pm\beta}]= \mathfrak{g}_{\pm\gamma}$. \end{description} \end{lemma} The root systems of the compact simple Lie algebras $A_n$-$G_2$ and the presentations of root planes for the classical cases $A_n$-$D_n$ are listed in the appendix (Section \ref{appendix-section}). \subsection{The three Cases and the reversibility assumption} Keep all the above assumptions and notation.
In \cite{XD2014-2}, we established the general theme for our classification of positively curved normal homogeneous Finsler spaces. The main idea can be applied to this paper. In particular, we only need to consider the following three cases: \begin{description} \item{\bf Case I.} Each root plane of $\mathfrak{h}$ is also a root plane of $\mathfrak{g}$. \item{\bf Case II.} There are roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from different simple factors, such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. \item{\bf Case III.} There exists a linearly independent pair of roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from the same simple factor, such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. \end{description} In the following sections, we will restrict our discussion to reversible Finsler metrics (i.e., $F(x,y)=F(x,-y)$ for any $y\in T_x (G/H)$). The reason for adding this condition on $F$ will be explained in the next subsection. It turns out that with the reversibility assumption for $F$, Case II is the easiest. Case III involves many case-by-case discussions, but in this case we can use the root $\alpha'$ of $\mathfrak{h}$ to settle the problem. Case I turns out to be very difficult, and we can only obtain partial results for it. Adding the reversibility assumption does not lose too much generality, and it provides an alternative verification that the classification result in \cite{BB76} is correct. \subsection{The key lemmas for reversible metrics} \label{subsection-key-lemmas} From now on, we will assume that $(G/H,F)$ is an odd dimensional positively curved reversible homogeneous Finsler space, with a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$, and a fundamental Cartan subalgebra $\mathfrak{t}$.
Keep all the relevant notation as before. In the following three lemmas we will present some results on the $g_u^F$-orthogonal (i.e. with respect to the inner product $\langle\cdot,\cdot\rangle_u^F$) decomposition of $\mathfrak{m}$. These lemmas are crucial for our later discussions. \begin{lemma} \label{lemma-3-7} Keep the above assumptions and notation. \begin{description} \item{\rm (1)}\quad Let $u$ be a nonzero vector in $\hat{\mathfrak{g}}_{0}\subset\mathfrak{m}$. Then $\mathfrak{m}$ has a $g_u^F$-orthogonal decomposition as the sum of all $\hat{\mathfrak{m}}_{\pm\alpha'}=\hat{\mathfrak{g}}_{\pm\alpha'}\cap \mathfrak{m}$, $\alpha'\in\mathfrak{t}\cap\mathfrak{h}$. In particular, $\hat{\mathfrak{m}}_0=\hat{\mathfrak{g}}_0$. \item{\rm (2)}\quad If $\dim\hat{\mathfrak{g}}_{0}=3$, then there is a fundamental Cartan subalgebra $\mathfrak{t}$, such that for any nonzero vector $u\in\mathfrak{t}\cap\mathfrak{m}$, we have $\langle\mathfrak{t}\cap\mathfrak{m},\mathfrak{g}_{\pm\alpha}\rangle_u^F=0$, where $\alpha$ is the root in $\mathfrak{t}\cap\mathfrak{m}$. \end{description} \end{lemma} \begin{proof} (1) Let $T_H$ be the torus in $H$ with $\mathrm{Lie}(T_H)=\mathfrak{t}\cap\mathfrak{h}$. Since both $F$ and $u\in\hat{\mathfrak{g}}_0$ are $\mathrm{Ad}(T_H)$-invariant, the inner product $\langle\cdot,\cdot\rangle_u^F$ is also $\mathrm{Ad}(T_H)$-invariant. The summands given in the decomposition correspond to different irreducible representations of $T_H$, thus it is a $g_u^F$-orthogonal decomposition. (2) Choose the $F$-unit vector $u\in\hat{\mathfrak{g}}_0$ such that $||u||_{\mathrm{bi}}$ reaches the maximum among all $F$-unit vectors in $\hat{\mathfrak{g}}_0$. Then $\mathfrak{t}_0=\mathfrak{t}\cap\mathfrak{h}+\mathbb{R}u$ is also a fundamental Cartan subalgebra of $\mathfrak{g}$. Notice that for $\alpha'\in\mathfrak{t}\cap\mathfrak{h}$, the subspace $\hat{\mathfrak{g}}_{\pm\alpha'}$ does not change when $\mathfrak{t}$ is replaced with $\mathfrak{t}_0$. 
The bi-invariant orthogonal complement $u^\perp\cap\hat{\mathfrak{g}}_0$ of $u$ in $\hat{\mathfrak{g}}_0$ is a root plane $\mathfrak{g}_{\pm\alpha}$ for $\mathfrak{t}_0$. Then our assumption on $u$ implies that $$\langle\mathfrak{t}_0\cap\mathfrak{m},\mathfrak{g}_{\pm\alpha}\rangle_u^F =\langle \mathbb{R}u,u^\perp\cap\hat{\mathfrak{g}}_0\rangle_u^F=0.$$ This completes the proof of the lemma.\end{proof} \begin{lemma}\label{lemma-3-8} Keep the above assumptions and notation. Let $u\in\mathfrak{m}$ be a nonzero vector in a root plane $\hat{\mathfrak{m}}_{\pm\alpha'}$ with $\alpha'\neq 0$. Denote the bi-invariant orthogonal complement of $\alpha'$ in $\mathfrak{t}\cap\mathfrak{h}$ as $\mathfrak{t}'$, and the bi-invariant orthogonal projection to $\mathfrak{t}'$ as $\mathrm{pr}_{\mathfrak{t}'}$. Then $\mathfrak{m}$ can be $g_u^F$-orthogonally decomposed as the sum of \begin{eqnarray*} \hat{\hat{\mathfrak{m}}}_{\pm\gamma''}&=& (\sum_{\mathrm{pr}_{\mathfrak{t}'}(\gamma)=\gamma''}\mathfrak{g}_{\pm\gamma})\ \cap\mathfrak{m} =\sum_{\mathrm{pr}_{\mathfrak{t}'}(\gamma')=\gamma''} (\hat{\mathfrak{g}}_{\pm\gamma'}\cap\mathfrak{m})\\ &=&(\sum_{\gamma\in\tau+\mathbb{R}\alpha'+\mathfrak{t}\cap\mathfrak{m}} \mathfrak{g}_{\pm\gamma})\cap\mathfrak{m}, \end{eqnarray*} where $\tau$ is a root of $\mathfrak{g}$ with $\mathrm{pr}_{\mathfrak{t}'}(\tau)=\gamma''$. In particular, $\hat{\hat{\mathfrak{m}}}_0= (\mathop\sum\limits_{\gamma\in\mathbb{R}\alpha'+\mathfrak{t}\cap\mathfrak{m}} \mathfrak{g}_{\pm\gamma})\cap\mathfrak{m}$. \end{lemma} \begin{proof} Let $T'$ be the torus in $H$ with $\mathrm{Lie}(T')=\mathfrak{t}'$. Since both $F$ and $u$ are $\mathrm{Ad}(T')$-invariant, the inner product $\langle\cdot,\cdot\rangle_u^F$ on $\mathfrak{m}$ is also $\mathrm{Ad}(T')$-invariant. The summands given in the decomposition correspond to different irreducible representations of $T'$, thus it is an orthogonal decomposition with respect to $\langle\cdot,\cdot\rangle_u^F$.
\end{proof} The following lemma does not hold in general without the reversibility assumption. \begin{lemma} \label{lemma-3-6} Keep the above assumptions and notation. Then for any nonzero vector $u\in\hat{\mathfrak{m}}_{\pm\alpha'}= \hat{\mathfrak{g}}_{\pm\alpha'}\cap\mathfrak{m}$ with $\alpha'\neq 0$, and any $\beta'\in\mathfrak{t}\cap\mathfrak{h}$ which is not an even multiple of $\alpha'$, we have $$\langle\hat{\mathfrak{m}}_{\pm\beta'},\hat{\mathfrak{g}}_0 \rangle_u^F=0.$$ In particular, we have $$\langle \hat{\mathfrak{m}}_{\pm\alpha'}, \hat{\mathfrak{g}}_{0}\rangle_u^F=0.$$ \end{lemma} \begin{proof} Without loss of generality, we can assume that $\hat{\mathfrak{m}}_{\pm\beta'}\ne 0$. Then $\dim\hat{\mathfrak{m}}_{\pm\beta'}=2k>0$ is even. Hence there exists an element $g$ in the maximal torus $T_H$ of $H$, and a bi-invariant orthonormal basis $\{u_1,v_1,u_2,v_2,\ldots,u_k,v_k\}$ of $\hat{\mathfrak{g}}_{\pm\beta'}\cap\mathfrak{m}$, such that $\mathrm{Ad}(g)|_{\hat{\mathfrak{g}}_{\pm\alpha'}}=-\mathrm{Id}$, $\mathrm{Ad}(g)|_{\hat{\mathfrak{g}}_{0}\cap\mathfrak{m}}=\mathrm{Id}$, and for each $i$, $\mathrm{Ad}(g)|_{\mathbb{R}u_i+\mathbb{R}v_i}$ is the anticlockwise rotation $R(\theta)$ with angle $\theta\in (0,2\pi)$. Since $F$ is $\mathrm{Ad}(g)$-invariant, for any $w_1\in\mathbb{R}u_i+\mathbb{R}v_i$ and $w_2\in\hat{\mathfrak{g}}_{0}\cap\mathfrak{m}$, we have $$\langle w_1,w_2\rangle_u^F= \langle\mathrm{Ad}(g)w_1,\mathrm{Ad}(g)w_2\rangle_{\mathrm{Ad}(g)u}^F =\langle R(\theta)w_1,w_2\rangle_{-u}^F=\langle R(\theta)w_1,w_2\rangle_u^F.$$ Repeating this procedure, we get $\langle w_1,w_2\rangle_u^F=\langle R(n\theta)w_1,w_2\rangle_u^F$ for each $n\in\mathbb{N}$. So $$\langle w_1,w_2\rangle_u^F=\lim_{n\rightarrow\infty} \langle\frac1n(R(\theta)w_1+\cdots+R(n\theta)w_1),w_2\rangle_u^F=0.$$ The above argument holds for any $i$ between $1$ and $k$. This proves the lemma. \end{proof} The following two lemmas will be repeatedly used in our later discussion.
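The averaging step at the end of the proof above can be checked numerically. The following sketch is our own illustration (not part of the paper): it verifies that the Ces\`{a}ro averages $\frac1N(R(\theta)w_1+\cdots+R(N\theta)w_1)$ tend to zero for a generic angle $\theta\in(0,2\pi)$.

```python
import numpy as np

# Numerical illustration (ours, not part of the proof) of the averaging step
# in Lemma 3-6: the Cesaro averages (1/N) * sum_{n=1}^{N} R(n*theta) w
# tend to zero for any rotation angle theta in (0, 2*pi).
def R(theta):
    """Anticlockwise rotation of the plane by the angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = 2.0                # an arbitrary angle in (0, 2*pi)
w = np.array([1.0, 0.5])   # an arbitrary plane vector
N = 20000
avg = sum(R(n * theta) @ w for n in range(1, N + 1)) / N
print(np.linalg.norm(avg) < 1e-3)   # prints: True
```

Since the partial sums of $e^{in\theta}$ stay bounded for $\theta\notin 2\pi\mathbb{Z}$, the norm of the average decays like $1/N$, which is what the check confirms.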
\begin{lemma}\label{key-lemma-1} Let $F$ be a positively curved homogeneous Finsler metric on the odd dimensional coset space $G/H$. Keep all the relevant notation as before. If $\alpha$ is a root of $\mathfrak{g}$ contained in $\mathfrak{t}\cap\mathfrak{h}$, and it is the only root of $\mathfrak{g}$ contained in $\alpha+(\mathfrak{t}\cap\mathfrak{m})$, then it is a root of $\mathfrak{h}$ and we have $\mathfrak{h}_{\pm\alpha}=\hat{\mathfrak{g}}_{\pm\alpha} =\mathfrak{g}_{\pm\alpha}$. \end{lemma} \begin{proof} We only need to prove that $\alpha$ is a root of $\mathfrak{h}$. The other statement follows easily. Assume conversely that $\alpha$ is not a root of $\mathfrak{h}$. Then $\mathfrak{g}_{\pm\alpha}=\hat{\mathfrak{g}}_{\pm\alpha}$ is contained in $\mathfrak{m}$. By (2) of Lemma \ref{lemma-3-7}, if $\dim\hat{\mathfrak{g}}_0=3$, then there exists a fundamental Cartan subalgebra $\mathfrak{t}$ and a nonzero $u$ in $\mathfrak{t}\cap\mathfrak{m}$, such that \begin{equation}\label{2100} \langle u^\perp\cap\hat{\mathfrak{g}}_0,u\rangle_u^F=0, \end{equation} where $u^\perp\cap\hat{\mathfrak{g}}_0$ is the bi-invariant orthogonal complement of $u$ in $\hat{\mathfrak{g}}_0$. Let $v$ be a nonzero vector in $\mathfrak{g}_{\pm\alpha}$. Since $\alpha\in\mathfrak{t}\cap\mathfrak{h}$, it is easy to see that $u$ and $v$ are linearly independent and commute with each other. Let $\alpha'=\mathrm{pr}_\mathfrak{h}(\alpha)$. Then a direct calculation shows that \begin{equation*} [u,\mathfrak{m}]_{\mathfrak{m}}\subset u^\perp\cap\hat{\mathfrak{g}}_0 +\sum_{\gamma'\neq\alpha'} \hat{\mathfrak{g}}_{\pm\gamma'}. \end{equation*} Thus by (\ref{2100}) and (1) of Lemma \ref{lemma-3-7}, we have \begin{equation}\label{2101} \langle[u,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F=0. \end{equation} On the other hand, a direct calculation also shows that \begin{equation*} {[}v,\mathfrak{m}{]}_\mathfrak{m}\subset\sum_{\gamma'\neq 0} \hat{\mathfrak{g}}_{\pm\gamma'}.
\end{equation*} Hence by (1) of Lemma \ref{lemma-3-7}, we have \begin{equation}\label{2102} \langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F=0. \end{equation} Taking the summation of (\ref{2101}) and (\ref{2102}), we get $U(u,v)=0$. Hence by Theorem \ref{flag-curvature-formula-thm}, we have $K^F(o,u,u\wedge v)=0$. This is a contradiction. \end{proof} \begin{lemma} \label{key-lemma-2} Let $F$ be a reversible positively curved homogeneous Finsler metric on an odd dimensional coset space $G/H$. Keep all the relevant notation as before. Then there does not exist a pair of linearly independent roots $\alpha$ and $\beta$ of $\mathfrak{g}$ such that the following (1)-(4) hold simultaneously: \begin{description} \item{\rm (1)}\quad Neither $\alpha$ nor $\beta$ is a root of $\mathfrak{h}$; \item{\rm (2)}\quad None of $\alpha\pm\beta$ is a root of $\mathfrak{g}$; \item{\rm (3)}\quad $\pm\alpha$ are the only roots of $\mathfrak{g}$ in $\mathbb{R}\alpha+\mathfrak{t}\cap\mathfrak{m}$; \item{\rm (4)}\quad $\pm\beta$ are the only roots of $\mathfrak{g}$ in $\pm\beta+\mathbb{R}\alpha+\mathfrak{t}\cap\mathfrak{m}$. \end{description} \end{lemma} \begin{proof} Assume conversely that there are roots $\alpha$ and $\beta$ of $\mathfrak{g}$ satisfying (1)-(4) of the lemma. Denote $\alpha'=\mathrm{pr}_{\mathfrak{h}}(\alpha)$ and $\beta'=\mathrm{pr}_{\mathfrak{h}}(\beta)$. Then $\mathfrak{g}_{\pm\alpha}$ must be contained in $\mathfrak{m}$; otherwise by (3) of the lemma, $\mathfrak{g}_{\pm\alpha}=\hat{\mathfrak{g}}_{\pm\alpha'}$ is a root plane in $\mathfrak{h}$, hence $\alpha\in[\mathfrak{g}_{\pm\alpha},\mathfrak{g}_{\pm\alpha}] \subset\mathfrak{h}$ is a root of $\mathfrak{h}$, which is a contradiction to (1). Similarly, by (4) of the lemma, $\mathfrak{g}_{\pm\beta}=\hat{\mathfrak{g}}_{\pm\beta'}$ is also contained in $\mathfrak{m}$. First we consider the case that $\alpha'\neq 0$, i.e., $\alpha$ is not contained in $\mathfrak{t}\cap\mathfrak{m}$.
Let $u$ and $v$ be any nonzero vectors in $\mathfrak{g}_{\pm\alpha}$ and $\mathfrak{g}_{\pm\beta}$ respectively. By (1) of the lemma and the above argument, they must be linearly independent and commute with each other. Let $u'$ be another nonzero vector in $\mathfrak{g}_{\pm\alpha}$ such that $\langle u,u'\rangle_{\mathrm{bi}}=0$. By the $\mathrm{Ad}(T_H)$-invariance of $F|_{\mathfrak{g}_{\pm\alpha}}$, this restriction coincides with that of the bi-invariant inner product up to a scalar. So we have \begin{equation}\label{2109} \langle u^\perp\cap\mathfrak{g}_{\pm\alpha},u\rangle_u^F=\langle\mathbb{R}u',u\rangle_u^F=0, \end{equation} where $u^\perp\cap\mathfrak{g}_{\pm\alpha}=\mathbb{R}u'$ is the bi-invariant orthogonal complement of $u$ in $\mathfrak{g}_{\pm\alpha}$. Let $\mathfrak{t}'$ be the bi-invariant orthogonal complement of $\alpha'$ in $\mathfrak{t}\cap\mathfrak{h}$, and $\mathrm{pr}_{\mathfrak{t}'}$ be the orthogonal projection to $\mathfrak{t}'$ with respect to the bi-invariant inner product. By Lemma \ref{lemma-3-8}, $\mathfrak{m}$ can be $g_u^F$-orthogonally decomposed as the sum of $$\hat{\hat{\mathfrak{m}}}_{\pm\gamma''}= (\sum_{\mathrm{pr}_{\mathfrak{t}'}(\gamma)=\gamma''}\mathfrak{g}_{\pm\gamma}) \cap\mathfrak{m}$$ for all different $\{\pm\gamma''\}\subset\mathfrak{t}'$. In particular, (3) and (4) of the lemma indicate that \begin{equation}\label{2110} \hat{\mathfrak{g}}_0=\mathfrak{t}\cap\mathfrak{m},\mbox{ } \hat{\hat{\mathfrak{m}}}_0=\mathfrak{t}\cap\mathfrak{m}+\mathfrak{g}_{\pm\alpha}, \mbox{ and } \hat{\hat{\mathfrak{m}}}_{\pm\beta''}= \mathfrak{g}_{\pm\beta}, \end{equation} where $\beta''=\mathrm{pr}_{\mathfrak{t}'}(\beta)$.
Now (1), (2) of the lemma and a direct calculation imply that $$[u,\mathfrak{m}]_\mathfrak{m}\subset\mathfrak{t}\cap\mathfrak{m}+u^\perp\cap\mathfrak{g}_{\pm\alpha} +\sum_{\gamma''\neq 0,\pm\beta''} \hat{\hat{\mathfrak{m}}}_{\pm\gamma''}.$$ So by Lemma \ref{lemma-3-6}, Lemma \ref{lemma-3-8} and (\ref{2109}), we have \begin{equation}\label{2120} \langle[u,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F=0. \end{equation} On the other hand, a direct calculation also shows that $$[v,\mathfrak{m}]_\mathfrak{m}\subset \hat{\mathfrak{g}}_0 + \sum_{\gamma''\neq 0}\hat{\hat{\mathfrak{m}}}_{\pm\gamma''}.$$ Thus by Lemma \ref{lemma-3-6} and Lemma \ref{lemma-3-8}, we have \begin{equation}\label{2121} \langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F=0. \end{equation} Taking the summation of (\ref{2120}) and (\ref{2121}), we get $U(u,v)=0$. Hence by Theorem \ref{flag-curvature-formula-thm}, we have $K^F(o,u,u\wedge v)=0$. This is a contradiction. \end{proof} Notice that Lemmas \ref{lemma-3-7}, \ref{lemma-3-8} and \ref{key-lemma-1} do not require $F$ to be reversible. For most cases in later discussions, the key lemmas will be enough to deduce our classification. But in some cases (Subsection 5.5, for example), we need to use Theorem \ref{flag-curvature-formula-thm} to deduce some more delicate results to complete the proofs. \section{Case III: the general reduction and the classical groups} In this section, we consider Case III for the classical groups.
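The projection arithmetic that recurs in all the subcases below can be illustrated numerically. The following sketch is ours (the helper name `pr_complement` is not from the paper): in Case III, $\mathfrak{t}\cap\mathfrak{m}$ is spanned by $\alpha-\beta$, and $\alpha'$ is the orthogonal projection of $\alpha$ (equivalently of $\beta$) to the complement of $\mathfrak{t}\cap\mathfrak{m}$; we check this for the pair $\alpha=e_1-e_4$, $\beta=e_3-e_2$ from the $A_n$ discussion below.

```python
import numpy as np

# Numerical sketch (ours) of the root-projection arithmetic used below.
# In Case III, t cap m is spanned by alpha - beta, and alpha' is the
# orthogonal projection of alpha (equivalently beta) to its complement.
def pr_complement(x, v):
    """Orthogonal projection of x onto the hyperplane orthogonal to v."""
    v = v / np.linalg.norm(v)
    return x - np.dot(x, v) * v

alpha = np.array([1.0, 0.0, 0.0, -1.0])    # e1 - e4
beta  = np.array([0.0, -1.0, 1.0, 0.0])    # e3 - e2
v = alpha - beta                           # spans t cap m: e1 + e2 - e3 - e4

alpha_prime = pr_complement(alpha, v)
# alpha and beta project to the same alpha' = (e1 - e2 + e3 - e4)/2.
assert np.allclose(alpha_prime, pr_complement(beta, v))
print(alpha_prime)
```

The printed vector is $\frac12(e_1-e_2+e_3-e_4)$, matching the value of $\alpha'$ computed in Subcase 1 for $A_n$.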
\subsection{The general reduction} Assume that $(G/H,F)$ is an odd dimensional positively curved reversible homogeneous Finsler space in Case III, i.e., with respect to a bi-invariant decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$, and a fundamental Cartan subalgebra $\mathfrak{t}$, there exists a pair of roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from the same simple factor, with $\alpha\neq\pm\beta$, such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. Obviously, in this case $\mathfrak{t}\cap\mathfrak{m}$ is spanned by $\alpha-\beta$. We first prove the following lemma. \begin{lemma}\label{lemma-4-1} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space in Case III. Keep all the relevant notation. Then $(G/H,F)$ is equivalent to a positively curved reversible homogeneous Finsler space $(G'/H',F')$ in which $G'$ is a compact simple Lie group. \end{lemma} \begin{proof} Suppose $\mathfrak{g}$ has a direct sum decomposition $$\mathfrak{g}=\mathfrak{g}_0\oplus\mathfrak{g}_1\oplus\cdots\oplus\mathfrak{g}_n,$$ where $\mathfrak{g}_0$ is an abelian subalgebra, and for $i>0$, $\mathfrak{g}_i$ is a simple ideal of $\mathfrak{g}$. We may assume that $\alpha$ and $\beta$ are roots of $\mathfrak{g}_1$. Then obviously the abelian factor $\mathfrak{g}_0$ is contained in $\mathfrak{h}$. Let $\gamma$ be a root of $\mathfrak{g}_i$ with $i>1$. Then $\gamma$ is the only root contained in $\gamma+\mathfrak{t}\cap\mathfrak{m}$. Thus by Lemma \ref{key-lemma-1}, $\gamma$ is a root of $\mathfrak{h}$ and $\mathfrak{g}_{\pm\gamma}=\mathfrak{h}_{\pm\gamma}$ is contained in $\mathfrak{h}$. Since each simple factor $\mathfrak{g}_i$ with $i>1$ is algebraically generated by its root planes, we have $\mathfrak{g}_i\subset\mathfrak{h}$ for $i>1$. Let $G'/H'$ be the homogeneous space corresponding to the pair $(\mathfrak{g}_1,\mathfrak{h}_1)$, where $\mathfrak{h}_1=\mathfrak{h}\cap\mathfrak{g}_1$.
Then $G'/H'$ admits a homogeneous Finsler metric $F'$ naturally induced by $F$, such that $(G/H,F)$ is equivalent to $(G'/H',F')$. This completes the proof of the lemma. \end{proof} Since Lemma \ref{key-lemma-1} holds without the reversibility assumption, Lemma \ref{lemma-4-1} is also valid for non-reversible metrics. In the following we will start a case-by-case consideration of the compact simple Lie algebras. However, there are some common situations which can be uniformly dealt with. We summarize them as the following lemma. \begin{lemma}\label{lemma-999} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space in Case III, with compact simple Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$. Then for any two different roots $\alpha$ and $\beta$ such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_{\mathfrak{h}}(\beta)=\alpha'$ is a root of $\mathfrak{h}$, the angle between $\alpha$ and $\beta$ can not be $\frac{\pi}{3}$ or $\frac{2\pi}{3}$. \end{lemma} \begin{proof} First we assume that $\mathfrak{g}\neq \mathfrak{g}_2$ and prove that the angle between $\alpha$ and $\beta$ can not be $\frac \pi 3$. Assume conversely that the angle between $\alpha$ and $\beta$ is $\frac \pi 3$. Let $\mathfrak{t}'=\alpha'^\perp\cap\mathfrak{t}\cap\mathfrak{h}= (\mathbb{R}\alpha+\mathbb{R}\beta)^\perp\cap\mathfrak{t} $ be the bi-invariant orthogonal complement of $\alpha'$ in $\mathfrak{t}\cap\mathfrak{h}$, and $T'$ be the corresponding torus in $H$. Notice that there is a decomposition $\mathrm{Lie}(C_G(T'))=\mathfrak{t}'\oplus A_2$, such that $\alpha$ and $\beta$ are roots of the $A_2$-factor.
By Lemma \ref{totally-geodesic-lemma}, there is a positively curved homogeneous Finsler space $(G''/H'',F'')$, where $\mathfrak{g}''=\mathrm{Lie}(G'')=\mathfrak{su}(3)$, and $\mathfrak{h}''=\mathrm{Lie}(H'')=A_1$ is linearly spanned by $$ w_1=\sqrt{-1}\left( \begin{array}{ccc} -2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), w_2=\sqrt{-1}\left( \begin{array}{ccc} 0 & \bar{a} & \bar{b} \\ a & 0 & 0 \\ b & 0 & 0 \\ \end{array} \right), \mbox{ and }$$ $$w_3=\frac13[w_1,w_2]=\left( \begin{array}{ccc} 0 & \bar{a} & \bar{b} \\ -a & 0 & 0 \\ -b & 0 & 0 \\ \end{array} \right), $$ where $a,b\in\mathbb{C}$ and $(a,b)\neq (0,0)$. But then $[w_2,w_3]$ is not contained in $\mathfrak{h}''$. This is a contradiction. Now we prove that the angle between $\alpha$ and $\beta$ can not be $\frac{2\pi}{3}$. Assume conversely that it is $\frac{2\pi}{3}$. Then $\alpha'=\frac12(\alpha+\beta)$ is a root of $\mathfrak{h}$. But then $\gamma=2\alpha'=\alpha+\beta$ is a root of $\mathfrak{g}$ contained in $\mathfrak{t}\cap\mathfrak{h}$, and it is the only root contained in $\gamma+(\mathfrak{t}\cap\mathfrak{m})$. So by Lemma \ref{key-lemma-1}, $\gamma=2\alpha'$ is also a root of $\mathfrak{h}$. Since the root system of the compact Lie algebra $\mathfrak{h}$ is reduced, $\alpha'$ and $2\alpha'$ can not both be roots of $\mathfrak{h}$. This is a contradiction. Finally, we assume that $\mathfrak{g}=\mathfrak{g}_2$ and prove that the angle between $\alpha$ and $\beta$ can not be $\frac \pi 3$. If $\alpha$ and $\beta$ are short roots, then they can be replaced with two long roots with angle $\frac{2\pi}{3}$, which has already been proven to be impossible. If $\alpha$ and $\beta$ are long roots, then $\alpha'=\frac12 (\alpha+\beta)$ is a root of $\mathfrak{h}$. By Lemma \ref{key-lemma-1} and a similar argument as above, the short root $\gamma=\frac13(\alpha+\beta)=\frac23\alpha'$ is also a root of $\mathfrak{h}$. This is a contradiction. \end{proof} Now we start the case by case discussion.
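The bracket computations in the proof of Lemma \ref{lemma-999} can be verified numerically. The following sketch is our own check (not part of the paper), with the sample values $a=b=1$: it confirms that $[w_1,w_2]=3w_3$ and $[w_3,w_1]=3w_2$ lie in the span of $w_1,w_2,w_3$, while $[w_2,w_3]$ does not, so these three matrices fail to close into a Lie subalgebra.

```python
import numpy as np

# Numerical sanity check (ours) of the bracket relations above, with a = b = 1.
a, b = 1.0, 1.0
w1 = 1j * np.diag([-2.0, 1.0, 1.0])
w2 = 1j * np.array([[0, np.conj(a), np.conj(b)],
                    [a, 0, 0],
                    [b, 0, 0]])
w3 = np.array([[0, np.conj(a), np.conj(b)],
               [-a, 0, 0],
               [-b, 0, 0]], dtype=complex)

def bracket(x, y):
    """Matrix commutator [x, y] = xy - yx."""
    return x @ y - y @ x

# w3 = (1/3)[w1, w2], and [w3, w1] = 3 w2 lies in span{w1, w2, w3} ...
assert np.allclose(bracket(w1, w2), 3 * w3)
assert np.allclose(bracket(w3, w1), 3 * w2)

# ... but [w2, w3] does not: its least-squares residual against the span is
# nonzero, so w1, w2, w3 cannot span a Lie subalgebra.
target = bracket(w2, w3).flatten()
basis = np.stack([w1.flatten(), w2.flatten(), w3.flatten()], axis=1)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
residual = np.linalg.norm(basis @ coef - target)
print(residual > 1e-6)   # prints: True
```

Here the residual is computed even over complex coefficients, so a nonzero value in particular rules out a real linear combination.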
Notice that in the following, we always assume that the relevant coset space has been endowed with an invariant reversible Finsler metric with positive flag curvature. If a contradiction arises, then we can conclude that the coset space cannot be positively curved in the reversible homogeneous sense. In each case, we use the standard presentation of the root systems (see Section \ref{appendix-section}), and divide the discussion into subcases with respect to the rank of $G$, the long/short root choices for $\alpha$ and $\beta$, and the angle between $\alpha$ and $\beta$. Using Weyl group actions, together with outer automorphisms for $D_n$ and $E_6$, the subcases can be reduced to the following. \subsection{The case $\mathfrak{g}=A_n$} We only need to consider the following subcases. {\bf Subcase 1.}\quad $n=3$, and $\alpha=e_1-e_4$, $\beta=e_3-e_2$. In this case, we have $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2-e_3-e_4)$ and it is easy to see that $\alpha'=\frac12(e_1-e_2+e_3-e_4)$ is a root of $\mathfrak{h}$. By Lemma \ref{key-lemma-1}, $e_1-e_2$ and $e_3-e_4$ are roots of $\mathfrak{h}$. Notice that $\hat{\mathfrak{g}}_{\pm(e_1-e_2)}= \mathfrak{g}_{\pm(e_1-e_2)}$ is a root plane of $\mathfrak{h}$. Let $\beta'=\frac12(-e_1+e_2+e_3-e_4)\in\mathfrak{t}\cap\mathfrak{h}$. Then any nonzero $u\in\mathfrak{g}_{\pm(e_1-e_2)}\subset\mathfrak{h}$ defines a linear isomorphism \begin{equation}\label{0008} \mathrm{ad}(u):\hat{\mathfrak{g}}_{\pm\alpha'}=\mathfrak{g}_{\pm(e_1-e_4)}+\mathfrak{g}_{\pm(e_2-e_3)} \to\hat{\mathfrak{g}}_{\pm\beta'}=\mathfrak{g}_{\pm(e_2-e_4)}+\mathfrak{g}_{\pm(e_1-e_3)}. \end{equation} Since $u\in\mathfrak{h}$, $\mathrm{ad}(u)$ preserves the bi-invariant orthogonal decomposition. So $\beta'=\frac12(-e_1+e_2+e_3-e_4)$ is also a root of $\mathfrak{h}$. Thus $\mathfrak{h}$ is of type $B_2$, with root system $$\{\pm(e_1-e_2),\pm(e_3-e_4), \pm\alpha',\pm\beta'\}.$$ Now we prove that up to conjugation, $\mathfrak{h}$ is uniquely determined.
By (\ref{0008}), it is easy to see that $\mathfrak{h}$ is uniquely determined by $\mathfrak{h}_{\pm\alpha'}$. Let $\mathfrak{g}'$ be the subalgebra of $\mathfrak{g}$ isomorphic to $A_1\oplus A_1$, defined by $$\mathfrak{g}'=\mathbb{R}\alpha+\mathbb{R}\beta+ \mathfrak{g}_{\pm(e_1-e_4)}+\mathfrak{g}_{\pm(e_2-e_3)},$$ and let $\mathfrak{h}'$ be the subalgebra of $\mathfrak{g}'$ defined by $\mathfrak{h}'=\mathbb{R}\alpha'+\mathfrak{h}_{\pm\alpha'}$. Notice that $\mathfrak{t}'=\mathfrak{t}\cap\mathfrak{g}'=\mathbb{R}\alpha+\mathbb{R}\beta$ is a fundamental Cartan subalgebra of $\mathfrak{g}'$. Then we also have the induced bi-invariant orthogonal decomposition $\mathfrak{g}'=\mathfrak{h}'+\mathfrak{m}'$, such that $\mathfrak{m}'=\mathfrak{m}\cap\mathfrak{g}'$ and $\mathfrak{t}'\cap\mathfrak{m}'=\mathfrak{t}\cap\mathfrak{m}$. Notice also that $\mathfrak{h}'$ can not have nonzero intersection with either of the two simple factors of $\mathfrak{g}'$; otherwise, by $\mathrm{Ad}(\exp\mathfrak{h}')$-actions, the whole subalgebra $\mathfrak{h}'$ would coincide with that factor, contradicting the fact that $\mathfrak{h}'$ is diagonal in $\mathfrak{g}'$. The following lemma will be useful. \begin{lemma} \label{diagonal-A-1-conjugation-lemma} Let $\mathfrak{g}'=\mathfrak{g}_1\oplus\mathfrak{g}_2=A_1\oplus A_1$ be endowed with a bi-invariant inner product. Assume that $\mathfrak{t}'$ is a Cartan subalgebra, and $\mathfrak{h}'$ and $\mathfrak{h}''$ are subalgebras of $\mathfrak{g}'$ isomorphic to $A_1$ satisfying the following conditions: \begin{description} \item{\rm (1)}\quad $\mathfrak{h}'\cap\mathfrak{t}'=\mathfrak{h}''\cap\mathfrak{t}'$ is one dimensional. \item{\rm (2)}\quad $\mathfrak{h}'\cap\mathfrak{g}_i= \mathfrak{h}''\cap\mathfrak{g}_i=0$, $i=1$, $2$.
\item{\rm (3)}\quad $\mathfrak{h}'\cap(\mathfrak{h}'\cap\mathfrak{t}')^\perp \subset\mathfrak{t}'^\perp$, and $\mathfrak{h}''\cap(\mathfrak{h}''\cap\mathfrak{t}')^\perp\subset \mathfrak{t}'^\perp$, where the orthogonal complements are taken with respect to the chosen bi-invariant inner product on $\mathfrak{g}$. \end{description} Then there is an $\mathrm{Ad}(\exp\mathfrak{t}')$-action which maps $\mathfrak{h}'$ onto $\mathfrak{h}''$. \end{lemma} \begin{proof} We first give a definition. For a compact Lie algebra of type $A_1$ endowed with a bi-invariant inner product, we call an orthogonal basis $\{u_1,u_2,u_3\}$ standard, if all the basis vectors have the same length, and they satisfy the condition $[u_i,u_j]=u_k$ for $(i,j,k)=(1,2,3)$, $(2,3,1)$ or $(3,1,2)$. The length $c$ of each $u_i$ is a constant which only depends on the scale of the bi-invariant inner product. The bracket $u'_3=[u'_1,u'_2]$ of any two orthogonal vectors with length $c$ is also a vector with length $c$, and $\{u'_1,u'_2,u'_3\}$ is a standard basis as well. Now we go back to the proof. Let $c_1$ and $c_2$ be the lengths of the standard basis vectors for $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively. Then we can choose standard bases $\{u_1,u_2,u_3\}$ and $\{v_1,v_2,v_3\}$ for $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively, as follows. First, we choose vectors $u_1$ and $v_1$ from $\mathfrak{t}'\cap\mathfrak{g}_1$ and $\mathfrak{t}'\cap\mathfrak{g}_2$ with lengths $c_1$ and $c_2$, respectively. Then we freely choose any vector $u_2$ of length $c_1$ from $\mathfrak{t}'^\perp\cap\mathfrak{g}_1$ and set $u_3=[u_1,u_2]$. By (2) and (3) in the lemma, we can find a vector of $\mathfrak{h}'$ in $u_2+(\mathfrak{g}_2\cap\mathfrak{t}'^\perp)$. Its $\mathfrak{g}_2$-component is nonzero, and can be positively scaled to a vector $v_2$ of length $c_2$. Then $v_1$, $v_2$ and $v_3=[v_1,v_2]$ form a standard basis for $\mathfrak{g}_2$.
Now suppose $\mathfrak{h}'$ is linearly spanned by $u_1+av_1$, $u_2+bv_2$ and their bracket $$[u_1+av_1,u_2+bv_2]=u_3+ab v_3,$$ where $a$ is a fixed nonzero constant and $b>0$. As a Lie algebra, $\mathfrak{h}'$ contains $[u_2+bv_2,u_3+abv_3]=u_1+ab^2 v_1$, hence $b^2=1$ and $b=1$. With $\mathfrak{h}'$ changed to $\mathfrak{h}''$, the same argument as above also gives standard bases $\{u'_1,u'_2,u'_3\}$ and $\{v'_1,v'_2,v'_3\}$ for $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively, such that $u'_i=u_i$ for each $i$, and $v'_1=v_1$. Then it is easy to see that there exists a real number $t$ such that $\mathrm{Ad}(\exp(tv_1))$ maps $v_2$ to $v'_2$, $v_3$ to $v'_3$, and keeps $v_1$ and all the vectors $u_i$ unchanged. So it maps $\mathfrak{h}'$ isomorphically onto $\mathfrak{h}''$. \end{proof} By Lemma \ref{diagonal-A-1-conjugation-lemma}, it is easy to see that, up to the $\mathrm{Ad}(\exp\mathfrak{t})$-actions which preserve all the roots and root planes of $\mathfrak{g}$, $\mathfrak{h}_{\pm\alpha'}$ is uniquely determined. So $\mathfrak{h}$ is conjugate to the standard subalgebra $\mathfrak{sp}(2)$ in $\mathfrak{su}(4)$, which makes $G/H$ a symmetric space. Since $A_3=D_3$, $G/H$ is equivalent to the standard Riemannian sphere $S^5=\mathrm{SO}(6)/\mathrm{SO}(5)$ with constant positive curvature. In this subcase, we can also directly prove that $G/H$ is a symmetric homogeneous space, that is, $[\mathfrak{m},\mathfrak{m}]\subset\mathfrak{h}$, and then apply the classification of symmetric homogeneous spaces to get the classification. However, this argument is not valid for some other subcases below. {\bf Subcase 2.}\quad $n=4$, and $\alpha=e_1-e_4$, $\beta=e_3-e_2$. In this case, we have $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2-e_3-e_4)$, and it is easily seen that $\alpha'=\frac12(e_1-e_2+e_3-e_4)$ is a unit root of $\mathfrak{h}$.
Notice that $\mathrm{pr}_\mathfrak{h}(e_i-e_5)$, $1\leq i\leq 4$, can not be a root of $\mathfrak{h}$, since it is not orthogonal to $\alpha'$ and its length is $\frac{\sqrt{7}}2$. Thus any root of $\mathfrak{h}$ must be of the form $\mathrm{pr}_\mathfrak{h}(e_i-e_j)$ with $1\leq i<j\leq 4$. A similar argument as in Subcase 1 then shows that the root system of $\mathfrak{h}$ is of type $B_2=C_2$, i.e., up to the $\mathrm{Ad}(\mathrm{SU}(5))$-actions, $\mathfrak{h}=\mathbb{R}(e_1+e_2+e_3+e_4-4e_5)\oplus\mathfrak{h}'$, where $\mathfrak{h}'$ is the standard subalgebra $\mathfrak{sp}(2)$ in $\mathfrak{su}(4)$ corresponding to $e_i$ with $1\leq i\leq 4$. So $G/H$ is equivalent to the Berger space $\mathrm{SU}(5)/\mathrm{Sp}(2)\mathrm{U}(1)$, which admits positively curved normal homogeneous (Riemannian) metrics. {\bf Subcase 3.}\quad $n>4$, and $\alpha=e_1-e_4$, $\beta=e_3-e_2$. We have $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2-e_3-e_4)$, and it is easily seen that $\alpha'=\frac12(e_1-e_2+e_3-e_4)$ is a unit root of $\mathfrak{h}$. Then it is easy to check that the roots $\gamma_1=e_1-e_5$ and $\gamma_2=e_2-e_6$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}, hence the corresponding coset space does not admit any invariant reversible Finsler metric with positive flag curvature. \subsection{The case $\mathfrak{g}=B_n$ with $n>1$} We only need to consider the following subcases. {\bf Subcase 1.}\quad $\alpha=e_1+e_2$, $\beta=e_2$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}e_1$ and $\alpha'=e_2$ is a root of $\mathfrak{h}$, with $$\mathfrak{h}_{\pm e_2}\subset\hat{\mathfrak{g}}_{\pm e_2}=\mathfrak{g}_{\pm(e_2-e_1)}+\mathfrak{g}_{\pm e_2}+\mathfrak{g}_{\pm (e_2+e_1)}.$$ Denote $\mathfrak{g}'=\mathbb{R}e_1+\mathbb{R}e_2+ \sum_{a,b}\mathfrak{g}_{\pm(a e_1+b e_2)}$ and $\mathfrak{g}''=\mathbb{R}e_1+\mathfrak{g}_{\pm e_1}$. Then $\mathfrak{g}'$ and $\mathfrak{g}''$ are Lie algebras of types $B_2=\mathfrak{so}(5)$ and $A_1$, respectively.
The subalgebra $\mathfrak{h}\cap\mathfrak{g}'$ of type $A_1$ is linearly spanned by $$ u=\left( \begin{array}{ccccc} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 1 & 0 \\ \end{array} \right)\in\mathbb{R}e_2,\quad v=\left( \begin{array}{ccccc} 0 & 0 & 0 & -a & -a' \\ 0 & 0 & 0 & -b & -b' \\ 0 & 0 & 0 & -c & -c' \\ a & b & c & 0 & 0 \\ a' & b' & c' & 0 & 0 \\ \end{array} \right)\in\mathfrak{h}_{\pm e_2}, $$ and $$ w=[u,v]=\left( \begin{array}{ccccc} 0 & 0 & 0 & a' & -a \\ 0 & 0 & 0 & b' & -b \\ 0 & 0 & 0 & c' & -c \\ -a' & -b' & -c' & 0 & 0 \\ a & b & c & 0 & 0 \\ \end{array} \right), $$ in which $(a,b,c,a',b',c')$ is a nonzero vector in $\mathbb{R}^6$. Since $[v,w]\in\mathfrak{h}\cap\mathfrak{g}'$, $(a,b,c)$ and $(a',b',c')$ are linearly dependent vectors. Using a suitable isomorphism $l\in\mathrm{Ad}(\exp\mathfrak{g}'')$ of $\mathfrak{g}$, we can make $b=c=b'=c'=0$, i.e., up to equivalence, we can assume that $\mathfrak{h}_{\pm e_2}=\mathfrak{g}_{\pm e_2}$. Thus $\mathfrak{g}_{\pm (e_2\pm e_1)}\subset\mathfrak{m}$. By Lemma \ref{key-lemma-1}, any root $\pm e_i\pm e_j$ of $\mathfrak{g}$ with $1<i<j$ must be a root of $\mathfrak{h}$, and we have $\mathfrak{h}_{\pm(e_i\pm e_j)}=\mathfrak{g}_{\pm(e_i\pm e_j)}=\hat{\mathfrak{g}}_{\pm(e_i\pm e_j)}$. For any nonzero vector $w'\in\mathfrak{g}_{\pm(e_2-e_i)}$ with $i>2$, the linear isomorphism $\mathrm{ad}(w')$ between $\hat{\mathfrak{g}}_{\pm e_2}$ and $\hat{\mathfrak{g}}_{\pm e_i}$ shows that $\mathfrak{g}_{\pm e_i}\subset\mathfrak{h}$. Moreover, for any $i\geq 2$, we have $\mathfrak{g}_{\pm(e_i\pm e_1)}\subset\mathfrak{m}$. To summarize, we have \begin{equation}\label{0009} \mathfrak{m}=\mathbb{R}e_1+\mathfrak{g}_{\pm e_1}+\sum_{i=2}^n(\mathfrak{g}_{\pm(e_i+e_1)}+ \mathfrak{g}_{\pm(e_i-e_1)}).
\end{equation} Let $\{u,u'\}$ be a bi-invariant orthonormal basis of $\mathfrak{g}_{\pm (e_1+e_2)}$ and choose a nonzero vector $v$ from $\mathfrak{g}_{\pm (e_1-e_2)}$ such that $\langle u',v\rangle_u^F=0$. Since the Minkowski norm $F|_{\mathfrak{g}_{\pm(e_1+e_2)}}$ is $\mathrm{Ad}(\exp(\mathbb{R}e_2))$-invariant, it coincides with the restriction of the bi-invariant inner product up to a scalar. So we have \begin{equation}\label{3599} \langle u',u\rangle_u^F=\langle[u,e_1],u\rangle_u^F=\langle[u,e_2],u\rangle_u^F=0. \end{equation} Now a direct calculation shows that \begin{equation*} [u,\mathfrak{m}]_\mathfrak{m}\subset \mathbb{R}[e_1,u]+\mathbb{R}e_1\subset\mathbb{R}u' +\hat{\mathfrak{g}}_{0}. \end{equation*} So by Lemma \ref{lemma-3-6}, $$\langle v,\hat{\mathfrak{g}}_0\rangle_u^F=\langle u,\hat{\mathfrak{g}}_0\rangle_u^F=0.$$ By our assumptions on $u$ and $v$, we have \begin{equation}\label{3590} \langle[u,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F =\langle\mathbb{R}u',u\rangle_u^F = 0, \end{equation} and \begin{equation}\label{3601} \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F =\langle\mathbb{R}u',v\rangle_u^F+ \langle\hat{\mathfrak{g}}_0,v\rangle_u^F=0. \end{equation} Since $e_2\in\mathfrak{h}$, by Theorem 1.3 of \cite{DH2004}, we have \begin{equation}\label{3602} \langle[e_2,v],u\rangle^{F}_u =-\langle[e_2,u],v\rangle^{F}_u-2C_u^{F}(u,v,[e_2,u]). \end{equation} Since $[e_2,u]\in\mathbb{R}u'$ and $\langle u',v\rangle_u^F=0$, the first term on the right side of the above equation vanishes. By the properties of the Cartan tensor, $C^F_u(u,\cdot,\cdot)\equiv 0$, so the second term also vanishes. Thus we have \begin{equation}\label{3603} \langle [e_1,v],u\rangle_u^F=\langle[e_2,v],u\rangle_u^F=0.
\end{equation} A direct calculation then shows that $$[v,\mathfrak{m}]_\mathfrak{m}\subset \mathbb{R}[e_1,v]+\hat{\mathfrak{g}}_0.$$ So by Lemma \ref{lemma-3-6} and (\ref{3603}), we have \begin{equation}\label{3604} \langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle\mathbb{R}[e_1,v], u\rangle_u^F+\langle\hat{\mathfrak{g}}_0,u\rangle_u^F=0. \end{equation} Taking the summation of (\ref{3590}), (\ref{3601}) and (\ref{3604}), we get $U(u,v)=0$. Hence by Theorem \ref{flag-curvature-formula-thm}, we have $K^F(o,u,u\wedge v)=0$. Therefore the corresponding coset space does not admit any invariant reversible Finsler metric with positive flag curvature. {\bf Subcase 2.}\quad $\alpha=e_1+e_2$, $\beta=e_2-e_1$. This subcase has been covered by Subcase 1. {\bf Subcase 3.}\quad $n=4$, and $\alpha=e_1+e_2$, $\beta=-e_3-e_4$. In this case, we have $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3+e_4)$, and it is easily seen that $\alpha'=\frac12(e_1+e_2-e_3-e_4)$ is a root of $\mathfrak{h}$ with $\mathfrak{h}_{\pm\alpha'}\subset\hat{\mathfrak{g}}_{\pm\alpha'} =\mathfrak{g}_{\pm(e_1+e_2)}+\mathfrak{g}_{\pm(e_3+e_4)}$. The argument here is very similar to that of Subcase 1 for $A_n$. Obviously $\mathfrak{h}_{\pm\alpha'}$ is not a root plane of $\mathfrak{g}$. By Lemma \ref{key-lemma-1}, if $1\leq i<j\leq 4$, then the root $e_i-e_j$ of $\mathfrak{g}$ is also a root of $\mathfrak{h}$ with $\mathfrak{h}_{\pm(e_i-e_j)}=\mathfrak{g}_{\pm(e_i-e_j)}= \hat{\mathfrak{g}}_{\pm(e_i-e_j)}$. For any nonzero vector $u\in\mathfrak{g}_{\pm(e_i-e_j)}\subset\mathfrak{h}$ with $(i,j)=(2,3)$ or $(2,4)$, the action $\mathrm{ad}(u)$ easily shows that both $\beta'=\frac12(e_1+e_3-e_2-e_4)$ and $\gamma'=\frac12(e_1+e_4-e_2-e_3)$ are also roots of $\mathfrak{h}$. Hence $\mathfrak{h}$ is of type $B_3$, and it is uniquely determined by the choice of $\mathfrak{h}_{\pm\alpha'}$.
By Lemma \ref{diagonal-A-1-conjugation-lemma}, up to the $\mathrm{Ad}(G)$-action, we can assume $\mathfrak{h}$ to be the standard subalgebra such that the pair $(\mathfrak{g},\mathfrak{h})$ defines the homogeneous sphere $S^{15}=\mathrm{Spin}(9)/\mathrm{Spin}(7)$. So in this subcase $(G/H,F)$ must be equivalent to this homogeneous sphere, on which there exist positively curved homogeneous Riemannian metrics. {\bf Subcase 4.}\quad $n>4$, and $\alpha=e_1+e_2$, $\beta=-e_3-e_4$. We have $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3+e_4)$, and it is easily seen that $\alpha'=\frac12(e_1+e_2-e_3-e_4)$ is a unit root of $\mathfrak{h}$. Then the roots $\gamma_1=e_1+e_5$ and $\gamma_2=e_1-e_5$ satisfy (1)-(4) of Lemma \ref{key-lemma-2}. Hence the corresponding coset space does not admit any invariant reversible Finsler metric with positive flag curvature. {\bf Subcase 5.}\quad $n=3$, and $\alpha=e_1+e_2$, $\beta=-e_3$. It is easily seen that $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3)$ and $\alpha'=\frac13(e_1+e_2-2e_3)$ is a root of $\mathfrak{h}$. The argument here is similar to that of Subcase 3. By Lemma \ref{key-lemma-1} and Lemma \ref{trick-lemma-0}, the root system of $\mathfrak{h}$ contains the roots $$\pm(e_i-e_j), \,\,1\leq i<j\leq 3, $$ and $$\frac13(e_1+e_2+e_3)-e_i,\,\,1\leq i\leq 3.$$ The subalgebra $\mathfrak{h}$ is of type $\mathfrak{g}_2$, and is uniquely determined by the choice of $$\mathfrak{h}_{\pm\alpha'}\subset\hat{\mathfrak{g}}_{\pm\alpha'}= \mathfrak{g}_{\pm(e_1+e_2)}+\mathfrak{g}_{\pm e_3}.$$ By Lemma \ref{diagonal-A-1-conjugation-lemma}, up to the $\mathrm{Ad}(G)$-action, there exists a unique $\mathfrak{h}$, and the corresponding coset space is the homogeneous sphere $S^7=\mathrm{Spin}(7)/\mathrm{G}_2$. Notice that in this case the isotropy action is transitive, so any homogeneous Finsler metric on it must be Riemannian with positive constant curvature.
Consequently in this subcase $(G/H,F)$ is equivalent to the Riemannian homogeneous sphere $S^7=\mathrm{Spin}(7)/\mathrm{G}_2$ of positive constant curvature. {\bf Subcase 6.}\quad $n>3$, and $\alpha=e_1+e_2$, $\beta=-e_3$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3)$ and $\alpha'=\frac13(e_1+e_2-2e_3)$ is a root of $\mathfrak{h}$. The roots $\gamma_1=e_1+e_4$ and $\gamma_2=e_1-e_4$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence the corresponding coset space does not admit any invariant reversible Finsler metric with positive flag curvature. {\bf Subcase 7.}\quad $\alpha=e_1$, $\beta=e_2$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1-e_2)$ and $\alpha'=\frac12(e_1+e_2)$ is a root of $\mathfrak{h}$. By Lemma \ref{key-lemma-1}, $2\alpha'=e_1+e_2$ is also a root of $\mathfrak{h}$. Hence the corresponding coset space does not admit any invariant reversible Finsler metric with positive flag curvature. {\bf Subcase 8.}\quad $n=2$, and $\alpha=e_1+e_2$, $\beta=-e_1$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(2e_1+e_2)$ and $\alpha'=-\frac15e_1+\frac25e_2$ is a root of $\mathfrak{h}$. The subalgebra $\mathfrak{h}$ is of type $A_1$, and is uniquely determined by the choice of $\mathfrak{h}_{\pm\alpha'}$ in $\hat{\mathfrak{g}}_{\pm\alpha'}=\mathfrak{g}_{\pm(e_1+e_2)}+ \mathfrak{g}_{\pm e_1}$. By Lemma \ref{diagonal-A-1-conjugation-lemma}, $G/H$ is uniquely determined up to equivalence, i.e., it is equivalent to Berger's space $\mathrm{Sp}(2)/\mathrm{SU}(2)$. Hence there exist positively curved normal homogeneous Riemannian metrics on it. {\bf Subcase 9.}\quad $n>2$, and $\alpha=e_1+e_2$, $\beta=-e_1$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(2e_1+e_2)$ and $\alpha'=-\frac15 e_1+\frac25 e_2$ is a root of $\mathfrak{h}$.
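For the reader's convenience, the expression for $\alpha'$ in Subcases 8 and 9 can be checked directly; with respect to the inner product in which the $e_i$ are orthonormal (the normalization used for the root data above), and with $w=2e_1+e_2$ spanning $\mathfrak{t}\cap\mathfrak{m}$, projecting to the orthogonal complement of $w$ gives $$\mathrm{pr}_\mathfrak{h}(\alpha)=(e_1+e_2)-\frac{\langle e_1+e_2,w\rangle}{\langle w,w\rangle}\,w=(e_1+e_2)-\frac35(2e_1+e_2)=-\frac15 e_1+\frac25 e_2,$$ and similarly $\mathrm{pr}_\mathfrak{h}(\beta)=-e_1+\frac25(2e_1+e_2)=-\frac15 e_1+\frac25 e_2$, so that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ as required.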
The roots $\gamma_1=e_1+e_3$ and $\gamma_2=e_1-e_3$ satisfy (1)-(2) but do not satisfy (3) of Lemma \ref{key-lemma-2}, i.e., $\pm\gamma_1$ are the only roots of $\mathfrak{g}$ in $\mathbb{R}\gamma_1+\mathfrak{t}\cap\mathfrak{m}$, and all the roots of $\mathfrak{g}$ in $\pm\gamma_2+\mathbb{R}\gamma_1+\mathfrak{t}\cap\mathfrak{m}$ are $\pm\gamma_2=\pm(e_1-e_3)$ and $\pm\gamma_3=\pm e_2$. Choosing $u$ and $v$ from $\mathfrak{g}_{\pm\gamma_1}$ and $\mathfrak{g}_{\pm\gamma_2}$ as in the proof of Lemma \ref{key-lemma-2}, we can similarly get \begin{equation}\label{3700} \langle[u,\mathfrak{m}]_{\mathfrak{m}},u\rangle_u^F=\langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F=0. \end{equation} Notice that $\gamma_1=e_1+e_3$ and $\gamma_3=e_2$ also satisfy (1) of Lemma \ref{key-lemma-2}, i.e., $\gamma_1\pm\gamma_3$ are not roots of $\mathfrak{g}$. This fact, together with Lemma \ref{lemma-3-8}, implies that \begin{equation}\label{3701} \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F=0. \end{equation} Taking the summation of (\ref{3700}) and (\ref{3701}), we get $U(u,v)=0$. Thus by Theorem \ref{flag-curvature-formula-thm}, we have $K^F(o,u,u\wedge v)=0$. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \subsection{The case $\mathfrak{g}=C_n$ with $n>2$} We only need to consider the following subcases. {\bf Subcase 1.}\quad $\alpha=2e_1$, $\beta=e_1+e_2$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1-e_2)$ and $\alpha'=\beta=e_1+e_2$ is a root of $\mathfrak{h}$. Let $\mathfrak{t}'$ be the subalgebra of $\mathfrak{t}\cap\mathfrak{h}$ spanned by $\{e_3,\ldots,e_n\}$, and $T'$ the corresponding sub-torus in $T\cap H$. Then the Lie algebra of $C_G(T')$ is $\mathfrak{t}'\oplus\mathfrak{g}''$, in which $\mathfrak{g}''$ is of type $B_2$.
If the corresponding coset space can be positively curved, then Lemma \ref{totally-geodesic-lemma} implies that the positively curved reversible homogeneous Finsler space $\mathrm{SO}(5)/\mathrm{SO}(3)$ should appear in Subcase 1 for $B_n$, which is a contradiction. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 2.}\quad $\alpha=2e_1$, $\beta=2e_2$. This subcase has been covered by the previous one. {\bf Subcase 3.}\quad $\alpha=2e_1$, $\beta=-e_2-e_3$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(2e_1+e_2+e_3)$ and $\alpha'=\frac23 e_1-\frac23 e_2-\frac23 e_3$ is a root of $\mathfrak{h}$. Then the roots $\gamma_1=2e_2$ and $\gamma_2=2e_3$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 4.}\quad $\alpha=e_1+e_2$, $\beta=e_1-e_2$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}e_2$ and $\alpha'=e_1$ is a root of $\mathfrak{h}$. By Lemma \ref{key-lemma-1}, $2\alpha'=2e_1$ is also a root of $\mathfrak{h}$. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 5.}\quad $\alpha=e_1+e_2$, $\beta=-e_3-e_4$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3+e_4)$ and $\alpha'=\frac12(e_1+e_2-e_3-e_4)$ is a root of $\mathfrak{h}$. Then the roots $\gamma_1=2e_1$ and $\gamma_2=2e_2$ satisfy (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 6.}\quad $n>2$, and $\alpha=2e_1$, $\beta=-e_1-e_2$. In this case, $\mathfrak{t} \cap\mathfrak{m}=\mathbb{R}(3e_1+e_2)$ and $\alpha'=\frac15 e_1-\frac35 e_2$ is a root of $\mathfrak{h}$.
Then the roots $\gamma_1=e_1+e_3$ and $\gamma_2=2e_2$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \subsection{The case $\mathfrak{g}=D_n$ with $n>3$} We only need to consider the following subcases. {\bf Subcase 1.}\quad $\alpha=e_1+e_2$, $\beta=e_2-e_1$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}e_1$ and $\alpha'=e_2$ is a root of $\mathfrak{h}$. Then we can apply Lemma \ref{key-lemma-1}, Lemma \ref{trick-lemma-0} and a similar argument as in Subcase 1 for $A_n$ (which in fact is a special situation of this subcase), to show that $\mathfrak{h}$ is of type $B_{n-1}$ with all the roots given by $$\pm e_i\pm e_j\mbox{ for }1<i<j\leq n\mbox{ and }\pm e_i\mbox{ for } 1<i\leq n.$$ Using Lemma \ref{diagonal-A-1-conjugation-lemma}, we can show that, up to $\mathrm{Ad}(G)$-actions, $\mathfrak{h}$ is the standard subalgebra such that the homogeneous Finsler space $(G/H,F)$ is equivalent to the Riemannian symmetric sphere $\mathrm{SO}(2n)/\mathrm{SO}(2n-1)$ of positive constant curvature. {\bf Subcase 2.}\quad $n\ge 4$, and $\alpha=e_1+e_2$, $\beta=-e_3-e_4$. First notice that $D_4$ has outer automorphisms, so for $n=4$ the argument in the above subcase can be applied to this case. If $n>4$, then $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3+e_4)$ and $\alpha'=\frac12(e_1+e_2-e_3-e_4)$ is a root of $\mathfrak{h}$ with length $1$. Then the roots $\gamma_1=e_1+e_5$ and $\gamma_2=e_1-e_5$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \section{Case III: the exceptional groups and summary} We continue the case-by-case discussion of the last section, and summarize all the results of these two sections as a theorem at the end, which is one of the main results of this paper.
\subsection{The case $\mathfrak{g}=E_6$} Without loss of generality, we can assume that the roots $\alpha$ and $\beta$ of the orthogonal pair are of the form $\pm e_i\pm e_j$. Up to the Weyl group action induced by $D_5$, there are two subcases: (1) $\alpha=e_1+e_2$ and $\beta=e_2-e_1$; (2) $\alpha=e_1+e_2$ and $\beta=-e_3-e_4$. Using the outer automorphisms of $E_6$ as well as the Weyl group action, the second subcase can be reduced to the first one. So we can assume that $\alpha=e_1+e_2$ and $\beta=e_2-e_1$. Then $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}e_1$, and $\alpha'=e_2$ is a root of $\mathfrak{h}$. Then the roots $$ \gamma_1= -\frac12 e_1+\frac12 e_2+\frac 12 e_3+\frac 12 e_4+\frac12 e_5+\frac{\sqrt{3}}{2}e_6, $$ and $$\gamma_2= -\frac12 e_1-\frac12 e_2-\frac 12 e_3-\frac 12 e_4-\frac12 e_5+\frac{\sqrt{3}}{2}e_6$$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \subsection{The case $\mathfrak{g}=E_7$} Given an orthogonal pair of roots $\alpha$ and $\beta$ of $\mathfrak{g}$, we can use a certain Weyl group action to change $\beta$ to $\sqrt{2}e_7$. Since $\beta$ is orthogonal to $\alpha$, $\alpha$ must then be of the form $\pm e_i\pm e_j$ with $1\leq i<j\leq 6$. Using Weyl group actions induced by $D_6$, we can change $\alpha$ to $e_1+e_2$ while keeping $\beta=\sqrt{2}e_7$ fixed. In particular, any two orthogonal pairs of roots are conjugate under the Weyl group, so essentially there is only one subcase, which we may take to be $\alpha=e_1+e_2$ and $\beta=e_2-e_1$. Then $\mathfrak{t}\cap \mathfrak{m}=\mathbb{R}e_1$ and $\alpha'=e_2$ is a root of $\mathfrak{h}$. Now the pair of roots $$ \gamma_1= -\frac12 e_1+\frac12 e_2+\frac12 e_3+\frac12 e_4 +\frac12 e_5+\frac12 e_6 +\frac{\sqrt{2}}{2}e_7,$$ and $$\gamma_2= \frac12 e_1-\frac12 e_2-\frac12 e_3-\frac12 e_4 +\frac12 e_5+\frac12 e_6 +\frac{\sqrt{2}}{2}e_7$$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}.
Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \subsection{The case $\mathfrak{g}=E_8$} Up to the Weyl group action, we can assume that $\alpha$ and $\beta$ are of the form $\pm e_i\pm e_j$. We only need to consider the following two subcases. {\bf Subcase 1.}\quad $\alpha=e_1+e_2$, $\beta=e_2-e_1$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}e_1$ and $\alpha'=e_2$ is a root of $\mathfrak{h}$. Then the pair of roots $$ \gamma_1 = \frac12 e_1+\frac12 e_2+\frac12 e_3+\frac12 e_4+ \frac12 e_5+\frac12 e_6+\frac12 e_7+\frac12 e_8,$$ and $$\gamma_2=-\frac12 e_1-\frac12 e_2-\frac12 e_3-\frac12 e_4+ \frac12 e_5+\frac12 e_6+\frac12 e_7+\frac12 e_8$$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 2.}\quad $\alpha=e_1+e_2$ and $\beta=-e_3-e_4$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2+e_3+e_4)$ and $\alpha'=\frac12(e_1+e_2-e_3-e_4)$ is a root of $\mathfrak{h}$. Then the pair of roots $\gamma_1=e_1+e_5$ and $\gamma_2=e_2+e_6$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \subsection{The case $\mathfrak{g}=F_4$} Notice that up to the Weyl group action, any short root of $F_4$ can be changed to $e_1$. This implies that any orthogonal pair of short roots of $F_4$ can be changed to the pair $e_1$ and $-e_2$. On the other hand, using the reflections induced by the roots $\frac12(\pm e_1\pm e_2\pm e_3\pm e_4)$, any orthogonal pair of long roots can be changed to the pair $e_1\pm e_2$. Hence we only need to consider the following subcases. {\bf Subcase 1.}\quad $\alpha=e_1+e_2$, $\beta=e_2$.
In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}e_1$ and $\alpha'=e_2$ is a root of $\mathfrak{h}$. Let $\mathfrak{t}'$ be the subalgebra of $\mathfrak{t}\cap\mathfrak{h}$ spanned by $e_3$ and $e_4$, and $T'$ be the closed sub-torus in $T\cap H$ with $\mathrm{Lie}(T')=\mathfrak{t}'$. Then applying Lemma \ref{totally-geodesic-lemma} to $T'$, we conclude that there should be a positively curved reversible homogeneous Finsler space $\mathrm{SO}(5)/\mathrm{SO}(3)$ in Subcase 1 for $B_n$, which is a contradiction. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 2.}\quad $\alpha=e_1+e_2$, $\beta=e_2-e_1$. This subcase has been covered by the previous one. {\bf Subcase 3.}\quad $\alpha=e_1+e_2$, $\beta=-e_3$. In this case, $\mathfrak{t}\cap \mathfrak{m}=\mathbb{R}(e_1+e_2+e_3)$ and $\alpha'=\frac13 e_1+\frac13 e_2- \frac23 e_3$ is a root of $\mathfrak{h}$ with length $\sqrt{\frac23}$, with $\mathfrak{h}_{\pm\alpha'}\subset \hat{\mathfrak{g}}_{\pm\alpha'}=\mathfrak{g}_{\pm(e_1+e_2)}+\mathfrak{g}_{\pm e_3}$. By Lemma \ref{key-lemma-1}, $\pm e_4$ are roots of $\mathfrak{h}$, and $\mathfrak{h}_{\pm e_4}=\mathfrak{g}_{\pm e_4}=\hat{\mathfrak{g}}_{\pm e_4}$. Notice that $\mathrm{pr}_\mathfrak{h}(e_4-e_3)$ is not orthogonal to $\alpha'$ and has length $\sqrt{\frac53}$. So $\mathrm{pr}_\mathfrak{h}(e_4-e_3)$ is not a root of $\mathfrak{h}$. Thus $\mathfrak{g}_{\pm(e_4-e_3)}\subset\mathfrak{m}$. Therefore we have $$\mathfrak{g}_{\pm e_3}=[\mathfrak{g}_{\pm e_4},\mathfrak{g}_{\pm (e_4-e_3)}]\subset\mathfrak{m}.$$ Hence $\mathfrak{h}_{\pm\alpha'}=\mathfrak{g}_{\pm(e_1+e_2)}$. Then we have $$\alpha'=\frac13 e_1+\frac13 e_2-\frac23 e_3 \in[\mathfrak{h}_{\pm\alpha'},\mathfrak{h}_{\pm\alpha'}] =[\mathfrak{g}_{\pm(e_1+e_2)},\mathfrak{g}_{\pm(e_1+e_2)}]=\mathbb{R}(e_1+e_2),$$ which is a contradiction.
Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 4.}\quad $\alpha=e_1$, $\beta=-e_2$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+e_2)$ and $\alpha'=\frac12(e_1-e_2)$ is a root of $\mathfrak{h}$. By Lemma \ref{key-lemma-1}, $e_1-e_2=2\alpha'$ is also a root of $\mathfrak{h}$, which is a contradiction. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. {\bf Subcase 5.}\quad $\alpha=e_1+e_2$, $\beta=-e_2$. In this case, $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(e_1+2e_2)$, and $\alpha'=\frac25 e_1-\frac15 e_2$ is a root of $\mathfrak{h}$ of length $\sqrt{\frac15}$, with $\mathfrak{h}_{\pm\alpha'}\subset \hat{\mathfrak{g}}_{\pm\alpha'}=\mathfrak{g}_{\pm(e_1+e_2)}+\mathfrak{g}_{\pm e_2}$. By Lemma \ref{key-lemma-1}, $e_3$ is a root of $\mathfrak{h}$ and $\mathfrak{h}_{\pm e_3}=\mathfrak{g}_{\pm e_3}=\hat{\mathfrak{g}}_{\pm e_3}$. The vector $\mathrm{pr}_\mathfrak{h}(e_2+e_3)$ is not a root of $\mathfrak{h}$, since it is not orthogonal to $\alpha'$ and its length is $\sqrt{\frac65}$. So $\mathfrak{g}_{\pm(e_2+e_3)}\subset\mathfrak{m}$. Then we have $$\mathfrak{g}_{\pm e_2}=[\mathfrak{g}_{\pm(e_2+e_3)},\mathfrak{g}_{\pm e_3}]\subset\mathfrak{m}.$$ This implies that $\mathfrak{h}_{\pm\alpha'}=\mathfrak{g}_{\pm(e_1+e_2)}$. Then we can deduce a contradiction by a similar argument as in Subcase 3 of this section. There is another way to deduce the contradiction. Let $\mathfrak{t}'=\mathbb{R}e_4$ and $T'$ be the corresponding closed one-parameter subgroup in $H$. Using Lemma \ref{totally-geodesic-lemma}, we get a positively curved reversible homogeneous Finsler space in Subcase 9 for $B_n$, which is impossible. Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature.
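For the reader's convenience, the root data in Subcase 5 can be verified directly; with respect to the inner product in which the $e_i$ are orthonormal (the normalization used for the root data above), and with $w=e_1+2e_2$ spanning $\mathfrak{t}\cap\mathfrak{m}$, $$\alpha'=\mathrm{pr}_\mathfrak{h}(e_1+e_2)=(e_1+e_2)-\frac35(e_1+2e_2)=\frac25 e_1-\frac15 e_2,\qquad |\alpha'|^2=\frac{4}{25}+\frac{1}{25}=\frac15,$$ while $$\mathrm{pr}_\mathfrak{h}(e_2+e_3)=(e_2+e_3)-\frac25(e_1+2e_2)=-\frac25 e_1+\frac15 e_2+e_3,\qquad \bigl|\mathrm{pr}_\mathfrak{h}(e_2+e_3)\bigr|^2=\frac{4}{25}+\frac{1}{25}+1=\frac65,$$ confirming the lengths $\sqrt{\frac15}$ and $\sqrt{\frac65}$ used above.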
\subsection{The case $\mathfrak{g}=G_2$} If the angle between $\alpha$ and $\beta$ is $\frac \pi 6$ or $\frac \pi 2$, we can find a pair of short roots $\alpha_1$ and $\beta_1$ of $\mathfrak{g}$, such that the angle between $\alpha_1$ and $\beta_1$ is $\frac \pi 3$, and $\alpha'=\mathrm{pr}_\mathfrak{h}({\alpha}_1)= \mathrm{pr}_\mathfrak{h}({\beta}_1)$ is a root of $\mathfrak{h}$. This contradicts Lemma \ref{lemma-999}. Therefore we only need to consider the case that $\alpha$ is a long root, $\beta$ is a short root, the angle between them is $\frac{5\pi}6$, $\alpha'=\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_{\mathfrak{h}} (\beta)$ is a root of $\mathfrak{h}$, and $\mathfrak{h}$ is of type $A_1$. Let $\gamma_1=\alpha+3\beta$ and $\gamma_2=\alpha+\beta$. Select any nonzero vectors $u\in\mathfrak{g}_{\pm\gamma_1}$ and $v\in\mathfrak{g}_{\pm\gamma_2}$. Then it is not hard to see that the long root $\gamma_1$ and the short root $\gamma_2$ are orthogonal to each other, and none of $\gamma_1\pm\gamma_2$ is a root of $\mathfrak{g}$. So $u$ and $v$ are linearly independent and commutative. Denote by $R(\theta)$ the anticlockwise rotation by the angle $\theta$. Then there exists $g\in T_H$, and suitable orthonormal bases for each of the subspaces of $\mathfrak{m}$ below, such that \begin{eqnarray*} \mathrm{Ad}(g)|_{\mathfrak{t}\cap\mathfrak{m}}&=&\mathrm{Id},\\ \mathrm{Ad}(g)|_{\hat{\mathfrak{g}}_{\pm\alpha'}\cap\mathfrak{m}} &=& R(\pi/4),\\ \mathrm{Ad}(g)|_{\mathfrak{g}_{\pm(\alpha+\beta)}= \hat{\mathfrak{g}}_{\pm 2\alpha'}}&=& R(\pi/2),\\ \mathrm{Ad}(g)|_{\mathfrak{g}_{\pm(\alpha+2\beta)}= \hat{\mathfrak{g}}_{\pm 3\alpha'}}&=& R(3\pi/4),\\ \mathrm{Ad}(g)|_{\mathfrak{g}_{\pm(\alpha+3\beta)}= \hat{\mathfrak{g}}_{\pm 4\alpha'}}&=& R(\pi)=-\mathrm{Id},\\ \mathrm{Ad}(g)|_{\mathfrak{g}_{\pm(2\alpha+3\beta)}= \hat{\mathfrak{g}}_{\pm 5\alpha'}}&=& R(5\pi/4). \end{eqnarray*} Denote the above subspaces as $\mathfrak{m}_k$, $k=0,1,\ldots,5$, i.e.
the action of $\mathrm{Ad}(g)$ on $\mathfrak{m}_k$ is equal to $R(k\pi/4)$. In particular, $\mathfrak{m}_0=\mathfrak{t}\cap\mathfrak{m}$, $\mathfrak{m}_2=\mathfrak{g}_{\pm\gamma_2}$ and $\mathfrak{m}_4=\mathfrak{g}_{\pm\gamma_1}$. By Lemma \ref{lemma-3-6}, we have \begin{equation}\label{3801} \langle\mathfrak{m}_4,\mathfrak{m}_i\rangle_u^F=0, \quad\forall i\neq 4. \end{equation} For any $v'\in\mathfrak{m}_2$ and $w'\in\mathfrak{m}_i$ with $i\neq 2$, we have \begin{eqnarray*} \langle v',w'\rangle_u^F&=& \langle\mathrm{Ad}(g)v',\mathrm{Ad}(g)w'\rangle_{\mathrm{Ad}(g)u}^F= \langle R(\pi/2)v',R(i\pi/4)w'\rangle_{-u}^F\\ &=& \langle R(\pi/2)v',R(i\pi/4)w'\rangle_{u}^F =\langle R(\pi/2)^2 v', R(i\pi/4)^2 w'\rangle_u^F\\ &=&\langle -v',R(i\pi/2)w'\rangle_u^F =\langle v',R((i-2)\pi/2)w'\rangle_u^F. \end{eqnarray*} Using a similar argument as in the proof of Lemma \ref{lemma-3-6}, we get \begin{equation}\label{3800} \langle \mathfrak{m}_2,\mathfrak{m}_i\rangle_u^F=0,\quad\forall i\neq 2. \end{equation} By the $\mathrm{Ad}(T_H)$-invariance, the Minkowski norm $F|_{\mathfrak{m}_4}$ coincides with the restriction of the bi-invariant inner product up to a scalar change. Thus \begin{equation}\label{3799} \langle[u,\mathfrak{t}],u\rangle_u^F=0. \end{equation} Now a direct calculation shows that \begin{equation*} [u,\mathfrak{m}]_{\mathfrak{m}}\subset \mathfrak{m}_0+ \mathfrak{m}_1+\mathfrak{m}_3+[u,\mathfrak{t}]+\mathfrak{m}_5, \end{equation*} and \begin{equation*} [v,\mathfrak{m}]_{\mathfrak{m}}\subset \mathfrak{m}_0+\mathfrak{m}_1+\mathfrak{m}_2+ \mathfrak{m}_3+\mathfrak{m}_5. \end{equation*} So by (\ref{3801}), (\ref{3800}) and (\ref{3799}), we have \begin{equation} \langle[u,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F= \langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F=0. \end{equation} Hence $U(u,v)=0$. Then by Theorem \ref{flag-curvature-formula-thm}, we get $K^F(o,u,u\wedge v)=0$, which is a contradiction. 
Hence there does not exist any invariant reversible Finsler metric on the corresponding coset space with positive flag curvature. \subsection{Summary} We now summarize all the results in Section 4 and Section 5 as the following theorem, which gives a complete classification of odd dimensional positively curved reversible homogeneous Finsler spaces in Case III. \begin{theorem} \label{mainthm-part-1} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space of Case III, i.e., with respect to a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}$, and a fundamental Cartan subalgebra $\mathfrak{t}$, there are roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from the same simple factor, such that $\alpha\neq\pm\beta$ and $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. Then $(G/H,F)$ is equivalent to one of the following homogeneous Finsler spaces: \begin{description} \item{\rm (1)}\quad The odd dimensional Riemannian symmetric spheres $S^{2n-1}=\mathrm{SO}(2n)/\mathrm{SO}(2n-1)$ with $n>2$; \item{\rm (2)}\quad The homogeneous spheres $S^7=\mathrm{Spin}(7)/\mathrm{G}_2$ and $S^{15}=\mathrm{Spin}(9)/\mathrm{Spin}(7)$; \item{\rm (3)}\quad Berger's spaces $\mathrm{SU}(5)/\mathrm{Sp}(2)\mathrm{U}(1)$ and $\mathrm{Sp}(2)/\mathrm{SU}(2)$. \end{description} \end{theorem} \section{The Cases II and I} In this section we will consider odd dimensional positively curved reversible homogeneous Finsler spaces in Cases II and I.
\subsection{The Case II} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space in Case II, i.e., with respect to a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$ and a fundamental Cartan subalgebra $\mathfrak{t}$, there exist two roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from different simple factors such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_{\mathfrak{h}}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. In this situation $\alpha'$ is a linear combination of $\alpha$ and $\beta$ with two nonzero coefficients. Thus $\mathfrak{h}_{\pm\alpha'}\subset \hat{\mathfrak{g}}_{\pm\alpha'}= \mathfrak{g}_{\pm\alpha}+\mathfrak{g}_{\pm\beta}$ cannot be a root plane of $\mathfrak{g}$, or equivalently, $\mathfrak{g}_{\pm\alpha}$ and $\mathfrak{g}_{\pm\beta}$ are not contained in $\mathfrak{h}$ or $\mathfrak{m}$. First of all, we can find a direct sum decomposition $$\mathfrak{g}=\mathfrak{g}_1\oplus\cdots\oplus\mathfrak{g}_n\oplus\mathbb{R}^m,$$ such that each $\mathfrak{g}_i$ is a simple ideal of $\mathfrak{g}$, and $\alpha$ and $\beta$ are roots of $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively. Since $\mathfrak{t}\cap\mathfrak{m}=\mathbb{R}(\alpha-\beta) \subset\mathfrak{g}_1\oplus\mathfrak{g}_2$, the abelian factor of $\mathfrak{g}$ and $\mathfrak{t}\cap\mathfrak{g}_i$ for each $i>2$ are contained in $\mathfrak{t}\cap\mathfrak{h}$. It is also obvious that for each root $\gamma$ of $\mathfrak{g}$ with $\gamma\neq\pm\alpha$, $\gamma\neq\pm\beta$ and $\mathrm{pr}_{\mathfrak{h}}(\gamma)=\gamma'$, $\mathfrak{g}_{\pm\gamma}=\hat{\mathfrak{g}}_{\pm\gamma'}$ is contained either in $\mathfrak{h}$ or in $\mathfrak{m}$. Now we prove that for any $i>2$, $\mathfrak{g}_i$ is contained in $\mathfrak{h}$.
Since the simple factor $\mathfrak{g}_i$ can be algebraically generated by its root planes, we only need to prove that each root plane of $\mathfrak{g}_i$ is contained in $\mathfrak{h}$. Let $\gamma$ be a root of $\mathfrak{g}_i$. Since $i>2$, $\gamma$ is contained in $\mathfrak{t}\cap\mathfrak{h}$ and it is the only root of $\mathfrak{g}$ in $\gamma+(\mathfrak{t}\cap\mathfrak{m})$. By Lemma \ref{key-lemma-1}, $\gamma$ is a root of $\mathfrak{h}$ and $\mathfrak{g}_{\pm\gamma}=\hat{\mathfrak{g}}_{\pm\gamma}= \mathfrak{h}_{\pm\gamma}\subset\mathfrak{h}$. Consider the roots of $\mathfrak{g}_1$ and $\mathfrak{g}_2$. Up to equivalence, we can assume that $\mathfrak{g}=\mathfrak{g}_1\oplus\mathfrak{g}_2$. Let $\gamma$ be a root of $\mathfrak{g}_1$ such that $\gamma\neq\pm\alpha$. Since it is the only root of $\mathfrak{g}_1$ contained in $\gamma+(\mathfrak{t}\cap\mathfrak{m})$, by Lemma \ref{key-lemma-1}, if $\gamma\in\mathfrak{t}\cap\mathfrak{h}$, then $\mathfrak{g}_{\pm\gamma} \subset\mathfrak{h}$. On the other hand, if $\gamma$ is not bi-invariant orthogonal to $\alpha$, then by Lemma \ref{trick-lemma-0}, $\mathfrak{g}_{\pm\gamma}\subset\mathfrak{m}$. A similar assertion is valid for any root of $\mathfrak{g}_2$. Now we claim that there do not exist two roots $\gamma_1$ and $\gamma_2$ of $\mathfrak{g}_1$ and $\mathfrak{g}_2$, respectively, such that their root planes are contained in $\mathfrak{m}$. In fact, otherwise we would have $\gamma_1\neq \pm\alpha$ and $\gamma_2\neq\pm\beta$. Then $\gamma_1$ and $\gamma_2$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}, which is impossible. Without loss of generality, we can assume that all roots of $\mathfrak{g}_1$ other than $\pm\alpha$ are roots of $\mathfrak{h}$. Thus they are bi-invariant orthogonal to $\pm\alpha$. Since $\mathfrak{g}_1$ is simple, $\mathfrak{g}_1$ is of type $A_1$ with the only roots $\pm\alpha$. Now we consider $\mathfrak{g}_2$.
We first prove the following lemma. \begin{lemma} \label{lemma-6-1} Keep the above assumptions and notation. Then there does not exist a pair of roots $\gamma_1$ and $\gamma_2$ of $\mathfrak{g}_2$ satisfying the following conditions: \begin{description} \item{\rm (1)}\quad $\gamma_1\neq\pm\gamma_2$, $\gamma_1\neq\pm\beta$ and $\gamma_2\neq\pm\beta$; \item{\rm (2)}\quad Neither $\gamma_1$ nor $\gamma_2$ is a root of $\mathfrak{h}$; \item{\rm (3)}\quad None of $\gamma_1\pm\gamma_2$ is a root of $\mathfrak{g}$. \end{description} \end{lemma} \begin{proof} Assume conversely that there are two roots $\gamma_1$ and $\gamma_2$ of $\mathfrak{g}_2$ satisfying (1)-(3) of the lemma. Then it is easy to see that $\gamma_1$ is the only root in $\gamma_1+\mathbb{R}(\alpha-\beta)$. On the other hand, if there exist some real numbers $t_1$ and $t_2$, such that $\gamma_3=\gamma_2+t_1\gamma_1+t_2(\alpha-\beta)$ is a root of $\mathfrak{g}$ other than $\gamma_2$, then we have $t_2\in\{-1,0,1\}$. If $t_2=0$, then $\gamma_3=\gamma_2+t_1\gamma_1$, with $t_1\neq 0$, is a root of $\mathfrak{g}_2$. This is impossible, since $\gamma_1\pm\gamma_2$ are not roots of $\mathfrak{g}_2$. If $t_2=\pm 1$, then $\pm\beta=t_1\gamma_1+\gamma_2$ is a root of $\mathfrak{g}_2$ other than $\gamma_2$. Similarly we can get a contradiction. This implies that the pair of roots $\gamma_1$ and $\gamma_2$ satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}, which is a contradiction. \end{proof} Let $\mathfrak{k}$ be the subalgebra of $\mathfrak{g}_2$ generated by $\mathfrak{g}_{\pm\beta}$ and $\mathfrak{h}\cap\mathfrak{g}_2$. It has the same rank as $\mathfrak{g}_2$ and can be decomposed as a direct sum $\mathfrak{k}=A_1\oplus(\mathfrak{h}\cap\mathfrak{g}_2)$, in which the $A_1$-factor is generated by $\mathfrak{g}_{\pm\beta}$. By Lemma \ref{lemma-6-1}, the pair $(\mathfrak{g}_2,\mathfrak{k})$ satisfies the condition (A) in \cite{Wallach1972}.
Then by Proposition 6.1 of \cite{Wallach1972}, the pair $(\mathfrak{g}_2,\mathfrak{k})$ must be one of the following: $$(A_1,A_1),\ (A_2,A_1\oplus\mathbb{R})\mbox{ or } (C_n,A_1\oplus C_{n-1}).$$ Correspondingly, the pair $(\mathfrak{g},\mathfrak{h})$ must be one of the following: $$(A_1\oplus A_1,A_1), (A_1\oplus A_2,A_1\oplus\mathbb{R}) \mbox{ or }(A_1\oplus C_n, A_1\oplus C_{n-1}),$$ in which the $A_1$-factor in $\mathfrak{h}$ is the diagonal subalgebra. Thus the corresponding homogeneous Finsler space is equivalent to the symmetric homogeneous sphere $S^3=\mathrm{SO}(4)/\mathrm{SO}(3)$, or Wilking's space $\mathrm{SU}(3)\times\mathrm{SO}(3)/\mathrm{U}(2)$ (which coincides with the Aloff-Wallach space $S_{1,1}$, see \cite{AW75} and \cite{Wi1999}), or the homogeneous sphere $S^{4n-1}=\mathrm{Sp}(n)\mathrm{Sp}(1)/\mathrm{Sp}(n-1)\mathrm{Sp}(1)$. To summarize, we have the following theorem, which gives a complete classification of odd dimensional positively curved reversible homogeneous Finsler spaces in Case II. \begin{theorem}\label{mainthm-part-2} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space of Case II, i.e., with respect to a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$ and a fundamental Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$, there are roots $\alpha$ and $\beta$ of $\mathfrak{g}$ from different simple factors such that $\mathrm{pr}_\mathfrak{h}(\alpha)=\mathrm{pr}_\mathfrak{h}(\beta)=\alpha'$ is a root of $\mathfrak{h}$. Then $(G/H,F)$ is equivalent to one of the following homogeneous Finsler spaces: \begin{description} \item{\rm (1)}\quad The symmetric homogeneous sphere $S^{3}=\mathrm{SO}(4)/\mathrm{SO}(3)$; \item{\rm (2)}\quad The homogeneous spheres $\mathrm{Sp}(n)\mathrm{Sp}(1)/\mathrm{Sp}(n-1)\mathrm{Sp}(1)$; \item{\rm (3)}\quad Wilking's space $\mathrm{SU}(3)\times\mathrm{SO}(3)/\mathrm{U}(2)$.
\end{description} \end{theorem} \subsection{The Case I} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space in Case I, i.e., with respect to a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ for the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$ and a fundamental Cartan subalgebra $\mathfrak{t}$, each root plane of $\mathfrak{h}$ is also a root plane of $\mathfrak{g}$. Keep all the relevant notation as before. The root system of $\mathfrak{h}$ is then a subset of the root system of $\mathfrak{g}$, that is, $\Delta_\mathfrak{h}\subset\Delta_\mathfrak{g}\cap\mathfrak{h}$. For each root $\alpha$ of $\mathfrak{g}$, we have either $\mathfrak{g}_{\pm\alpha}=\mathfrak{h}_{\pm\alpha}\subset\mathfrak{h}$ or $\mathfrak{g}_{\pm\alpha}\subset\mathfrak{m}$. Suppose $\mathfrak{g}$ has the following direct sum decomposition: \begin{equation}\label{3950} \mathfrak{g} =\mathfrak{g}_0\oplus\mathfrak{g}_1\oplus\cdots\oplus\mathfrak{g}_n, \end{equation} where $\mathfrak{g}_0$ is abelian and each $\mathfrak{g}_i$, $1\leq i\leq n$, is a simple ideal. Given a nonzero vector $w$ in $\mathfrak{t}\cap\mathfrak{m}$, let $w=w_0+\cdots+w_n$ be the decomposition of $w$ with respect to (\ref{3950}). Then it follows from Lemma \ref{key-lemma-1} that $\mathfrak{g}_i$ is contained in $\mathfrak{h}$ if and only if $w_i=0$, for any $w\in \mathfrak{t}\cap\mathfrak{m}$. Now we have the following cases: {\bf Case 1.}\quad There exists $w\in \mathfrak{t}\cap\mathfrak{m}$ such that $w_0\neq 0$. We first assert that if $\alpha$ and $\beta$ are two roots of $\mathfrak{g}$, such that neither of them is a root of $\mathfrak{h}$, then at least one of the roots $\alpha\pm\beta$ is a root of $\mathfrak{g}$. In fact, otherwise the pair of roots $\alpha, \beta$ would satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}, which is a contradiction. Now let $\mathfrak{k}$ be the subalgebra generated by $\mathfrak{h}$ and $\mathfrak{t}$.
Then we have $\mathfrak{k}=\mathfrak{h}\oplus(\mathfrak{t}\cap\mathfrak{m})$. Let $K$ be a closed subgroup of $G$ with $\mathrm{Lie}(K)=\mathfrak{k}$. Then we have $\mathrm{rk}K=\mathrm{rk}G$. This implies that the pair $(\mathfrak{g},\mathfrak{k})$ satisfies the Condition (A) in \cite{Wallach1972}. Thus we can suppose that in the decomposition (\ref{3950}) of $\mathfrak{g}$, the following equation holds: $$\mathfrak{k}=\mathfrak{g}_0\oplus\mathfrak{k}_1\oplus \mathfrak{g}_2\oplus\cdots\oplus\mathfrak{g}_n,$$ where $$(\mathfrak{g}_1,\mathfrak{k}_1)=(A_n,A_{n-1}\oplus\mathbb{R}), (C_n,C_{n-1}\oplus\mathbb{R}),\mbox{ or } (A_2,\mathbb{R}\oplus\mathbb{R}).$$ Notice that for the other spaces in Wallach's list, the abelian factor required in this situation does not appear. If $\mathfrak{g}_1=A_2$, then by Lemma \ref{key-lemma-1}, no root of $\mathfrak{g}_1$ can be contained in $\mathfrak{t}\cap\mathfrak{h}$. Thus $(G/H,F)$ is equivalent to one of the following: \begin{description} \item{\rm (1)}\quad The homogeneous spheres $$ S^{2n-1}=\mathrm{U}(n)/\mathrm{U}(n-1) \mbox{ or } S^{4n-1}=\mathrm{Sp}(n)\mathrm{U}(1)/\mathrm{Sp}(n-1)\mathrm{U}(1)\mbox{ for }n>1;$$ \item{\rm (2)}\quad The $\mathrm{U}(3)$-homogeneous presentations of Aloff-Wallach's spaces $S_{k,l}=\mathrm{U}(3)/T^2$, in which $T^2$ is a two dimensional torus of diagonal matrices which does not contain the center of $\mathrm{U}(3)$ and $$T^2\cap\mathrm{SU}(3)=U_{k,l}= \{\mathrm{diag}(z^k,z^l,z^{-k-l})|z\in\mathbb{C},|z|=1\},$$ where $k$ and $l$ are integers satisfying $kl(k+l)\neq 0$. \end{description} Notice that the $\mathrm{SU}(3)$-homogeneous space $S_{k,l}$ has infinitely many different presentations as $\mathrm{U}(3)$-homogeneous spaces; see \cite{AW75}. \medskip {\bf Case 2.}\quad There exists $w\in \mathfrak{t}\cap\mathfrak{m}$ with decomposition $w=w_1+w_2$, where both $w_1$ and $w_2$ are nonzero. Up to equivalence, we can assume that $\mathfrak{g}=\mathfrak{g}_1\oplus\mathfrak{g}_2$.
We first assert that there does not exist a root $\alpha$ of $\mathfrak{g}_1$, and a root $\beta$ of $\mathfrak{g}_2$ such that $\alpha \notin\mathbb{R}w_1$, $\beta \notin\mathbb{R}w_2$, and none of them is a root of $\mathfrak{h}$. In fact, otherwise the pair of roots $\alpha$ and $\beta$ will satisfy (1)-(4) of Lemma \ref{key-lemma-2}, which is a contradiction. Without loss of generality, we can assume that all the roots of $\mathfrak{g}_1$ outside $\mathbb{R}w_1$ are roots of $\mathfrak{h}$, i.e., they are contained in $\mathfrak{t}\cap\mathfrak{h}$. By the simplicity of $\mathfrak{g}_1$, we must have $\mathfrak{g}_1=A_1$, and the only roots in $\mathfrak{t}\cap\mathfrak{g}_1=\mathbb{R}w_1$ are $\pm\alpha$. There are two subcases: {\bf Subcase 1.}\quad There exists a root $\beta$ of $\mathfrak{g}_2$ contained in $\mathbb{R}w_2$. Obviously neither $\alpha$ nor $\beta$ is a root of $\mathfrak{h}$, i.e., their root planes are contained in $\mathfrak{m}$. Let $\mathfrak{t}'$ be the bi-invariant orthogonal complement of $w_2$ in $\mathfrak{g}_2$ and $T'$ be the corresponding torus in $H$. Using Lemma \ref{totally-geodesic-lemma} for $T'$, we get a positively curved reversible homogeneous Finsler space $\mathrm{SU}(2)\times\mathrm{SU}(2)/\mathrm{U}(1)$ in Case I, in which $\mathrm{U}(1)$ is not contained in any of the simple factors. To prove that the reversible homogeneous space $G/H$ cannot be positively curved in this subcase, we only need to consider the situation that $\mathfrak{g}_2=A_1$, and the only roots are $\pm\beta$. Fix a bi-invariant inner product on $\mathfrak{g}=\mathfrak{g}_1\oplus\mathfrak{g}_2= A_1\oplus A_1$ such that its restriction on each factor has the same scale. By suitably re-ordering the two simple factors, we can assume that $\alpha+c\beta\in\mathfrak{t}\cap\mathfrak{m}$ with $|c|\geq 1$. Denote $\alpha'=\mathrm{pr}_\mathfrak{h}(\alpha)$ and $\beta'=\mathrm{pr}_\mathfrak{h}(\beta)$.
Then the above assumption implies that $\beta'$ is not an even multiple of $\alpha'$. Let $\{u,u'\}$ be a bi-invariant orthonormal basis of $\mathfrak{g}_{\pm\alpha}$, and $v$ a nonzero vector in $\mathfrak{g}_{\pm\beta}$ such that $\langle u',v\rangle_u^F=0$. Obviously $u$ and $v$ are linearly independent and commutative. By the $\mathrm{Ad}(H)$-invariance, the Minkowski norm $F|_{\mathfrak{g}_{\pm\alpha}}$ coincides with the bi-invariant inner product up to scalar changes. Thus \begin{equation*} \langle u',u\rangle_u^F=\langle[\mathfrak{t},u],u\rangle_u^F=0. \end{equation*} By the assumption and Lemma \ref{lemma-3-6}, we have \begin{equation}\label{3970} \langle \mathfrak{t}\cap\mathfrak{m},u\rangle_u^F= \langle \mathfrak{t}\cap\mathfrak{m},v\rangle_u^F=0. \end{equation} Then a direct calculation shows that \begin{equation} [u,\mathfrak{m}]_\mathfrak{m}=\mathfrak{t}\cap\mathfrak{m}+[\mathfrak{t},u]. \end{equation} So by (\ref{3970}), we get \begin{equation}\label{3960} \langle[u,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle\mathfrak{t}\cap\mathfrak{m},u\rangle_u^F+ \langle\mathbb{R}u',u\rangle_u^F=0, \end{equation} and \begin{equation}\label{3961} \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F= \langle\mathfrak{t}\cap\mathfrak{m},v\rangle_u^F+ \langle\mathbb{R}u',v\rangle_u^F=0. \end{equation} Now a direct calculation shows that \begin{equation}\label{3850} [v,\mathfrak{m}]_\mathfrak{m}=\mathfrak{t}\cap\mathfrak{m}+ [\mathfrak{t}\cap\mathfrak{m},v]=\mathfrak{t}\cap\mathfrak{m}+ [\mathfrak{t}\cap\mathfrak{h},v]. \end{equation} For any $w'\in\mathfrak{t}\cap\mathfrak{h}$, we have, by Theorem 3.1 of \cite{DH2004}, $$\langle[w',v],u\rangle_u^F=-\langle v,[w',u]\rangle_u^F-2C^F_u([w',u],v,u)=0.$$ So by Lemma \ref{lemma-3-6} and (\ref{3850}), we have \begin{equation}\label{3962} \langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle[v,\mathfrak{t}\cap\mathfrak{h}],u\rangle_u^F=0. 
\end{equation} Taking the summation of (\ref{3960}), (\ref{3961}) and (\ref{3962}), we get $U(u,v)=0$. Hence by Theorem \ref{flag-curvature-formula-thm}, $K^F(o,u,u\wedge v)=0$, which is a contradiction. Therefore, the corresponding coset space does not admit any invariant Finsler metric with positive flag curvature. \medskip {\bf Subcase 2.}\quad There does not exist any root of $\mathfrak{g}_2$ in $\mathbb{R}w_2$. Then by the simplicity of $\mathfrak{g}_2$, there is a root $\beta$ of $\mathfrak{g}_2$ which is not bi-invariant orthogonal to $w_2$. Let $u$ and $v$ be any nonzero vectors in $\mathfrak{g}_{\pm\alpha}$ and $\mathfrak{g}_{\pm\beta}$ respectively. Then they are linearly independent and commutative. The subalgebra $\mathfrak{t}'=\mathfrak{t}\cap\mathfrak{h}\cap\mathfrak{g}_2$ coincides with $w_2^{\perp}\cap \mathfrak{t}\cap\mathfrak{g}_{2}$, the bi-invariant orthogonal complement of $w_2$ in $\mathfrak{t}\cap\mathfrak{g}_2$. Denote by $T'$ the corresponding torus in $H$. Since the inner product $\langle\cdot,\cdot\rangle_u^F$ is $\mathrm{Ad}(T')$-invariant, by Lemma \ref{lemma-3-8}, $\mathfrak{m}$ can be $g_u^F$-orthogonally decomposed as the sum of $\mathfrak{m}'=\hat{\hat{\mathfrak{m}}}_0=\mathfrak{t}\cap\mathfrak{m}+\mathfrak{g}_{\pm\alpha}$ for the trivial irreducible $T'$-representation and $\mathfrak{m}''\subset\mathfrak{g}_2$ for nontrivial irreducible $T'$-representations. Notice that $\mathfrak{m}''$ is the sum of some root planes in $\mathfrak{g}_2$, and $u$ and $v$ are contained in $\mathfrak{m}'$ and $\mathfrak{m}''$, respectively. Now a direct calculation shows that \begin{equation}\label{3900} [u,\mathfrak{m}]_\mathfrak{m}=\mathfrak{t}\cap\mathfrak{m}+ [\mathfrak{t},u]\subset\mathfrak{m}'\mbox{ and }[v,\mathfrak{m}]_{\mathfrak{m}}\subset \mathfrak{t}\cap\mathfrak{m}+\mathfrak{m}''.
\end{equation} Moreover, the $\mathrm{Ad}(T_H)$ invariance of $F|_{\mathfrak{g}_{\pm\alpha}}$ implies that $F|_{\mathfrak{g}_{\pm\alpha}}$ coincides with the restriction of a bi-invariant inner product up to scalar changes. Thus we have \begin{equation}\label{3901} \langle[\mathfrak{t},u],u\rangle_u^F= \langle[\mathfrak{t}\cap\mathfrak{h},u],u\rangle_u^F=0. \end{equation} By Lemma \ref{lemma-3-6}, \begin{equation}\label{3902} \langle \mathfrak{t}\cap\mathfrak{m},u\rangle_u^F=0. \end{equation} Taking the summation of (\ref{3900}), (\ref{3901}) and (\ref{3902}), we get $$\langle[u,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F= \langle[u,\mathfrak{m}]_\mathfrak{m},v\rangle_u^F= \langle[v,\mathfrak{m}]_\mathfrak{m},u\rangle_u^F=0.$$ Therefore $U(u,v)=0$. Now by Theorem \ref{flag-curvature-formula-thm}, $K^F(o,u,u\wedge v)=0$. This is a contradiction. Hence the corresponding coset space does not admit any invariant Finsler metric with positive flag curvature. {\bf Case 3.}\quad There exists $w\in \mathfrak{t}\cap\mathfrak{m}$ such that $w=w_1+\cdots+w_m$, where $m>2$ and $w_i$, $1\leq i\leq m$ are all nonzero. If there is a root $\alpha\notin\mathbb{R}w_1$ of $\mathfrak{g}_1$, and a root $\beta\notin\mathbb{R}w_2$ of $\mathfrak{g}_2$ such that they are not roots of $\mathfrak{h}$, then they satisfy the conditions (1)-(4) of Lemma \ref{key-lemma-2}, which is a contradiction. Similarly to the previous case, we can assume that $\mathfrak{g}_1=A_1$. Let $\pm\alpha$ be the only roots of $\mathfrak{g}_1$, then we have $\mathfrak{g}_{\pm\alpha}\subset\mathfrak{m}$. We can also find a root $\beta$ of $\mathfrak{g}_2$ which is not bi-invariant orthogonal to $w_2$, then $\mathfrak{g}_{\pm\beta}\subset\mathfrak{m}$. Let $u$ and $v$ be any nonzero vectors in $\mathfrak{g}_{\pm\alpha}$ and $\mathfrak{g}_{\pm\beta}$ respectively. 
Notice that there does not exist any root contained in $\mathbb{R}(w_2+\cdots+w_m)$; thus a similar argument as for Subcase 2 of the previous case can be applied to prove $K^F(o,u,u\wedge v)=0$, which is a contradiction. Hence the corresponding coset space does not admit any invariant reversible Finsler metric with positive flag curvature. The above results can be summarized as the following theorem, which gives a ``nearly'' complete classification for the coset spaces in Case I which admit invariant Finsler metrics with positive flag curvature. \begin{theorem} \label{mainthm-part-3} Let $(G/H,F)$ be an odd dimensional positively curved reversible homogeneous Finsler space in Case I, i.e., with respect to a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ of the compact Lie algebra $\mathfrak{g}=\mathrm{Lie}(G)$ and a fundamental Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{g}$, each root plane of $\mathfrak{h}$ is also a root plane of $\mathfrak{g}$. Assume that $(G/H,F)$ is not equivalent to a homogeneous Finsler space $(G'/H',F')$ with compact simple $G'$. Then $(G/H,F)$ must be equivalent to one of the following: \begin{description} \item{\rm (1)}\quad The homogeneous sphere $$ S^{2n-1}=\mathrm{U}(n)/\mathrm{U}(n-1), \mbox{ or } S^{4n-1}=\mathrm{Sp}(n)\mathrm{U}(1)/\mathrm{Sp}(n-1)\mathrm{U}(1),\quad n>1;$$ \item{\rm (2)}\quad The $\mathrm{U}(3)$-homogeneous Aloff-Wallach's spaces. \end{description} \end{theorem} \subsection{Remarks on normal homogeneous and Riemannian manifolds} When we continue the classification for the odd dimensional positively curved reversible homogeneous Finsler spaces $G/H$ in Case I with $G$ simple, we meet both technical and substantial difficulties. However, if we additionally assume $G/H$ to be normal homogeneous or Riemannian, then the classification can be completed. If $(G/H, F)$ is normal homogeneous, then $F$ is subduced by a bi-invariant Finsler metric on $G$.
Let $\mathfrak{k}$ be the subalgebra generated by $\mathfrak{h}$ and $\mathfrak{t}$, and $K$ be the closed subgroup of $G$ with $\mathrm{Lie}(K)=\mathfrak{k}$. Then we have $\mathfrak{k}=\mathfrak{h}\oplus(\mathfrak{t}\cap\mathfrak{m})$. The same bi-invariant Finsler metric on $G$ defines another normal homogeneous Finsler metric $\bar{F}$ on $G/K$, such that the natural projection from $G/H$ to $G/K$ is a Finslerian submersion. Since $\dim G/H >1$, $G/K$ is an even dimensional coset space admitting positively curved normal homogeneous Finsler metrics, which has been classified in \cite{XD2014}. From this clue, we can easily find the missing homogeneous spheres $$S^{2n-1}=\mathrm{SU}(n)/\mathrm{SU}(n-1) \mbox{ and } S^{4n-1}=\mathrm{Sp}(n)/\mathrm{Sp}(n-1),\quad\forall n>1.$$ If $(G/H, F)$ is Riemannian, then the metric is induced by an $\mathrm{Ad}(H)$-invariant inner product $\langle\cdot,\cdot\rangle$. The submersion technique described above still works when there do not exist two different roots $\alpha$ and $\beta$ such that $\alpha-\beta\in\mathfrak{t}\cap\mathfrak{m}$. Setting $\beta=-\alpha$, the assumption also implies that there does not exist any root contained in $\mathfrak{t}\cap\mathfrak{m}$. In fact, if $\mathfrak{g}$ is simple, and there is a root contained in $\mathfrak{t}\cap\mathfrak{m}$, then we can always find roots $\alpha$ and $\beta$, such that $\alpha\neq\beta$ and $\alpha-\beta\in\mathfrak{t}\cap\mathfrak{m}$. Notice that the lemmas in Subsection \ref{subsection-key-lemmas} can be strengthened in Riemannian geometry. For example, Lemma \ref{key-lemma-2} can be strengthened to the following: \begin{lemma} Let $G/H$ be an odd dimensional positively curved Riemannian homogeneous space, with a bi-invariant orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ and a fundamental Cartan subalgebra $\mathfrak{t}$.
Then there do not exist two roots $\alpha$ and $\beta$ of $\mathfrak{g}$ satisfying the following conditions: \begin{description} \item{\rm (1)}\quad $\alpha$ and $\beta$ are not roots of $\mathfrak{h}$; \item{\rm (2)}\quad $\alpha\pm\beta$ are not roots of $\mathfrak{g}$; \item{\rm (3)}\quad $\alpha$ is the only root contained in $\alpha+\mathfrak{t}\cap\mathfrak{m}$; \item{\rm (4)}\quad $\beta$ is the only root contained in $\beta+\mathfrak{t}\cap\mathfrak{m}$. \end{description} \end{lemma} Assuming that there exist roots $\alpha$ and $\beta$ of $\mathfrak{g}$ such that $\alpha\neq\pm\beta$ and $\alpha-\beta\in\mathfrak{t}\cap\mathfrak{m}$, we can use the $\mathrm{Ad}(H)$-invariance of the inner product $\langle\cdot,\cdot\rangle$ and those strengthened key lemmas to discuss each possible case. Notice that in the situation of $\mathrm{Sp}(2)/\mathrm{U}(1)$ in \cite{XW2015}, it is positively curved for all commutative pairs, so B. Wilking's method is essential here. The case-by-case discussion is not hard, but too long to be presented here. See \cite{BB76} for the original proof (with a correction in \cite{XW2015}), or the recent paper \cite{WZ2015} for a much shorter proof. \section{Appendix: the root systems of compact simple Lie algebras} \label{appendix-section} For each simple Lie algebra $\mathfrak{g}$, we present here the Bourbaki description of the root system $\Delta_\mathfrak{g}$, and the root planes in the classical cases. \smallskip (1)\quad The case $\mathfrak{g}=A_n=\mathfrak{su}(n+1)$ for $n>0$. Let $\{e_1,\ldots,e_{n+1}\}$ denote the standard orthonormal basis of $\mathbb{R}^{n+1}$. Then $\mathfrak{t}$ can be isometrically identified with the subspace $(e_1+\cdots+e_{n+1})^\perp \subset \mathbb{R}^{n+1}$. The root system $\Delta$ is \begin{equation*} \{\pm(e_i-e_j) \mid 1\leqq i < j\leqq n+1\}. \end{equation*} Let $E_{i,j}$ be the matrix with $1$ in the $(i,j)$-entry and all other entries zero.
Then \begin{eqnarray*} e_i&=&\sqrt{-1}E_{i,i}\in\mathfrak{su}(n+1), \mbox{ and} \\ \mathfrak{g}_{\pm(e_i-e_j)}&=&\mathbb{R}(E_{i,j}-E_{j,i})+\mathbb{R}\sqrt{-1}(E_{i,j}+E_{j,i}). \end{eqnarray*} \smallskip (2) The case $\mathfrak{g}=B_n=\mathfrak{so}(2n+1)$ for $n>1$. The Cartan subalgebra $\mathfrak{t}$ can be isometrically identified with $\mathbb{R}^n$ with the standard orthonormal basis $\{e_1,\ldots,e_n\}$. The root system $\Delta$ is \begin{equation*} \{\pm e_i \mid 1\leqq i\leqq n\} \cup \{\pm e_i\pm e_j \mid 1\leqq i<j\leqq n\}. \end{equation*} In terms of matrices, we have \begin{eqnarray*} e_i&=&E_{2i,2i+1}-E_{2i+1,2i}, \\ \mathfrak{g}_{\pm e_i}&=&\mathbb{R}(E_{2i,1}-E_{1,2i})+\mathbb{R}(E_{2i+1,1}-E_{1,2i+1}),\\ \mathfrak{g}_{\pm(e_i-e_j)}&=&\mathbb{R}(E_{2i,2j}+E_{2i+1,2j+1} -E_{2j,2i}-E_{2j+1,2i+1})\\ &&\phantom{X}+\mathbb{R}(E_{2i,2j+1}-E_{2i+1,2j}+E_{2j,2i+1}-E_{2j+1,2i}), \mbox{ and}\\ \mathfrak{g}_{\pm(e_i+e_j)}&=& \mathbb{R}(E_{2i,2j}-E_{2i+1,2j+1}-E_{2j,2i}+E_{2j+1,2i+1})\\ &&\phantom{X}+\mathbb{R}(E_{2i,2j+1}+E_{2i+1,2j}-E_{2j,2i+1}-E_{2j+1,2i}). \end{eqnarray*} \smallskip (3) The case $\mathfrak{g}=C_n=\mathfrak{sp}(n)$ for $n>2$. As before, $\mathfrak{t}$ is isometrically identified with $\mathbb{R}^n$ with the standard orthonormal basis $\{e_1,\ldots,e_n\}$. The root system $\Delta$ is \begin{equation*} \{\pm 2e_i \mid 1\leqq i\leqq n\} \cup \{\pm e_i\pm e_j \mid 1\leqq i<j\leqq n\}. \end{equation*} In terms of matrices, we have \begin{eqnarray*} e_i &=&\mathbf{i}E_{i,i},\\ \mathfrak{g}_{\pm 2e_i}&=&\mathbb{R}\mathbf{j}E_{i,i}+\mathbb{R}\mathbf{k}E_{i,i},\\ \mathfrak{g}_{\pm(e_i-e_j)}&=&\mathbb{R}(E_{i,j}-E_{j,i})+\mathbb{R}\mathbf{i}(E_{i,j}+E_{j,i}),\mbox{ and}\\ \mathfrak{g}_{\pm(e_i+e_j)}&=&\mathbb{R}\mathbf{j}(E_{i,j}+E_{j,i})+ \mathbb{R}\mathbf{k}(E_{i,j}+E_{j,i}). \end{eqnarray*} (4) The case $\mathfrak{g}=D_n=\mathfrak{so}(2n)$ for $n>3$.
The Cartan subalgebra $\mathfrak{t}$ is identified with $\mathbb{R}^n$ with the standard orthonormal basis $\{e_1,\ldots,e_n\}$. The root system $\Delta$ is \begin{equation*} \{\pm e_i\pm e_j \mid 1\leqq i<j\leqq n\}. \end{equation*} In matrices, we have formulas for the $e_i$ and for the root planes for $e_i\pm e_j$ similar to those in the case of $B_n$, i.e. \begin{eqnarray*} e_i&=&E_{2i-1,2i}-E_{2i,2i-1}, \\ \mathfrak{g}_{\pm(e_i-e_j)}&=&\mathbb{R}(E_{2i-1,2j-1}+E_{2i,2j} -E_{2j-1,2i-1}-E_{2j,2i})\\ &&\phantom{X}+\mathbb{R}(E_{2i-1,2j}-E_{2i,2j-1}+E_{2j-1,2i}-E_{2j,2i-1}), \mbox{ and}\\ \mathfrak{g}_{\pm(e_i+e_j)}&=& \mathbb{R}(E_{2i-1,2j-1}-E_{2i,2j}-E_{2j-1,2i-1}+E_{2j,2i})\\ &&\phantom{X}+\mathbb{R}(E_{2i-1,2j}+E_{2i,2j-1}-E_{2j-1,2i}-E_{2j,2i-1}). \end{eqnarray*} \smallskip (5) The case $\mathfrak{g}=E_6$. The Cartan subalgebra $\mathfrak{t}$ can be isometrically identified with $\mathbb{R}^6$ with the standard orthonormal basis $\{e_1,\ldots,e_6\}$. The root system is \begin{equation*} \{\pm e_i\pm e_j \mid 1\leqq i<j\leqq 5\} \cup \{\pm\frac12 e_1\pm\cdots\pm\frac12 e_5\pm\frac{\sqrt{3}}{2}e_6 \mbox{ with an odd number of +'s}\}. \end{equation*} It contains a root system of type $D_5$. \smallskip (6) The case $\mathfrak{g}=E_7$. The Cartan subalgebra can be isometrically identified with $\mathbb{R}^7$ with the standard orthonormal basis $\{e_1,\ldots,e_7\}$. The root system is \begin{eqnarray*} & &\{\pm e_i\pm e_j \mid 1\leqq i<j<7\} \cup \{\pm\sqrt{2}e_7; \frac12(\pm e_1\pm\cdots\pm e_6\pm\sqrt{2}e_7)\nonumber\\ & &\mbox{ with an odd number of plus signs among the first six coefficients}\}. \end{eqnarray*} It contains a root system of type $D_6$. \smallskip (7) The case $\mathfrak{g}=E_8$. The Cartan subalgebra can be isometrically identified with $\mathbb{R}^8$ with the standard orthonormal basis $\{e_1,\ldots,e_8\}$.
The root system $\Delta$ is \begin{eqnarray*} &&\{\pm e_i\pm e_j \mid 1\leqq i<j\leqq 8\} \cup\nonumber\\ &&\{\frac12(\pm e_1\pm\cdots\pm e_8) \mbox{ with an even number of +'s}\}. \end{eqnarray*} It contains a root system of type $D_8$. \smallskip (8) The case $\mathfrak{g}=F_4$. The Cartan subalgebra is isometrically identified with $\mathbb{R}^4$ with the standard orthonormal basis $\{e_1,\ldots,e_4\}$. The root system is \begin{eqnarray*} \{\pm e_i \mid 1\leqq i\leqq 4\} \cup \{\pm e_i\pm e_j \mid 1\leqq i<j\leqq 4\} \cup \{\frac12(\pm e_1\pm\cdots\pm e_4)\}. \end{eqnarray*} It contains the root system of $B_4$. \smallskip (9) The case $\mathfrak{g}=G_2$. The Cartan subalgebra is isometrically identified with $\mathbb{R}^2$ with the standard orthonormal basis $\{e_1,e_2\}$. The root system $\Delta$ is \begin{eqnarray*} \{(\pm\sqrt{3},0),(\pm\frac{\sqrt{3}}{2},\pm\frac{3}{2}), (0,\pm 1),(\pm \frac{\sqrt{3}}{2},\pm\frac{1}{2})\}. \end{eqnarray*}
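As an elementary cross-check of the $G_2$ data above (an illustrative computation of our own, not part of the original text), one can verify numerically that the twelve listed vectors have squared lengths $1$ and $3$ and satisfy the root-system integrality axiom $2\langle\alpha,\beta\rangle/\langle\beta,\beta\rangle\in\mathbb{Z}$:

```python
import numpy as np

s3 = np.sqrt(3.0)
# the twelve G2 roots listed above, as vectors in R^2
roots = np.array(
    [(s3, 0.0), (-s3, 0.0)]
    + [(sx * s3 / 2, sy * 3 / 2) for sx in (1, -1) for sy in (1, -1)]
    + [(0.0, 1.0), (0.0, -1.0)]
    + [(sx * s3 / 2, sy * 0.5) for sx in (1, -1) for sy in (1, -1)]
)
assert roots.shape == (12, 2)

# six short roots of squared length 1 and six long roots of squared length 3
sq = sorted(round(float(a @ a), 10) for a in roots)
assert sq == [1.0] * 6 + [3.0] * 6

# integrality: 2<a,b>/<b,b> is an integer for every pair of roots
for a in roots:
    for b in roots:
        c = 2 * float(a @ b) / float(b @ b)
        assert abs(c - round(c)) < 1e-9
```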
\section{Introduction} Let $X$ be a non-singular quasi-projective variety of dimension $n$ defined over a field of arbitrary characteristic, and let $\mathcal E$ be a vector bundle of rank $r$ over $X$. Let ${\mathbb G_X({d}, \mathcal E)}$ be the Grassmann bundle of $\mathcal E$ over $X$ parametrizing corank $d$ subbundles of $\mathcal E$ with projection $\pi : {\mathbb G_X({d}, \mathcal E)} \to X$, and let $\mathcal Q \gets \pi^*\mathcal E$ be the universal quotient bundle of rank ${d}$ on ${\mathbb G_X({d}, \mathcal E)}$. We denote by $\theta$ the first Chern class $c_1(\det\mathcal Q)= c_1(\mathcal Q)$ of $\mathcal Q$, and call $\theta$ the {\it Pl\"ucker class} of ${\mathbb G_X({d}, \mathcal E)}$. Note that the determinant bundle $\det \mathcal Q$ is isomorphic to the pull-back of the tautological line bundle $\mathcal O_{\mathbb P_X(\wedge^{d} \mathcal E)}(1)$ of $\mathbb P_X(\wedge^{d} \mathcal E)$ by the relative Pl\"ucker embedding over $X$. The purpose of this short note is to give a closed formula for $\pi_{*}\theta^{N}$, the push-forward of powers $\theta^N$ of the Pl\"ucker class $\theta$ to $X$ by $\pi$, in terms of the Schur polynomials in Segre classes of $\mathcal E$, where $\pi_{*} : A^{*+d( r - d )}({\mathbb G_X({d}, \mathcal E)})\to A^{*}(X)$ is the push-forward by $\pi$ between the Chow rings. 
The result is \begin{theorem}\label{theorem:main_theorem} For each integer $N \ge d(r-d)$, we have $$ \pi_* \theta ^N = \sum_{ \vert \lambda \vert = N-d(r-d)} f^{\lambda+\varepsilon} \varDelta _{\lambda}(s(\mathcal E)) $$ in $A^{N-d(r-d)}(X)$, where $\lambda =(\lambda_1 , \dots, \lambda_d)$ is a partition with $\vert \lambda \vert := \sum _{1 \le i \le d} \lambda_{i}$, $\varDelta_{\lambda}(s(\mathcal E)):= \det[s_{\lambda_i+j-i}(\mathcal E)]_{1 \le i,j\le d}$ is the Schur polynomial in Segre classes of $\mathcal E$ corresponding to $\lambda$, $\varepsilon := (r-d)^d = (r-d, \dots, r-d)$, and $f^{\lambda + \varepsilon}$ is the number of standard Young tableaux with shape $\lambda+\varepsilon$. \end{theorem} The Segre classes $s_i(\mathcal E)$ here are the ones satisfying $s(\mathcal E)c(\mathcal E^{\vee})=1$ as in \cite{fujita}, \cite{laksov}, \cite{laksov-thorup}, where $s(\mathcal E)$ and $c(\mathcal E)$ denote respectively the total Segre class and the total Chern class of $\mathcal E$. Note that our Segre class $s_i(\mathcal E)$ differs by the sign $(-1)^i$ from the one in \cite{fulton}. \begin{corollary}[degree formula for Grassmann bundles]% \label{corollary:degree_formula} If $X$ is projective and $\wedge^{d} \mathcal E$ is very ample, then ${\mathbb G_X({d}, \mathcal E)}$ is embedded in the projective space $\mathbb P(H^0(X, \wedge^{d} \mathcal E))$ by the tautological line bundle $\mathcal O_{{\mathbb G_X({d}, \mathcal E)}}(1)$, and its degree is given by $$ \deg {\mathbb G_X({d}, \mathcal E)} = \sum_{ \vert \lambda \vert = n} f^{\lambda+\varepsilon} \int_X \varDelta _{\lambda}(s(\mathcal E)) . $$ \end{corollary} Here a vector bundle $\mathcal F$ over $X$ is said to be {\it very ample} if the tautological line bundle $\mathcal O_{\mathbb P_X(\mathcal F)}(1)$ of $\mathbb P_X(\mathcal F)$ is very ample. 
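The combinatorics entering the theorem can be checked numerically in small cases. The following sketch (a Python illustration with function names of our own choosing, not part of the argument) computes the number of standard Young tableaux via the hook length formula and compares the resulting value $f^{\varepsilon}$, i.e.\ the case of a point ($n=0$, $\lambda=0$), with the closed factorial expression for $\deg \mathbb G(d,r)$ recalled below:

```python
from math import factorial

def hook_lengths(shape):
    # hook lengths of the Young diagram of a partition (weakly decreasing tuple)
    conj = [sum(1 for row in shape if row > c) for c in range(shape[0])]
    return [shape[i] - j + conj[j] - i - 1
            for i in range(len(shape)) for j in range(shape[i])]

def num_syt(shape):
    # number of standard Young tableaux, by the hook length formula
    prod = 1
    for h in hook_lengths(shape):
        prod *= h
    return factorial(sum(shape)) // prod

def deg_grassmannian(d, r):
    # closed factorial formula for deg G(d, r) under the Pluecker embedding
    num = factorial(d * (r - d))
    for l in range(1, d):
        num *= factorial(l)
    den = 1
    for l in range(1, d + 1):
        den *= factorial(r - l)
    return num // den

# with X a point, the only term in the theorem is f^{epsilon}, the number
# of standard Young tableaux of the d x (r-d) rectangle
for d in range(1, 4):
    for r in range(d + 1, d + 4):
        assert deg_grassmannian(d, r) == num_syt((r - d,) * d)
```

For instance, this confirms the classical values $\deg \mathbb G(2,4)=2$ (the Pl\"ucker quadric in $\mathbb P^5$) and $\deg \mathbb G(3,6)=42$.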
Setting $n:=0$, we recover the degree formula of Grassmann varieties, as follows: \begin{corollary}[{\cite[Example 14.7.11 (iii)]{fulton}}] \label{corollary:classical_degree_formula} Let $\mathbb G({d}, r)$ be the Grassmann variety parametrizing codimension $d$ subspaces of a vector space of dimension $r$. Then its degree with respect to the Pl\"ucker embedding is given by $$ \deg \mathbb G({d}, r) =\frac{{({d}(r -{d}))!} \prod_{1 \le l \le {d}-1} l ! }{\prod_{1 \le l \le {d}} (r - l )! } . $$ \end{corollary} \section{Proofs} \begin{proof}[{\it Proof of Theorem \ref{theorem:main_theorem}}] Let $\xi_1 , \dots, \xi_d$ be the Chern roots of the universal quotient bundle $\mathcal Q$. Then we can write $\theta = \xi_1 + \cdots + \xi_d$ formally. Using Pieri's formula \cite[\S2.2]{fulton-young} repeatedly, and applying the Jacobi-Trudi identity \cite[I, (3.4)]{macdonald}, we obtain that $$ \theta^N = \sum_{\vert \mu \vert = N} f^{\mu} \varDelta_{\mu} (\underline{\xi}) = \sum_{\vert \mu \vert = N} f^{\mu} \varDelta_{\mu} (s(\mathcal Q)) , $$ where $\varDelta_{\mu}(\underline{\xi})$ is the Schur polynomial in $\underline{\xi}:= (\xi_1, \dots , \xi_d)$ corresponding to a partition $\mu$. It follows from the push-forward formula of J\'ozefiak-Lascoux-Pragacz \cite[Proposition 1]{jlp} that $$ \pi_*\varDelta_{\mu} (s(\mathcal Q)) = \varDelta_{\mu-\varepsilon} (s(\mathcal E)) . $$ Therefore we obtain $$ \pi_* \theta^N = \sum_{\vert \mu \vert = N} f^{\mu} \varDelta_{\mu-\varepsilon} (s(\mathcal E)) = \sum_{\vert \lambda \vert = N-d(r-d)} f^{\lambda+\varepsilon} \varDelta_{\lambda} (s(\mathcal E)) , $$ where $\lambda$ is a partition, and $\varepsilon := (r-d)^{d} = (r-d,\dots,r-d)$.
\end{proof} \begin{proof}[{\it Proof of Corollary \ref{corollary:degree_formula}}] By the assumption ${\mathbb G_X({d}, \mathcal E)}$ is projective and the tautological line bundle $\mathcal O_{\mathbb P_X(\wedge^{d} \mathcal E)}(1)$ defines an embedding $\mathbb P_X(\wedge^{d} \mathcal E) \hookrightarrow \mathbb P(H^0(X, \wedge^{d} \mathcal E))$. Therefore ${\mathbb G_X({d}, \mathcal E)}$ is considered to be a projective variety in $\mathbb P(H^0(X, \wedge^{d} \mathcal E))$ via the relative Pl\"ucker embedding ${\mathbb G_X({d}, \mathcal E)} \hookrightarrow \mathbb P_X(\wedge^{d} \mathcal E)$ over $X$ defined by the quotient $\wedge^d\pi^*\mathcal E \to \wedge^d \mathcal Q=\det \mathcal Q$. Since the hyperplane section class of ${\mathbb G_X({d}, \mathcal E)}$ is equal to the Pl\"ucker class $\theta$, we obtain the conclusion, taking $N:=\dim {\mathbb G_X({d}, \mathcal E)} = {d} (r -{d}) + {n}$ in Theorem \ref{theorem:main_theorem}. \end{proof} \begin{proof}[{\it Proof of Corollary \ref{corollary:classical_degree_formula}}] The conclusion follows from Corollary \ref{corollary:degree_formula} with $n:=0$, since the number $f^{\lambda+\varepsilon}$ is known to be given as follows (\cite[p.53]{fulton-young} and \cite[p.54, Exercise 9]{fulton-young}): \begin{equation*} f^{\lambda+\varepsilon} = \frac{N! \prod_{1 \le i < j \le d } (\lambda _i - \lambda_j - i+ j ) }{\prod_{1 \le i \le d} (r + \lambda_i - i )!} . \qedhere \end{equation*} \end{proof} \begin{remark} Under the same assumption as in Theorem \ref{theorem:main_theorem}, one can prove a push-forward formula of the following form: \begin{equation*} \pi_* \theta^N = \sum_{\vert k\vert = N-d(r-d)} \frac{N! 
\prod_{1 \le i<j\le d} (k_i -k_j -i + j) } {\prod_{1 \le i \le d}(r + k_i -i)!} \prod_{i=1}^{d} s_{k_i}(\mathcal E) \end{equation*} in $A^{N-d(r-d)}(X) \otimes \mathbb Q$, where $k = (k_1, \dots , k_d) \in \mathbb Z_{\ge 0}^d$ with $\vert k \vert := \sum_i k_i$, and $s_i(\mathcal E)$ is the $i$-th Segre class of $\mathcal E$. \end{remark} \medskip \noindent{\it Acknowledgements.} The authors would like to thank the first referee for his/her detailed comments and invaluable advice: in fact, our original proof is much longer than and completely different from the one given here, which is due to the referee. The authors also thank Professor Hiroshi Naruse and Professor Takeshi Ikeda for useful discussions and kind advice. The first author is supported by JSPS KAKENHI Grant Number 25400053. The second author is supported by JSPS KAKENHI Grant Number 15H02048.%
\section{Introduction} \noindent Modeling and forecasting covariances or volatility matrices of asset returns play a crucial role in many financial fields, such as portfolio allocation \citep{M52} and asset pricing \citep{BEW88}. With the availability of intraday financial data nowadays, it becomes possible to estimate volatilities and co-volatilities of asset returns using high-frequency data directly, which leads to the so-called {\em realized covariance matrix} (\citealp{ABDL03}, \citealp{BS04} and \citealp{BHLS11}). Two major problems arise in the estimation of realized covariance matrices. Firstly, transactions for different assets are typically asynchronous, so that the high-frequency prices of different assets do not change simultaneously. Secondly, it is widely believed that the observed high-frequency prices are accompanied by {\em microstructure noise}, so that the observed prices should be thought of as a noisy version of the true underlying price process. Researchers have proposed several ways to tackle these problems, for example, the overlap intervals method and the previous-tick method by \citet{HY05} and \citet{Z11}, respectively. Moreover, \citet{BMOV12} use a refresh time scheme and \citet{CKP10} propose the pre-averaging approach. Once constructed, realized covariance matrices are analyzed using multivariate models. There are two major issues to be resolved in modeling realized covariance matrices. The first issue is that the model should guarantee the positive definiteness of fitted covariance matrices. A natural choice in this aspect is the family of matrix-valued Wishart distributions, which automatically generate random positive definite matrices without imposing additional constraints. Several models related to the Wishart distribution have been put forward.
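The positive definiteness property of the Wishart family is easy to see constructively: if $X$ is a $d\times m$ matrix of i.i.d.\ standard normal entries with $m\ge d$, then $XX'$ is a Wishart$(m, I_d)$ draw that is almost surely positive definite. A minimal sketch (dimensions and seed are our illustrative choices, not from the models discussed here):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 7   # illustrative dimension (assets) and degrees of freedom, m >= d

def wishart_draw(rng, d, m):
    # a Wishart(m, I_d) sample: X X' with X a (d x m) matrix of i.i.d. N(0, 1) entries
    X = rng.standard_normal((d, m))
    return X @ X.T

draws = [wishart_draw(rng, d, m) for _ in range(100)]

# every draw is symmetric and (almost surely) positive definite --
# no extra constraint is needed to stay inside the cone of covariance matrices
assert all(np.allclose(S, S.T) for S in draws)
assert all(np.linalg.eigvalsh(S).min() > 0 for S in draws)
```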
\citet{GJS09} propose the Wishart Autoregressive (WAR) model where the realized covariance matrix has a conditional distribution which is noncentral Wishart with a noncentrality parameter depending on lagged covariances and a fixed scaling matrix. Later, \citet{GGL12} propose the Conditional Autoregressive Wishart (CAW) model under which the conditional distribution is central Wishart with time dependent scaling matrices. Moreover, \citet{NLY14} generalize the above two models to construct the Generalized Conditional Autoregressive Wishart (GCAW) model. There are other models involving the Wishart distribution, for instance, see \citet{JM09}. The second issue in modeling realized covariance matrices is the high-dimensionality. Indeed, covariance matrices have $d(d+1)/2$ entries for $d$ assets; consequently, the number of parameters in the model for realized covariance matrices grows quickly with $d$. For example, for an unrestricted CAW(2,2) model with $d=10$ assets, as many as 456 parameters are needed, so it is quite challenging to fit such a model in practice. This is probably the major reason why the empirical studies we find in the literature on model fitting for realized covariance matrices are all limited to a {\em small number}, say 3 to 5 assets. Another problem inherent in realized covariance matrices is that they deviate from their population counterpart, the so-called {\em integrated covariance matrix}, when the number of assets is large compared to the sample size (\citealp{JL09}; \citealp{WZ10}). It is therefore important to build statistical models for realized covariance matrices with a large dimension, say several tens. Improved estimators of realized covariance matrices are proposed in \citet{WZ10} with a so-called averaging realized volatility matrix (ARVM) estimator.
\citet{TWYZ11} propose the threshold averaging realized volatility matrix (TARVM) estimator, which is a two-scale estimator that uses the previous-tick method and the threshold technique in constructing realized volatility matrices. Then, inspired by \citet{Z06} and \citet{FW07}, \citet{TWC13} propose the threshold multi-scale realized volatility matrix (TMSRVM) estimator. The TARVM and TMSRVM estimators are shown to be consistent for the integrated covariance matrix when the dimension of the realized covariance matrix, the sample size of intraday points and the length of sampling days go to infinity. In addition, the TMSRVM estimator is shown to attain the optimal convergence rate in the presence of microstructure noise. In an effort to control the parametric dimension of the models, \citet{TWYZ11} propose to first identify a small number of factors for the realized covariance matrices and to fit a vector autoregressive (VAR) model to the vectorized factor covariance matrices. They show that such a factor model significantly reduces the number of parameters needed to fit the realized covariance matrices. However, the VAR specification is not able to ensure the positive definiteness of the predicted factor covariance matrices. In addition (see below), such a VAR fit still needs $\mathcal{O}(r^4)$ parameters with $r$ factors. In this article, we adopt the factor model approach introduced in \citet{TWYZ11} for realized covariance matrices. In order to overcome the aforementioned weakness of their VAR fit for the extracted factors, we propose a diagonal CAW model which has several advantages. Firstly, the proposed CAW model automatically guarantees the positive definiteness of the covariance matrices generated from the model, without imposing additional constraints.
Secondly, as will be shown by the extensive data analysis reported in this paper, our model has excellent empirical performance in terms of the reduction of the number of parameters compared to the VAR approach proposed in \citet{TWYZ11}: indeed, we obtain comparable forecasting performance with far fewer parameters. In a related work, \citet{AM14} also use a combination of factor extraction and CAW modeling and report some empirical studies with $7$ assets, which is still considered a small dimension. In this paper, we focus on a larger collection of assets, with empirical studies carried out for 30 assets. A further difference in this paper is that we also provide a thorough theoretical analysis of both the factor modeling and the CAW estimation. The rest of the paper is organized as follows. Section 2 introduces the model setup and our approach based on a factor model and a diagonal CAW model for the extracted factors. In Section 3, the asymptotic theory is established. In Sections 4 and 5, we report the moderate- and large-scale data analyses on asset prices, respectively. Conclusions are presented in Section 6. Proofs of the asymptotic theory are provided in the Appendix.\\ \section{Methodology} \subsection{Model setup} \noindent Suppose there are $d$ assets and their log price process $\mathbf{X}(t) = \{X_1(t),\cdots,X_d(t)\}'$ follows a continuous diffusion model: \begin{equation} d\mathbf{X}(t) = \mathbf{\mu}_tdt + \mathbf{\sigma}_td\mathbf{W}_t, \quad t \in [0,T], \end{equation} where $\mathbf{\mu}_t$ is a drift in $\mathbb{R}^d$, $\mathbf{W}_t$ a standard $d$-dimensional Brownian motion and $\mathbf{\sigma}_t$ a $d \times d$ matrix. The {\em integrated volatility matrix} for the $t$-th day is defined as \begin{equation} \mathbf{\Sigma}_x(t) = \int_{t-1}^{t} \mathbf{\sigma}_s\mathbf{\sigma}_s' ds, \quad t = 1,\cdots,T.
\end{equation} However, it is widely acknowledged that microstructure noise is inherent in the high-frequency price process, so that we do not observe $X_i(t)$ directly, but rather $Y_i(t_{i\ell})$, a noisy version of $X_i(\cdot)$ at times $t_{i\ell}$, $\ell=1,\cdots, n_i$, $i = 1, \cdots, d$. Here, $n_i$ is the total number of trades and $t_{i\ell}$ is the time of the $\ell$-th trade of asset $i$ during a given trading day $t$. The observations $Y_i(t_{i\ell})$ are allowed to be non-synchronized, i.e. $t_{i\ell} \ne t_{j\ell}$ for any $i \ne j$. In this paper, we assume that \begin{equation} Y_i(t_{i\ell}) = X_i(t_{i\ell}) + \epsilon_i(t_{i\ell}) \end{equation} where the $\epsilon_i(t_{i\ell})$ are i.i.d. microstructure noises with mean zero and variance $\eta_i$, and $\epsilon_i(\cdot)$ and $X_i(\cdot)$ are independent of each other.\\ \subsection{Realized covariance matrix estimator} \noindent Several issues arise in the estimation of $\mathbf{\Sigma}_x(t)$: 1. asynchronous observations across assets; 2. microstructure noise; 3. the number of assets can be larger than the sample size. In this paper, we adopt the threshold multi-scale realized volatility matrix estimator (threshold MSRVM) proposed by \citet{TWC13}, denoted by $\mathbf{\hat{\Sigma}}_x(t)$, $t = 1,\cdots,T$. The threshold MSRVM estimator has many attractive properties; for instance, it is consistent for the high-dimensional integrated co-volatility matrix with the optimal convergence rate. Briefly, the idea of the threshold MSRVM estimator is the following: the previous-tick method is used to construct the raw realized covariance matrices. Then, a multi-scale estimator, essentially an average of those raw estimators, is evaluated. 
In addition, the multi-scale estimator is regularized using a thresholding method, that is, matrix entries below a threshold are set to zero.\\ \subsection{Matrix factor model} \noindent We adopt the factor model proposed by \citet{TWYZ11} to reduce the large dimension of $\mathbf{\Sigma}_x(t)$: \begin{equation} \mathbf{\Sigma}_x(t) = \mathbf{A}\mathbf{\Sigma}_f(t)\mathbf{A}'+\mathbf{\Sigma}_0, \end{equation} for $t=1,\cdots,T$, where the $\mathbf{\Sigma}_f(t)$ are $r \times r$ positive definite factor covariance matrices, $\mathbf{\Sigma}_0$ is a $d \times d$ positive definite constant matrix and $\mathbf{A}$ is a $d \times r$ factor loading matrix normalized by the constraint $\mathbf{A}'\mathbf{A}=\mathbf{I}_r$. As in a standard factor model, only the left-hand side of Equation (4) is observed. The unknown quantities are estimated using the method in \citet{TWYZ11}. Let \begin{equation} \mathbf{\bar{\Sigma}}_x=\frac{1}{T}\sum_{t=1}^{T}\mathbf{\Sigma}_x(t), \quad \mathbf{\bar{S}}_x=\frac{1}{T}\sum_{t=1}^{T}\{\mathbf{\Sigma}_x(t)-\mathbf{\bar{\Sigma}}_x\}^2, \end{equation} and \begin{equation} \mathbf{\bar{\hat{\Sigma}}}_x=\frac{1}{T}\sum_{t=1}^{T}\mathbf{\hat{\Sigma}}_x(t), \quad \mathbf{\bar{\hat{S}}}_x=\frac{1}{T}\sum_{t=1}^{T}\{\mathbf{\hat{\Sigma}}_x(t)-\mathbf{\bar{\hat{\Sigma}}}_x\}^2. \end{equation} Next, the estimator $\mathbf{\hat{A}}$ is obtained by taking as its columns the $r$ orthonormal eigenvectors of $\mathbf{\bar{\hat{S}}}_x$ corresponding to its $r$ largest eigenvalues. Finally, the estimated factor covariance matrices are \begin{equation} \mathbf{\hat{\Sigma}}_f(t) = \mathbf{\hat{A}}'\mathbf{\hat{\Sigma}}_x(t)\mathbf{\hat{A}} \end{equation} for $t = 1,\cdots,T$; and $\mathbf{\Sigma}_0$ is estimated by \begin{equation} \mathbf{\hat{\Sigma}}_0 = \mathbf{\bar{\hat{\Sigma}}}_x - \mathbf{\hat{A}}\mathbf{\hat{A}}'\mathbf{\bar{\hat{\Sigma}}}_x\mathbf{\hat{A}}\mathbf{\hat{A}}'. 
\end{equation} \subsection{CAW modeling for factor covariance matrix} \noindent With the estimated factor covariance matrices $\{\mathbf{\hat{\Sigma}}_f(t)\}$ calculated by (7) and (8), we construct a dynamic structure by fitting a diagonal CAW model to $\mathbf{\tilde{\Sigma}}_f(t)$, where $\mathbf{\tilde{\Sigma}}_f(t) := \mathbf{\Sigma}_f(t)+\mathbf{A}'\mathbf{\Sigma}_0\mathbf{A}$. Here, $\mathbf{\tilde{\Sigma}}_f(t)$ is modeled rather than $\mathbf{\Sigma}_f(t)$ since $\mathbf{\hat{\Sigma}}_f(t)$ is in fact a consistent estimator of the former, and, to our knowledge, it is impossible to construct a consistent estimator of the latter. The model is defined as follows. Let $\mathcal{F}_{t-1} = \sigma(\mathbf{\tilde{\Sigma}}_f(s),s<t)$ be the past history of the process at time $t$. Conditional on $\mathcal{F}_{t-1}$, $\mathbf{\tilde{\Sigma}}_f(t)$ follows a central Wishart distribution \begin{equation} \mathbf{\tilde{\Sigma}}_f(t)|\mathcal{F}_{t-1} \sim \mathcal{W}_r(\nu,\mathbf{S}_f(t)/\nu), \end{equation} with $\nu$ degrees of freedom and scale matrix $\mathbf{S}_f(t)/\nu$. Moreover, the scaling matrix $\mathbf{S}_f(t)$ follows a linear recursion of order $(p,q)$ \begin{equation} \mathbf{S}_f(t) = CC' + \sum_{i=1}^{p}B_i\mathbf{S}_f(t-i)B_i'+\sum_{j=1}^{q}A_j\mathbf{\tilde{\Sigma}}_f(t-j)A_j', \end{equation} where $A_j$, $B_i$ and $C$ are all $r \times r$ matrices of coefficients. In summary, the CAW process depends on the parameters $\{\nu,C, (B_i)_{1 \le i \le p}, (A_j)_{1 \le j \le q}\}$ without additional constraints, so that the total number of parameters is equal to $(p+q)r^2+\frac{r(r+1)}{2}+1= \mathcal{O}(r^2)$, which still grows quickly with the number of factors $r$ and the orders $p$ and $q$. Since the main aim of the paper is to propose a practically feasible model for a large number of assets while retaining efficiency, we restrict ourselves to {\em diagonal} coefficient matrices $C$, $(B_i)_{1 \le i \le p}$ and $(A_j)_{1 \le j \le q}$. 
Therefore, the number of parameters becomes $(p+q+1)r+1 = \mathcal{O}(r)$. Notice that this setup is also supported by the literature (\citealp{MS92}, \citealp{EK95}): researchers tend to use diagonal volatility models to avoid overparameterization, arguing that the variances and covariances depend more on their own past than on the history of other variances or covariances. In the empirical study developed below, we find that the diagonal models achieve a performance comparable to unrestricted ones, while being much more parsimonious and requiring far less computing time. Notice, however, that the asymptotic theory developed below is also valid for unrestricted matrices $A_j$'s, $B_i$'s and $C$. The estimation of the parameters $\theta = (\nu, \text{diag}(C)',\text{diag}(B_i)_{1 \le i \le p}', \text{diag}(A_j)_{1 \le j \le q}')'$ of the diagonal CAW$(p,q)$ model is carried out by maximizing the log-likelihood function using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization procedure. Positivity of the diagonal elements $C_{kk}$, $A_{11,j}$ and $B_{11,i}$, where $1 \le k \le r$, is enforced. The log-likelihood function is \begin{eqnarray} \mathcal{L}(\theta) &=& \frac{1}{T}\sum_{t=1}^{T}\{-\frac{\nu r}{2}\text{ln}(2) - \frac{r(r-1)}{4}\text{ln}(\pi)-\sum_{i=1}^{r}\text{ln}\Gamma(\frac{\nu+1-i}{2}) -\frac{\nu}{2} \text{ln}|\frac{\mathbf{S}_f(t)}{\nu}| \nonumber \\ && + (\frac{\nu-r-1}{2})\text{ln}|\mathbf{\tilde{\Sigma}}_f(t)|-\frac{1}{2}\text{tr}(\nu\mathbf{S}_f(t)^{-1}\mathbf{\tilde{\Sigma}}_f(t))\}. \end{eqnarray} In practice, initial values for $\mathbf{S}_f(t)$ are needed to run the maximization of the log-likelihood function. For example, if the order $(p,q)=(2,2)$ is used, then the initial values $\mathbf{S}_f(1)$ and $\mathbf{S}_f(2)$ are needed. 
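To make the estimation concrete, the following sketch (our own illustration, not code from the paper; all names and inputs are hypothetical) evaluates the scaling recursion (10) for diagonal coefficient matrices and the resulting average Wishart log-likelihood (11), initializing the scaling matrices at the first observed matrices:

```python
import numpy as np
from math import lgamma, log, pi

def log_multigamma(a, r):
    """ln Gamma_r(a): the multivariate gamma function appearing in (11)."""
    return r * (r - 1) / 4.0 * log(pi) + sum(lgamma(a + (1.0 - i) / 2.0)
                                             for i in range(1, r + 1))

def wishart_logpdf(X, nu, S):
    """Log density of W_r(nu, S/nu) evaluated at X, i.e. one summand of (11)."""
    r = X.shape[0]
    _, logdet_scale = np.linalg.slogdet(S / nu)
    _, logdet_X = np.linalg.slogdet(X)
    trace_term = nu * np.trace(np.linalg.solve(S, X))
    return (-nu * r / 2.0 * log(2.0) - log_multigamma(nu / 2.0, r)
            - nu / 2.0 * logdet_scale
            + (nu - r - 1.0) / 2.0 * logdet_X - trace_term / 2.0)

def caw_avg_loglik(Sigmas, nu, c, b_diags, a_diags):
    """Average log-likelihood of a diagonal CAW(p, q) model, cf. (11).

    Sigmas  : list of observed r x r matrices (the series Sigma~_f(t));
    c, b_diags, a_diags : diagonals of C, B_i (i = 1..p) and A_j (j = 1..q).
    The first max(p, q) scaling matrices are set to the first observations.
    """
    p, q = len(b_diags), len(a_diags)
    m = max(p, q, 1)
    S = [Sig.copy() for Sig in Sigmas[:m]]       # initial S_f(1), ..., S_f(m)
    CC = np.diag(np.asarray(c) ** 2)             # CC' for diagonal C
    total = 0.0
    for t in range(m, len(Sigmas)):
        St = CC.copy()
        for i, b in enumerate(b_diags):          # B_i S_f(t-i) B_i'
            St = St + np.outer(b, b) * S[t - 1 - i]
        for j, a in enumerate(a_diags):          # A_j Sigma~_f(t-j) A_j'
            St = St + np.outer(a, a) * Sigmas[t - 1 - j]
        S.append(St)
        total += wishart_logpdf(Sigmas[t], nu, St)
    return total / (len(Sigmas) - m)
```

In an actual fit, this function would be handed to a BFGS routine with the positivity constraints described above. For $r=1$ the Wishart density reduces to a gamma density, which provides a convenient check of the implementation.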
In the empirical analysis of this paper, we take $\mathbf{S}_f(1)=\mathbf{\hat{\Sigma}}_f(1)$ and $\mathbf{S}_f(2)=\mathbf{\hat{\Sigma}}_f(2)$, as $\mathbf{S}_f(t)$ is the conditional expectation of $\mathbf{\tilde{\Sigma}}_f(t)$ for any $t$.\\ \section{Asymptotic theory} \noindent Given a $d$-dimensional vector $\mathbf{x} = (x_1,\cdots,x_d)'$ and a $d \times d$ matrix $\mathbf{U} = (U_{ij})$, define vector and matrix norms as \begin{equation} ||\mathbf{x}||_2 = (\sum_{i=1}^{d}|x_i|^2)^{1/2}, \quad ||\mathbf{U}||_2 = \text{sup}\{||\mathbf{Ux}||_2,||\mathbf{x}||_2=1\}. \end{equation} In fact, $||\mathbf{U}||_2$ is the spectral norm, equal to the square root of the largest eigenvalue of $\mathbf{U}'\mathbf{U}$. In addition, define the Frobenius norm of the $d \times d$ matrix $\mathbf{U} = (U_{ij})$ as \begin{equation} ||\mathbf{U}||_F = \sqrt{\sum_{i=1}^{d}\sum_{j=1}^{d}|U_{ij}|^2}. \end{equation} \noindent The asymptotic theory below uses the following assumptions.\\ (A1) All row vectors of $\mathbf{A}'$ and $\mathbf{\Sigma}_0$ in the factor model (4) satisfy the sparsity condition (13) below. We say that a $d$-dimensional vector $\mathbf{x}=(x_1,\cdots,x_d)'$ is sparse if \begin{equation} \sum_{j=1}^{d} |x_j|^{\delta} \le C\pi(d), \end{equation} where $0 \le \delta <1$, $\pi(d)$ is a deterministic function of $d$ that grows slowly in $d$, such as $\pi(d) = 1$ or $\text{ln}(d)$, and $C$ is a positive constant.\\ (A2) The factor model (4) has $r$ fixed factors, with $\mathbf{A}'\mathbf{A}=\mathbf{I}_r$, and the matrices $\mathbf{\Sigma}_0$ and $\mathbf{\Sigma}_f$ satisfy \begin{eqnarray} ||\mathbf{\Sigma}_0||_2 &<& \infty \nonumber \\ \max_{1 \le t \le T} |\mathbf{\Sigma}_{f,jj}(t)| &=& \mathcal{O}_p(B(T)), \quad j=1,\cdots,r. 
\end{eqnarray} where $1 \le B(T)=o(T)$.\\ (A3) $\max_{1 \le t \le T}\parallel \mathbf{\hat{\Sigma}}_x(t) - \mathbf{\Sigma}_x(t) \parallel_2 = \mathcal{O}_p(A(d,T,n))$, for some rate function $A(d,T,n)$ such that $A(d,T,n)B^5(T)=o(1)$, with $B(T)$ defined in (A2).\\ (A4) The $A_j$'s and $B_i$'s are such that the CAW model (9)-(10) is stationary and ergodic.\\ (A5) The parameter set $\Theta$ for all parameters $A_j$'s, $B_i$'s, $C$ and $\nu$ of the CAW model (9)-(10) is compact.\\ (A6) The Hessian matrix $\partial^2\mathcal{L}(\mathbf{\theta})/\partial\theta_i\partial\theta_j$ converges, as $T$ goes to infinity, to some deterministic matrix function $D(\mathbf{\theta})$ which is of full rank for all $\mathbf{\theta} \in \Theta$.\\ The first two conditions are from \citet{TWYZ11}, where they are used to prove the consistency of $\mathbf{\hat{\Sigma}}_f(t)$. In this paper, as we are using the threshold MSRVM estimator, we take $A(d,T,n)=\pi(d)[e_n(d^2T)^{1/\beta}]^{1-\delta}\text{ln}T$ and $B(T) = \text{ln}T$, where $e_n \sim n^{-1/4}$, consistent with \citet{TWC13}. In addition, the constant $\beta$ can be taken large enough (under reasonable moment conditions) so that $A(d,T,n)B^5(T)$ will go to $0$ as $n$, $d$, and $T$ go to infinity. Consequently, we assume throughout that these rates converge to $0$ in this sense.\\ \noindent \begin{thm} Suppose the models (1), (3) and (4) satisfy Conditions (A1)-(A3). Denote the ordered eigenvalues of $\bar{\mathbf{S}}_x$ by $\lambda_1 \ge \cdots \ge \lambda_d$. Let $\mathbf{a}_1, \cdots, \mathbf{a}_r$ be the eigenvectors of $\bar{\mathbf{S}}_x$ corresponding to the $r$ largest eigenvalues $\lambda_1 , \cdots , \lambda_r$. Also let $\hat{\lambda}_1 \ge \cdots \ge \hat{\lambda}_r$ be the $r$ largest eigenvalues of $\bar{\hat{\mathbf{S}}}_x$ and $\mathbf{\hat{a}}_1, \cdots, \mathbf{\hat{a}}_r$ the corresponding eigenvectors. 
Let $\mathbf{A}=(\mathbf{a}_1, \cdots, \mathbf{a}_r)$ and $\mathbf{\hat{A}}=(\mathbf{\hat{a}}_1, \cdots, \mathbf{\hat{a}}_r)$. As $n$, $d$, $T$ go to infinity, we have \begin{eqnarray} \mathbf{A}'\mathbf{\hat{A}} - \mathbf{I}_r &=& \mathcal{O}_p(A(d,T,n)B(T)), \nonumber \\ \mathbf{\hat{\Sigma}}_f(t) - \mathbf{\Sigma}_f(t) - \mathbf{A}'\mathbf{\Sigma}_0\mathbf{A} &=& \mathcal{O}_p(A^{1/2}(d,T,n)B^{3/2}(T)). \end{eqnarray} \end{thm} \noindent \begin{thm} Suppose that $\mathbf{\hat{\theta}}$ is the maximum likelihood estimator of $\mathbf{\theta}$ based on the data $\mathbf{\hat{\Sigma}}_f(t)$ from the CAW model and $\mathbf{\tilde{\theta}}$ is the maximum likelihood estimator based on the true data $\mathbf{\tilde{\Sigma}}_f(t)$ from the same CAW model. Then, under Conditions (A1)-(A5), \begin{equation} \mathbf{\hat{\theta}}-\mathbf{\tilde{\theta}} = \mathcal{O}_p(A^{1/2}(d,T,n)B^{5/2}(T)). \end{equation} \end{thm} \section{Data analysis 1} \noindent We apply the proposed methodology to two datasets. In this section, we focus on comparing the VAR and CAW models, using 5-minute intraday data of 30 stocks traded in the US stock market over a period of 103 days; the data are obtained directly from Bloomberg.\\ \subsection{Data description} \noindent We use $30$ stocks traded at the New York Stock Exchange, consisting of 27 components of the Dow Jones Index: 3M (MMM), American Express (AXP), AT$\&$T (T), Boeing (BA), Caterpillar (CAT), Chevron (CVX), Coca-Cola (KO), Dupont (DD), ExxonMobil (XOM), General Electric (GE), Goldman Sachs (GS), The Home Depot (HD), IBM (IBM), Johnson $\&$ Johnson (JNJ), JPMorgan Chase (JPM), McDonald's (MCD), Merck (MRK), Nike (NKE), Pfizer (PFE), Procter $\&$ Gamble (PG), Travelers (TRV), UnitedHealth Group (UNH), United Technologies (UTX), Verizon (VZ), Visa (V), Wal-Mart (WMT), Walt Disney (DIS) and three former components of the Dow Jones Index: Honeywell (HON), Citigroup (C) and American International Group (AIG). 
The daily realized covariance matrix is computed as $\mathbf{\hat{\Sigma}}_x(t)=\sum_{j=1}^{M} y_{t,j}y_{t,j}'$, where $y_{t,j}$ is the vector of log-returns of the $30$ stocks over the $j$th 5-minute interval of trading day $t$, excluding the first and last half hour. The sample period starts on May 3, 2013 and ends on September 30, 2013, 103 trading days in total. (We exclude July 4 due to incomplete data.) This generates a series of 103 matrices $\mathbf{\hat{\Sigma}}_x(t)$, each 30 by 30. Descriptive statistics of the realized variances and covariances are provided in Table $\ref{t1}$. We only show some entries due to limited space. The following properties are found: \begin{enumerate} \item Among the 30 realized variances and 435 realized covariances, only 3 realized covariances are skewed to the left rather than to the right. \item All realized variances and covariances have larger kurtosis than the normal distribution, except for 2 realized covariances. \end{enumerate} For the next two subsections, we show the estimation results for both the diagonal CAW and VAR models when the first $k=98$ days are treated as data points.\\ \subsection{Model fitting} \noindent The eigenvalues of the sample variance matrix $\mathbf{\bar{\hat{S}}}_x$ are evaluated and shown in Figure $\ref{f1}$. The plots show that the three largest eigenvalues are much larger than the others, which indicates that the factor number $r=3$ is appropriate. Let $\mathbf{\hat{A}}$ be the matrix whose columns are the eigenvectors of $\mathbf{\bar{\hat{S}}}_x$ corresponding to the three largest eigenvalues. We then calculate the factor volatility matrices $\mathbf{\hat{\Sigma}}_f(t)$, which are 3 by 3. Figure $\ref{f2}$ shows time series plots of the variances and covariances of the factor covariance matrices. Then, we fit the diagonal CAW model to the matrix series. Different orders are used for comparison, namely $(p,q)=(0,1)$, $(p,q)=(1,1)$, $(p,q)=(1,2)$, $(p,q)=(2,1)$ and $(p,q)=(2,2)$. 
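The pipeline just described — daily realized covariance from intraday returns, eigen-decomposition of the sample variance matrix, and projection onto the leading eigenvectors as in (5)-(8) — can be sketched as follows (our own illustrative code; the data and names are hypothetical):

```python
import numpy as np

def realized_cov(intraday_returns):
    """Daily realized covariance Sigma_hat_x(t) = sum_j y_{t,j} y_{t,j}'
    from an M x d array of intraday log-returns."""
    return intraday_returns.T @ intraday_returns

def extract_factors(cov_series, r):
    """Factor extraction of (5)-(8) applied to a list of d x d matrices."""
    T = len(cov_series)
    sigma_bar = sum(cov_series) / T
    s_bar = sum((S - sigma_bar) @ (S - sigma_bar) for S in cov_series) / T
    _, vecs = np.linalg.eigh(s_bar)             # eigenvalues in ascending order
    a_hat = vecs[:, ::-1][:, :r]                # top-r eigenvectors as columns
    factor_covs = [a_hat.T @ S @ a_hat for S in cov_series]  # Sigma_hat_f(t)
    sigma0_hat = sigma_bar - a_hat @ a_hat.T @ sigma_bar @ a_hat @ a_hat.T
    return a_hat, factor_covs, sigma0_hat
```

By construction, `a_hat` has orthonormal columns, so the normalization $\mathbf{A}'\mathbf{A}=\mathbf{I}_r$ of the factor model holds exactly for the estimate.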
\\ \begin{remark} The initial value for each numerical optimization is chosen randomly, and we repeat this 160 times, keeping the run with the largest log-likelihood value. \end{remark} \subsection{VAR model} \noindent For comparison purposes, we also fit the VAR model, advocated in \citet{TWYZ11}, to the vectorized factor covariance matrix $\mathbf{\hat{\Sigma}}_f(t)$, which is a vector with 6 entries. Using the package "vars" in R, we select the VAR(1) model, as all the model selection criteria, namely the Akaike information criterion (AIC), Hannan-Quinn (HQ), Schwarz criterion (SC) and final prediction error (FPE), choose the order 1, as shown in Table $\ref{t2}$. The model is \begin{displaymath} \text{vech}\{\tilde{\mathbf{\Sigma}}_f(t)\} = \mathbf{A}_0 + \mathbf{A}_1 \text{vech}\{\tilde{\mathbf{\Sigma}}_f(t-1)\} + \mathbf{e}(t) \end{displaymath} where $\mathbf{A}_0$ is a $6$-dimensional vector, $\mathbf{A}_1$ is a $6 \times 6$ square matrix, and $\mathbf{e}(t)$ is a $6$-dimensional vector white noise process with zero mean and finite fourth moments. Both $\mathbf{A}_0$ and $\mathbf{A}_1$ are estimated by the least squares method. The fitted VAR(1) model has the coefficients:\\ \noindent $\hat{\mathbf{A}}_0 = 10^{-5} \begin{pmatrix} 54.49 \\ -1.482 \\ 4.937 \\ 7.632 \\ 0.164 \\ 10.27 \end{pmatrix}$, \quad $\hat{\mathbf{A}}_1 = 10^{-2} \begin{pmatrix} 55.8 & -31.2 & 19.6 & -37.4 & -199.1 & -8.0 \\ -4.1 & 20.6 & 3.2 & 7.7 & 45.4 & -2.0 \\ 6.8 & 15.3 & 72.8 & -6.5 & -230.9 & -0.1 \\ 0.5 & -26.5 & -84.6 & 4.4 & 283.8 & -6.0 \\ -0.6 & 2.5 & 2.2 & 1.0 & -3.7 & -0.3 \\ 10.1 & 43.5 & -17.4 & 11.1 & -207.5 & 13.4 \end{pmatrix}$\\ \noindent The fitted models are shown in Figure $\ref{f3}$, which indicates that the VAR(1) model provides an adequate fit to the data. For each of the 6 time series, we show the in-sample fit in the upper panel, where the solid line stands for the real data and the dashed line for the fitted series. 
The residuals are shown in the middle panel, while the ACF and PACF of the residuals are displayed in the lower panel. The ACF and PACF plots show that the residuals are serially uncorrelated.\\ \subsection{Performance comparison in out-of-sample forecasting} \noindent We compare the out-of-sample one-day-ahead forecast performance of the diagonal CAW and VAR models. The one-day-ahead realized covariance forecast is calculated as follows: 1. obtain the one-day-ahead factor covariance matrix by conditional expectation; 2. plug the forecast factor covariance matrix into the factor model (4) to get the predicted realized covariance matrix. The predictive accuracy is measured with both the Frobenius norm and the spectral norm. We take the first $k$ days as data and forecast the next day, where $k=80,\cdots,98$. Every model is re-estimated and new forecasts are generated based upon the new parameter estimates. Then, we average the errors over the 19 periods for the comparison. Moreover, the errors of the inverses of the matrices are also compared. Table $\ref{t3}$ contains the results of the prediction accuracy of the two models using different norms. The main findings are as follows. \begin{enumerate} \item The CAW model with order $(p,q)=(1,1)$ performs best, as it has the smallest error under both the Frobenius and spectral norms. In general, all CAW models have similar performance except the one with order $(p,q)=(0,1)$. \item The CAW models have performance similar to that of the VAR(1) model, except the one with order $(p,q)=(0,1)$. \item However, the diagonal CAW model needs far fewer parameters than the VAR model. Here, the best CAW model, with order $(p,q)=(1,1)$, needs only 10 parameters, just a quarter of the number of parameters that the VAR(1) model needs. \item As $d$ goes up, we can expect $r$ to become larger; in this situation, the number of parameters needed by the diagonal CAW model will be even smaller relative to that needed by the VAR model. 
\end{enumerate} \begin{remark} \noindent \begin{enumerate} \item We carry out predictions for 2 to 5 days ahead in addition to the one-day forecast. We find that the performance of predictions over longer horizons is, in general, similar to that for one day. \item One problem with the VAR model is that it cannot ensure that the predicted factor covariance matrix is positive definite. We have checked that in this data analysis the predicted factor covariance matrices are actually all positive definite. This may be because the estimated factor variances are larger than the covariances (in absolute value). \end{enumerate} \end{remark} \section{Data analysis 2} \subsection{Data description} \noindent We use the same 30 stocks as in the previous section. The raw tick-by-tick trading data are downloaded from the TAQ database of Wharton Research Data Services. The data period starts on January 3, 2012 and ends on December 31, 2012, with 250 trading days in total. We first conduct data cleaning with the procedures introduced in \citet{BG06} and \citet{BHLS09}. The steps are the following:\\ \begin{enumerate} \item Delete entries with a time stamp outside 9:30 am - 4:00 pm, when the exchange is open. \item Delete entries with a time stamp inside 9:30 - 10:00 am or 3:30 - 4:00 pm to eliminate the opening and closing effects of price fluctuation. \item Delete entries with a transaction price equal to zero. \item If multiple transactions have the same time stamp, use the median price. \item Delete entries whose prices are outliers. Let $\{p_i\}_{i=1}^N$ be an ordered tick-by-tick price series. We treat the $i$-th price as an outlier if $|p_i-\bar{p}_i(k)|>3s_i(k)$, where $\bar{p}_i(k)$ and $s_i(k)$ denote the sample mean and sample standard deviation of a neighborhood of $k$ observations around $i$, respectively. For prices at the beginning of the series, which may not have enough neighbors on the left-hand side, we take the $k-i$ additional neighbors from $i+1$ to $k+1$. 
Similar procedures are applied to the prices at the end of the series. \end{enumerate} Then, we construct the threshold MSRVM estimator based on the cleaned tick-by-tick data following the steps in \citet{TWC13}. We set the threshold to be $5\%$ of the largest absolute value of the entries in the matrix. Descriptive statistics of selected realized variances and covariances are provided in Table $\ref{t4}$. In addition, two plots of realized variances and two plots of realized covariances are shown in Figure $\ref{f4}$. We find the following properties: \begin{enumerate} \item All 30 realized variances and 435 realized covariances are skewed to the right, with mean skewness $1.39$. \item All realized variances and covariances have larger kurtosis than the normal distribution, with mean kurtosis $5.92$, showing fat tails. \item The realized variances and covariances fluctuate significantly during the year, as indicated by the graphs. \end{enumerate} For the next two subsections, we show the estimation results for both the diagonal CAW and VAR models when the first $k=220$ days are treated as data points.\\ \subsection{Model fitting} \noindent The eigenvalues of the sample variance matrix $\mathbf{\bar{\hat{S}}}_x$ are evaluated and shown in Figure $\ref{f5}$. We choose four factors for the model, as there is a discernible drop between the fourth and the fifth eigenvalues, even though the second to fourth eigenvalues are much smaller than the largest one. Let $\mathbf{\hat{A}}$ be the matrix whose columns are the eigenvectors of $\mathbf{\bar{\hat{S}}}_x$ corresponding to the four largest eigenvalues. We calculate the factor volatility matrices $\mathbf{\hat{\Sigma}}_f(t)$, which are 4 by 4. Then, we fit the diagonal CAW model to the matrix series. Different orders are used for comparison, namely $(p,q)=(0,1)$, $(p,q)=(1,1)$, $(p,q)=(1,2)$, $(p,q)=(2,1)$ and $(p,q)=(2,2)$. 
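The eigenvalue-gap rule used above to pick the number of factors can be automated; the sketch below (our own illustration, not a procedure from the paper) selects $r$ at the largest ratio of consecutive eigenvalues:

```python
import numpy as np

def choose_num_factors(eigenvalues, r_max=None):
    """Pick r at the largest consecutive-eigenvalue ratio lambda_r / lambda_{r+1}.

    eigenvalues : eigenvalues of the sample variance matrix (any order);
    r_max       : optional cap, useful because ratios between near-zero
                  trailing eigenvalues can be spuriously large.
    """
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]  # descending
    k = len(lam) - 1 if r_max is None else min(r_max, len(lam) - 1)
    ratios = lam[:k] / lam[1:k + 1]
    return int(np.argmax(ratios)) + 1
```

With an eigenvalue profile resembling the one described here — one dominant value, a moderate second-to-fourth group, then a sharp drop — the rule returns $r=4$.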
For every estimation, we randomly choose 60 initial values for each optimization of the log-likelihood and keep the one with the largest log-likelihood value to give the estimated parameters.\\ \subsection{VAR model} \noindent For comparison purposes, we also fit the VAR model to the vectorized factor covariance matrix $\mathbf{\hat{\Sigma}}_f(t)$, which is a vector with 10 entries. Again, using the package "vars" in R, we select the VAR(1) model, as all the model selection criteria AIC, HQ, SC and FPE choose the order 1, as shown in Table $\ref{t5}$. The model is \begin{displaymath} \text{vech}\{\tilde{\mathbf{\Sigma}}_f(t)\} = \mathbf{A}_0 + \mathbf{A}_1 \text{vech}\{\tilde{\mathbf{\Sigma}}_f(t-1)\} + \mathbf{e}(t) \end{displaymath} where $\mathbf{A}_0$ is a $10$-dimensional vector, $\mathbf{A}_1$ is a $10 \times 10$ square matrix, and $\mathbf{e}(t)$ is a $10$-dimensional vector white noise process with zero mean and finite fourth moments. Both $\mathbf{A}_0$ and $\mathbf{A}_1$ are estimated by the least squares method. We denote the estimator of $\mathbf{A}_1$ by $\mathbf{\hat{A}}_1$. We find that $|\mathbf{\hat{A}}_1| = -1.03 \times 10^{-7}$ and that the largest absolute eigenvalue is $0.4906 < 1$, which ensures the stationarity of the VAR model. In addition, the sparsity of $\mathbf{\hat{A}}_1$ is checked. We compute the values $\frac{1}{10^2}\sum_{i,j} |a_{i,j}|^m$ for different $0 \le m < 1$, where $\mathbf{\hat{A}}_1 = \{a_{i,j}\}_{1 \le i,j \le 10}$. We can see from Table $\ref{t6}$ that the average absolute value of the entries of $\mathbf{\hat{A}}_1$ is small (less than 1), so that we can consider $\mathbf{\hat{A}}_1$ as almost sparse.\\ \subsection{Performance comparison in out-of-sample forecasting} \noindent We compare the out-of-sample one-day-ahead forecast performance of the diagonal CAW and VAR models. The prediction of the one-day-ahead realized covariance is calculated as follows: 1. 
Predict the one-day-ahead factor covariance matrix by conditional expectation; 2. Plug the forecast factor covariance matrix into the factor model (4) to get the predicted realized covariance matrix. The predictive accuracy is measured with both the Frobenius norm and the spectral norm. We take the first $k$ days as data and forecast the next day, where $k=220,\cdots,240$. Every model is re-estimated and new forecasts are generated based upon the new parameter estimates. Then, we average the errors over the 21 periods for the comparison. Moreover, the errors of the inverses of the matrices are also compared. Table $\ref{t7}$ contains the prediction errors of the two models using different norms. The main findings are as follows. \begin{enumerate} \item The diagonal CAW model with order $(p,q)=(1,1)$ performs best among the CAW models, as it has the smallest error under both the Frobenius and spectral norms. In general, all CAW models have similar performance except the one with order $(p,q)=(0,1)$. The result also indicates a possibility of over-parameterization for orders larger than $(1,1)$. \item The diagonal CAW models have slightly worse, but comparable, performance relative to the VAR(1) model, except the one with order $(p,q)=(0,1)$. \item The diagonal CAW model needs far fewer parameters than the VAR model. Here, the best CAW model, with order $(p,q)=(1,1)$, needs only 13 parameters, nearly a tenth of the number of parameters that the VAR(1) model needs. \item We carry out predictions for 2 to 5 days ahead in addition to the one-day forecast. We find that the performance of predictions over longer horizons is, in general, similar to that for one day. \end{enumerate} \section{Conclusions} \noindent In the literature, most models dealing with the realized covariance matrix focus on a small number of assets and become infeasible when the dimension is large. 
In order to solve this problem, we propose a factor model with a diagonal CAW model fitted to the factor covariance matrices. Our model performs comparably to the VAR model while requiring far fewer parameters. For example, in the second data analysis, the CAW model with order $(p,q)=(1,1)$ performs similarly to the VAR model, measured in both the Frobenius norm and the spectral norm, but needs only about one tenth of the number of parameters of the latter. In addition, the model ensures the positive definiteness of the predicted covariance matrices. Diagnostic tests for the proposed model are worth considering in a future study.\\ \newpage \begin{table}[H] \centering \caption{Descriptive Statistics for Selected Realized Variances and Covariances \label{t1}} \caption*{We report the descriptive statistics of the realized variances and covariances of the first dataset, namely mean, maximum, minimum, standard deviation, skewness and kurtosis. We only show some entries due to limited space.} \begin{tabular}{|l*{6}{c}|} \hline Stock & Mean & Maximum & Minimum & SD & Skewness & Kurtosis \\ & $*10^{-5}$ & $*10^{-4}$ & $*10^{-5}$ & $*10^{-5}$ & & \\ \hline & \multicolumn{6}{c|}{Realized Variance} \\ \hline MMM & 3.25 & 1.27 & 0.63 & 1.96 & 1.85 & 4.84 \\ AXP & 6.90 & 6.05 & 2.03 & 7.02 & 4.74 & 31.0 \\ T & 4.85 & 1.36 & 1.29 & 2.60 & 1.15 & 0.85 \\ BA & 8.60 & 28.5 & 1.31 & 27.8 & 9.61 & 92.8 \\ CAT & 6.39 & 3.70 & 1.99 & 4.82 & 4.43 & 23.8 \\ \hline & \multicolumn{6}{c|}{Realized Covariance} \\ \hline MMM-AXP & 2.17 & 1.23 & -0.25 & 2.03 & 2.13 & 6.27 \\ MMM-T & 1.40 & 0.77 & -0.37 & 1.48 & 1.86 & 3.80 \\ MMM-BA & 1.94 & 0.95 & -2.33 & 1.85 & 1.36 & 2.67 \\ MMM-CAT & 2.11 & 1.08 & -0.03 & 1.72 & 2.23 & 7.01 \\ AXP-T & 1.96 & 0.96 & -0.66 & 1.95 & 1.70 & 3.15 \\ AXP-BA & 2.18 & 1.17 & -7.27 & 2.39 & 0.45 & 3.36 \\ AXP-CAT & 2.24 & 1.14 & -0.69 & 2.11 & 1.80 & 4.37 \\ T-BA & 1.38 & 0.86 & -1.47 & 1.59 & 1.95 & 5.33 \\ T-CAT & 1.41 & 0.84 & -1.13 & 1.61 & 1.80 & 3.95 \\ 
BA-CAT & 2.12 & 0.98 & -0.32 & 1.76 & 1.65 & 3.98 \\ \hline \end{tabular} \end{table} \newpage \begin{table}[H] \centering \caption{The Selection of the Order of the VAR Model \label{t2}} \caption*{We fit the VAR model to the vectorized factor covariance matrix $\mathbf{\hat{\Sigma}}_f(t)$, which is a vector with 6 entries. Using the package "vars" in R, we select the VAR(1) model by the Akaike information criterion (AIC), Hannan-Quinn (HQ), Schwarz criterion (SC) and final prediction error (FPE). } \begin{tabular}{|l*{5}{c}|} \hline Order & 1 & 2 & 3 & 4 & 5 \\ \hline AIC(n) $*10^{2}$ & -1.107 & -1.102 & -1.098 & -1.094 & -1.090 \\ HQ(n) $*10^{2}$ & -1.102 & -1.093 & -1.084 & -1.077 & -1.068 \\ SC(n) $*10^{2}$ & -1.094 & -1.079 & -1.065 & -1.051 & -1.036 \\ FPE(n) $*10^{-48}$ & 0.830 & 1.313 & 2.181 & 3.268 & 5.772 \\ \hline \end{tabular} \end{table} \newpage \begin{table}[H] \centering \caption{Forecast errors for CAW and VAR models using different norms \label{t3}} \caption*{We report the results of the prediction accuracy of the two models using different norms. In addition, the prediction accuracy of the inverses of the matrices is shown as well. Here, FN is for the Frobenius norm and SN for the spectral norm. 
} \begin{normalsize} \begin{tabular}{|l*{5}{c}|} \hline & \multicolumn{5}{c|}{CAW} \\ \hline Order& Number of Parameters & FN & SN & FN & SN \\ && & & for Inverse & for Inverse\\ && $*10^{-4}$ & $*10^{-4}$ & $*10^{5}$ & $*10^{5}$ \\ \hline $p=0,q=1$ & 7 & 6.21 & 5.66 & 6.30 & 3.84 \\ $p=1,q=1$ & 10 & 4.77 & 3.95 & 6.31 & 3.84 \\ $p=1,q=2$ & 13 & 4.92 & 4.01 & 6.31 & 3.84 \\ $p=2,q=1$ & 13 & 4.96 & 4.01 & 6.31 & 3.84 \\ $p=2,q=2$ & 16 & 4.98 & 4.03 & 6.31 & 3.84 \\ \hline & \multicolumn{5}{c|}{VAR} \\ \hline & Number of Parameters & FN & SN & FN & SN \\ && & & for Inverse & for Inverse\\ && $*10^{-4}$ & $*10^{-4}$ & $*10^{5}$ & $*10^{5}$ \\ \hline VAR(1) & 42 & 4.81 & 3.99 & 6.31 & 3.84 \\ \hline \end{tabular} \end{normalsize} \end{table} \newpage \begin{table}[H] \centering \caption{Descriptive Statistics for Some Selected Realized Variances and Covariances \label{t4}} \caption*{We report the descriptive statistics of the realized variances and covariances of the second dataset, namely mean, maximum, minimum, standard deviation, skewness and kurtosis. 
We only show some entries due to limited space.} \begin{tabular}{|l*{6}{c}|} \hline Stock & Mean & Maximum & Minimum & SD & Skewness & Kurtosis \\ & $*10^{-5}$ & $*10^{-4}$ & $*10^{-5}$ & $*10^{-5}$ & & \\ \hline & \multicolumn{6}{c|}{Realized Variance} \\ \hline AIG & 17.4 & 9.16 & 3.05 & 11.2 & 2.47 & 13.1 \\ AXP & 6.12 & 6.38 & 1.45 & 4.71 & 7.67 & 91.5 \\ BA & 5.46 & 2.19 & 1.11 & 3.25 & 1.70 & 7.33 \\ C & 16.7 & 9.62 & 2.80 & 10.9 & 2.56 & 15.3 \\ \hline & \multicolumn{6}{c|}{Realized Covariance} \\ \hline AIG-AXP & 4.30 & 1.65 & -1.99 & 3.13 & 1.00 & 3.76 \\ AIG-BA & 3.32 & 1.32 & -2.74 & 2.82 & 1.03 & 3.93 \\ AIG-C & 7.72 & 4.54 & -1.47 & 5.99 & 1.85 & 9.37 \\ AXP-BA & 2.41 & 1.01 & -0.59 & 1.95 & 1.26 & 4.37 \\ AXP-C & 5.03 & 2.24 & -0.75 & 3.53 & 1.61 & 6.84 \\ BA-C & 3.69 & 1.75 & -1.40 & 3.13 & 1.66 & 6.30 \\ \hline \end{tabular} \end{table} \newpage \begin{table}[H] \centering \caption{The Selection of the Order of VAR Model \label{t5}} \caption*{We fit the VAR model to the vectorized factor covariance matrix $\mathbf{\hat{\Sigma}}_f(t)$, which is a vector with 10 entries. 
Using the package ``vars'' in R, we select the VAR(1) model, as all the model selection criteria, namely the Akaike information criterion (AIC), Hannan-Quinn (HQ), Schwarz criterion (SC) and final prediction error (FPE), choose order 1.} \begin{tabular}{|l*{5}{c}|} \hline Order & 1 & 2 & 3 & 4 & 5 \\ \hline AIC(n) $*10^{2}$ & -1.949 & -1.943 & -1.937 & -1.933 & -1.932 \\ HQ(n) $*10^{2}$ & -1.939 & -1.925 & -1.910 & -1.898 & -1.888 \\ SC(n) $*10^{2}$ & -1.924 & -1.897 & -1.871 & -1.846 & -1.824 \\ FPE(n) $*10^{-84}$ & 0.234 & 0.416 & 0.796 & 1.292 & 1.751 \\ \hline \end{tabular} \end{table} \newpage \begin{table}[H] \centering \caption{Sparsity of $\mathbf{\hat{A}}_1$ \label{t6}} \caption*{We compute the values $\frac{1}{10^2}\sum_{i,j} |a_{i,j}|^m$ for different $0 \le m < 1$, where $\mathbf{\hat{A}}_1 = \{a_{i,j}\}_{1 \le i,j \le 10}$.} \begin{normalsize} \begin{tabular}{|l*{7}{c}|} \hline $m$ & $0$ & $0.05$ & $0.1$ & $0.15$ & $0.2$ & $0.25$ & $0.3$ \\ \hline $\frac{1}{10^2}\sum_{i,j} |a_{i,j}|^m$ & $1$ & $0.8686$ & $0.7586$ & $0.666$ & $0.5877$ & $0.5212$ & $0.4646$ \\ \hline \end{tabular} \end{normalsize} \end{table} \newpage \begin{table}[H] \centering \caption{Forecast errors for CAW and VAR models using different norms \label{t7}} \caption*{We report the results of the prediction accuracy of the two models using different norms. In addition, the prediction accuracy of the inverse of the matrices is shown as well.
Here, FN denotes the Frobenius norm and SN the spectral norm.} \begin{normalsize} \begin{tabular}{|l*{5}{c}|} \hline & \multicolumn{5}{c|}{CAW} \\ \hline Order& Number of Parameters & FN & SN & FN & SN \\ && & & for Inverse & for Inverse\\ && $*10^{-4}$ & $*10^{-4}$ & $*10^{5}$ & $*10^{5}$ \\ \hline $p=0,q=1$ & 9 & 6.10 & 5.58 & 7.30 & 4.31 \\ $p=1,q=1$ & 13 & 5.27 & 4.80 & 7.31 & 4.31 \\ $p=1,q=2$ & 17 & 5.28 & 4.81 & 7.31 & 4.31 \\ $p=2,q=1$ & 17 & 5.28 & 4.83 & 7.31 & 4.31 \\ $p=2,q=2$ & 21 & 5.28 & 4.82 & 7.31 & 4.31 \\ \hline & \multicolumn{5}{c|}{VAR} \\ \hline & Number of Parameters & FN & SN & FN & SN \\ && & & for Inverse & for Inverse\\ && $*10^{-4}$ & $*10^{-4}$ & $*10^{5}$ & $*10^{5}$ \\ \hline VAR(1) & 110 & 5.18 & 4.76 & 7.31 & 4.30 \\ \hline \end{tabular} \end{normalsize} \end{table} \newpage \begin{figure}[H] \centering \includegraphics[width=15cm]{eigenvalue.png} \caption{Plots of the eigenvalues of $\mathbf{\bar{\hat{S}}}_x$ for the dataset when $k=98$. } \label{f1} \end{figure} \newpage \begin{figure}[H] \centering \includegraphics[width=16cm]{m1ts.png} \caption{Time series of variances and covariances for factor covariance matrices. } \label{f2} \end{figure} \newpage \begin{figure}[H] \centering \subfloat{\includegraphics[width=11.5cm]{m1fit1.png}}\, \subfloat{\includegraphics[width=11.5cm]{m1fit2.png}} \end{figure} \begin{figure}[H] \ContinuedFloat \centering \subfloat{\includegraphics[width=11.5cm]{m1fit3.png}}\, \subfloat{\includegraphics[width=11.5cm]{m1fit4.png}} \end{figure} \setcounter{figure}{4} \begin{figure}[H] \ContinuedFloat \centering \subfloat{\includegraphics[width=11.5cm]{m1fit5.png}}\, \subfloat{\includegraphics[width=11.5cm]{m1fit6.png}} \caption{Plots of the fitted VAR(1) model, the residuals of the fitted VAR(1) model, and the ACF and PACF of the residuals.
} \label{f3} \end{figure} \newpage \setcounter{figure}{5} \begin{figure}[H] \centering \includegraphics[width=15cm]{AIG.png}\, \includegraphics[width=15cm]{BA.png} \end{figure} \begin{figure}[H] \ContinuedFloat \centering \includegraphics[width=15cm]{AIG-C.png}\, \includegraphics[width=15cm]{AXP-BA.png} \caption{Plots of selected realized variances and covariances. } \label{f4} \end{figure} \newpage \setcounter{figure}{4} \begin{figure}[H] \centering \includegraphics[width=15cm]{eigenvalue2.png} \caption{Plots of the eigenvalues of $\mathbf{\bar{\hat{S}}}_x$ for the dataset when $k=220$.} \label{f5} \end{figure} \newpage
\section{Introduction} Spin-charge coupled systems, which consist of itinerant electrons interacting with localized moments, are of special interest in the study of itinerant magnets because of the rich physics arising from the interplay between localized moments and itinerant electrons. A key ingredient behind this rich physics is the effective interactions between localized moments mediated by the itinerant electrons. In the weak coupling limit, the itinerant electrons induce long-ranged effective exchange interactions with oscillating signs.~\cite{Ruderman1954,Kasuya1956,Yosida1957} The weak coupling theory of these effective spin interactions, the so-called Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions, has been successful in the study of metallic magnets, such as transition-metal~\cite{Kasuya1956} and rare-earth~\cite{Yosida1957} compounds, as well as spin-glass behavior in alloys lightly doped with magnetic ions.~\cite{Binder1986} On the other hand, a strong ferromagnetic coupling between localized moments and itinerant electrons may arise in transition-metal systems due to the strong Hund's coupling. In the strong coupling limit, the spins of itinerant electrons are fully polarized along the localized moments. Such a situation is well described by the double-exchange (DE) model. In this model, the kinetic motion of electrons induces an effective ferromagnetic (FM) interaction between localized moments.~\cite{Zener1951,Anderson1955} This is called the DE interaction, which stabilizes a metallic FM state at low temperature. In real materials, however, there generally exists an antiferromagnetic (AFM) super-exchange (SE) interaction between the localized moments as well, and the competition between the two interactions may give rise to nontrivial phenomena. One such example is found in perovskite manganese oxides.
They are renowned for the colossal magnetoresistance,~\cite{Tokura2000,Salamon2001,Dagotto2001} for which inhomogeneity in the competing region between the FM and AFM phases has been studied in relation to the magnetoresistance.~\cite{Uehara1999,Moritomo1999,Moreo1999,Balagurov2001,Lai2010} While the simple picture based on the above argument appears to work well in the manganese oxides, recent numerical studies of DE models on geometrically frustrated lattices~\cite{Kumar2010,Venderbos2012,Ishizuka2013,Reja2015} have revealed the emergence of intermediate phases in the competing region. In the case of the checkerboard and triangular lattices, instabilities toward noncoplanar spin orderings were observed in Monte Carlo (MC) simulations of models with classical localized Heisenberg moments.~\cite{Kumar2010,Venderbos2012} Meanwhile, a thermally-induced intermediate phase with spontaneously-broken spatial inversion symmetry was found in a model with Ising moments on a pyrochlore lattice.~\cite{Ishizuka2013} These results imply that, in frustrated systems, subdominant interactions beyond the simple DE mechanism can give rise to nontrivial phases in the phase competing region. Indeed, in the previous study by the authors,~\cite{Ishizuka2013} effective further-neighbor interactions derived from a strong coupling theory successfully predicted the presence of the intermediate phase. However, the method of the strong coupling expansion was not described in detail, and its accuracy has not been examined systematically. \begin{figure} \includegraphics[width=\linewidth]{fig01.eps} \caption{(Color online). Schematic picture of a pyrochlore lattice. Each panel indicates the two-spin exchange interactions, $J_l$, and the four-spin interactions, $K_l$ ($l$ is an integer). For the latter, the four Ising spins at the two colored sites $\langle k, l \rangle$ and the two black sites $\langle i,j \rangle$ interact with each other.
The arrows on the sites in the right panel indicate the anisotropy axes of the Ising moments, $\bm{n}_i$. See the text for details. } \label{fig:lattice} \end{figure} In this paper, we present the framework of the strong coupling theory for the effective spin interactions between the localized moments in DE models. We illustrate the technique for a spin-ice type DE model on a pyrochlore lattice (Fig.~\ref{fig:lattice}). We show that the second-order expansion gives rise to various four-spin interactions in addition to the two-spin interactions between second- and third-neighbor sites. The results for the effective spin model are compared with numerical results for the DE model to test the accuracy of the theory. We find that the expansion up to second order correctly reproduces the trend of magnetic phases while varying the AFM SE interaction and the itinerant electron density. The organization of this paper is as follows. In Sec.~\ref{sec:m&m}, we introduce the model we consider and elaborate the details of the strong coupling expansion used here. In Sec.~\ref{sec:result}, we investigate the accuracy of this method by comparing the results with those of numerical diagonalization and MC simulation. Finally, an extension of our theory to Heisenberg and $XY$ localized moments is briefly discussed in Sec.~\ref{sec:heisenberg}. Section~\ref{sec:summary} is devoted to the discussion and summary of this paper. \section{Model and Method}\label{sec:m&m} In this section, we introduce the model and method used in this study. In Sec.~\ref{ssec:m&m:model}, we introduce the DE model we consider in this paper. We explain the strong coupling expansion method in Sec.~\ref{ssec:m&m:method}. \subsection{Model}\label{ssec:m&m:model} The DE model we consider in this paper consists of itinerant electrons and classical localized moments that are strongly coupled to each other. The Hamiltonian is given in the general form~\cite{Zener1951,Anderson1955} \begin{eqnarray} H = - \sum_{i,j} \!
t_{ij} c^\dagger_{i} c_{j} + \frac12\sum_{i,j}J_{ij}{\bm S}_i\cdot{\bm S}_j, \label{eq:HDE} \end{eqnarray} where $c_i$ ($c^\dagger_i$) is the annihilation (creation) operator of an itinerant electron and ${\bm S}_i$ is the localized moment at the $i$th site. The electrons are described by spinless fermions, as their spins are perfectly polarized parallel to the localized spins. The first sum is the kinetic term of the itinerant electrons. The transfer integral $t_{ij}$ depends on the relative position of the $i$th and $j$th sites, and also on the localized spins at the two sites:~\cite{mHartmann1996} \begin{eqnarray} t_{ij}&=& t^\text{(bare)}_{ij} \left\{ \cos\frac{\theta_i}2\cos\frac{\theta_j}2\right.\nonumber\\ & &\left.\quad\qquad+ \sin\frac{\theta_i}2\sin\frac{\theta_j}2\exp[\mathrm{i}(\phi_j-\phi_i)] \right\}.\label{eq:transfer} \end{eqnarray} Here, $\theta_i$ and $\phi_i$ are the polar and azimuthal angles of ${\bm S}_i$, respectively; $t^\text{(bare)}_{ij}$ is the transfer integral between the $i$th and $j$th sites in the absence of the coupling to the localized spins. The second sum in Eq.~(\ref{eq:HDE}) is the AFM SE interaction term, in which $J_{ij}$ is the exchange coupling between the $i$th and $j$th sites. In this paper, we particularly consider the case of Ising local moments, i.e., ${\bm S}_i=\pm {\bm n}_i$ with ${\bm n}_i$ being a unit vector parallel to the anisotropy axis at the $i$th site. \subsection{Strong coupling expansion}\label{ssec:m&m:method} To investigate the magnetic properties of the model in Eq.~(\ref{eq:HDE}), understanding the nature of the effective spin interactions mediated by the coupled fermions is of crucial importance.
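Incidentally, the modulus of the complex transfer integral in Eq.~(\ref{eq:transfer}) depends only on the relative angle between the two classical spins. A minimal numerical check of this property (our own sketch, not part of the original analysis; the helper name \texttt{hopping} is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def hopping(theta_i, phi_i, theta_j, phi_j):
    """Spin-dependent transfer integral of Eq. (2), with t_bare = 1."""
    return (np.cos(theta_i / 2) * np.cos(theta_j / 2)
            + np.sin(theta_i / 2) * np.sin(theta_j / 2)
            * np.exp(1j * (phi_j - phi_i)))

for _ in range(1000):
    th_i, th_j = rng.uniform(0, np.pi, size=2)
    ph_i, ph_j = rng.uniform(0, 2 * np.pi, size=2)
    t = hopping(th_i, ph_i, th_j, ph_j)
    # cosine of the relative angle between the two classical spins
    cos_rel = (np.cos(th_i) * np.cos(th_j)
               + np.sin(th_i) * np.sin(th_j) * np.cos(ph_i - ph_j))
    # |t| depends only on the relative angle: |t| = sqrt((1 + cos)/2)
    assert np.isclose(abs(t), np.sqrt((1 + cos_rel) / 2))
```

The phase of $t_{ij}$, by contrast, does depend on the individual spin orientations; it is this phase that is discarded in the approximation introduced next.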
To evaluate the effective spin interactions, we start by approximating the transfer integral in Eq.~(\ref{eq:transfer}) by its amplitude, \begin{eqnarray} \frac{\tilde{t}_{ij}}{|t^\text{(bare)}_{ij}|}= \left|\frac{t_{ij}}{t^\text{(bare)}_{ij}}\right| = \sqrt{\frac{1+\cos\theta_{ij}}2}\label{eq:transfer2}, \end{eqnarray} where $\theta_{ij}$ is the angle between $\bm{S}_i$ and $\bm{S}_j$.~\cite{note_nophase} The DE model with this approximated form of the transfer integral, $\tilde{t}_{ij}$, was used to study the magnetic properties of manganese oxides in several previous works.~\cite{deGennes1959,Kubo1972,Millis1995,Calderon1998} Using this approximation, and considering that the localized moments are of Ising type, we can rewrite the transfer integral as \begin{eqnarray} \tilde{t}_{ij}= t_{ij}^0 + t_{ij}^1 \tilde{S}_i \tilde{S}_j \label{eq:tilde_tij} \end{eqnarray} where $\tilde{S}_i=\bm{S}_i \cdot \bm{n}_i =\pm1$ is the projected spin parameter along ${\bm n}_i$, and \begin{eqnarray} t_{ij}^{0} &=& \frac{|t_{ij}^\text{(bare)}|}2 \left( \cos\frac{\theta^0_{ij}}2 + \sin\frac{\theta^0_{ij}}2 \right),\\ t_{ij}^{1} &=& \frac{|t_{ij}^\text{(bare)}|}2 \left( \cos\frac{\theta^0_{ij}}2 - \sin\frac{\theta^0_{ij}}2 \right), \label{eq:t1ij} \end{eqnarray} are real coefficients. Here, $\theta^0_{ij}$ is the relative angle between ${\bm n}_i$ and ${\bm n}_j$. We consider the hopping term with the coefficient $t_{ij}^0$ as the unperturbed Hamiltonian, $H_0$, and perform the expansion of Matsubara Green's function with respect to the remaining term with $t_{ij}^1$, $H_1$. The Dyson equation is given by \begin{align} G_{i,j}({\rm i}\omega)=g_{i,j}({\rm i}\omega)-\sum_{k,l} g_{i,k}({\rm i}\omega) \left[\,t_{kl}^1\tilde{S}_{k} \tilde{S}_{l}\,\right] G_{l,j}({\rm i}\omega).\label{eq:Dyson} \end{align} Here, $G_{i,j}({\rm i}\omega)$ is Matsubara Green's function and $g_{i,j}({\rm i}\omega)$ is bare Green's function of the unperturbed Hamiltonian. 
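The decomposition in Eq.~(\ref{eq:tilde_tij}) is easy to verify numerically: for Ising moments ${\bm S}_i=\tilde S_i{\bm n}_i$, the relative angle satisfies $\cos\theta_{ij}=\tilde S_i\tilde S_j\cos\theta^0_{ij}$, so the amplitude in Eq.~(\ref{eq:transfer2}) takes only two values per bond. A minimal sketch (our own check, using the pyrochlore $[111]$ anisotropy axes that appear later in the text):

```python
import numpy as np
from itertools import combinations

# local [111] anisotropy axes of the pyrochlore lattice (see Fig. 1)
axes = np.array([[1, -1, -1], [-1, 1, -1], [-1, -1, 1], [1, 1, 1]]) / np.sqrt(3)

# any two axes make the spin-ice angle arccos(-1/3) ~ 109.47 deg
for ni, nj in combinations(axes, 2):
    assert np.isclose(ni @ nj, -1 / 3)

theta0 = np.arccos(-1 / 3)
t0 = 0.5 * (np.cos(theta0 / 2) + np.sin(theta0 / 2))   # Eq. (5)
t1 = 0.5 * (np.cos(theta0 / 2) - np.sin(theta0 / 2))   # Eq. (6)

for Si in (+1, -1):
    for Sj in (+1, -1):
        # for S_i = S~_i n_i one has cos(theta_ij) = S~_i S~_j cos(theta0_ij)
        amp = np.sqrt((1 + Si * Sj * np.cos(theta0)) / 2)   # Eq. (3)
        assert np.isclose(t0 + t1 * Si * Sj, amp)            # Eq. (4)

# the expansion parameter is small for the spin-ice geometry
assert abs(t1 / t0) < 0.2
```

For this geometry $|t^1_{ij}/t^0_{ij}|\approx0.17$, which is the small parameter behind the perturbative treatment of $H_1$ below.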
The term in the square bracket in Eq.~(\ref{eq:Dyson}) is the scattering by $H_1$. The internal energy of the system is given by \begin{eqnarray} E = - \sum_{i,j} \! \tilde{t}_{ij} \langle c^\dagger_{i}c_{j}\rangle+\frac12\sum_{i,j} \! J_{ij} {\bm S}_{i}\cdot{\bm S}_{j}. \label{eq:Eint} \end{eqnarray} By replacing $\langle c^\dagger_{i}c_{j}\rangle$ by $\sum_\omega G_{j,i}({\rm i}\omega)e^{{\rm i}\omega(-0)}$ and expanding Green's functions using the Dyson equation, one obtains the effective spin model: the energy in Eq.~(\ref{eq:Eint}) gives the effective Hamiltonian for the Ising spins $\tilde{S}_i$. In this paper, we consider the expansion up to $O[(t_{ij}^1)^2]$, which leads to effective four-spin interactions in addition to two-spin ones. Regarding the accuracy of this method, we note that in Eq.~(\ref{eq:t1ij}), $|t_{ij}^1|$ becomes small when $\theta^0_{ij}$ is close to $\pi/2$. Hence, the perturbation is expected to be accurate when $\theta^0_{ij} \sim \pi/2$, namely, when the local anisotropy axes are nearly perpendicular to each other. On the other hand, the approximation becomes less accurate as we approach the collinear case, $t_{ij}^{0}=t_{ij}^{1}$. In the following sections, we test this method for a DE model on a pyrochlore lattice with only nearest-neighbor (NN) transfer integrals and localized Ising moments with spin-ice type anisotropy.~\cite{Harris1997,Ramirez1999} In this model, the anisotropy axes of two NN spins have a relative angle of $\theta_{ij}^0\sim109^\circ$, which is close to $\pi/2$. \section{Results}\label{sec:result} In this section, as a benchmark of the method introduced in the previous section, we study the effective spin interactions in a spin-ice type DE model on a pyrochlore lattice. In Sec.~\ref{ssec:result:sct}, we present the effective spin model obtained from the strong coupling expansion.
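Before turning to the results, we note that the truncation at $O[(t^1_{ij})^2]$ is simply the second-order resolvent (Dyson) expansion. A toy numerical illustration on a random one-body Hamiltonian (our own sketch, unrelated to the pyrochlore geometry) shows the expected $O(\lambda^3)$ truncation error:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
H0 = rng.normal(size=(n, n)); H0 = (H0 + H0.T) / 2   # unperturbed part (t^0 hoppings)
V = rng.normal(size=(n, n)); V = (V + V.T) / 2       # perturbation (t^1-type term)
V = V / np.linalg.norm(V, 2)                         # normalize its spectral norm
z = 1j * 0.7                                         # a frequency off the real axis

g = np.linalg.inv(z * np.eye(n) - H0)                # bare Green's function

errs = {}
for lam in (1e-1, 1e-2):
    G_exact = np.linalg.inv(z * np.eye(n) - H0 - lam * V)
    # second-order resolvent (Dyson) series: G ~ g + gVg + gVgVg
    G2 = g + g @ (lam * V) @ g + g @ (lam * V) @ g @ (lam * V) @ g
    errs[lam] = np.linalg.norm(G_exact - G2)

# the truncation error scales as lam^3: shrinking lam by 10 shrinks it by ~1000
assert errs[1e-2] < 5e-3 * errs[1e-1]
```

In the physical problem the expansion parameter is $t^1_{ij}$ itself, so the neglected terms generate interactions of sixth and higher order in the spins.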
The accuracy of this method is investigated by comparing the ground state energy (Sec.~\ref{ssec:result:compete}) and the magnetic phase diagram with respect to $n$ and $J$ (Sec.~\ref{ssec:result:variational}). The latter is obtained by a variational method. The relevance of the phase diagram is further investigated in Sec.~\ref{ssec:result:mc} by an MC method. \subsection{Effective spin interactions} \label{ssec:result:sct} \begin{figure} \includegraphics[width=0.92\linewidth]{fig02.eps} \caption{(Color online). \label{fig:JsKs} Effective spin interactions mediated by itinerant electrons: (a) two-spin and (b) four-spin interactions as functions of the electron density $n$. The definition of the interactions is given in Fig.~\ref{fig:lattice}. } \label{fig:interaction} \end{figure} To test the accuracy of the strong coupling theory, we consider an Ising spin DE model on a pyrochlore lattice with hoppings and SE interactions for NN sites only. The anisotropy axes for the Ising moments are defined along local $[111]$ axes: ${\bm n}_\text{a}=(1/\sqrt3)(1,-1,-1)$, ${\bm n}_\text{b}=(1/\sqrt3)(-1,1,-1)$, ${\bm n}_\text{c}=(1/\sqrt3)(-1,-1,1)$, and ${\bm n}_\text{d}=(1/\sqrt3)(1,1,1)$ (see Fig.~\ref{fig:lattice}). For simplicity, we replace the NN hopping integral $t_{ij}$ by $\tilde{t}_{ij}$ as in Eq.~(\ref{eq:tilde_tij}); the starting Hamiltonian is given by \begin{eqnarray} H = - \sum_{\langle i,j\rangle} \! \tilde{t}_{ij}(c^\dagger_{i} c_{j} + \text{H.c.})+J\sum_{\langle i,j\rangle}{\bm S}_i\cdot{\bm S}_j, \label{eq:HDE2} \end{eqnarray} where the sum $\langle i,j\rangle$ is taken over the NN sites on the pyrochlore lattice. Hereafter, we take the bare hopping for NN sites, $t=1$, as the energy unit. Applying the strong coupling expansion in Sec.~\ref{ssec:m&m:method} to this model, we construct an effective Ising model on the pyrochlore lattice, whose Hamiltonian is given by \begin{align} H_\text{eff} &= \sum_{i,j} \!
J_{ij} \tilde{S}_i \tilde{S}_j + \frac12\sum_{\substack{\langle i,j\rangle,\langle k,l\rangle,\\ i, j\ne k, l}}\!K_{\langle i,j\rangle,\langle k,l\rangle}\tilde{S}_i\tilde{S}_j\tilde{S}_k\tilde{S}_l\nonumber \\ &-\frac{J}3\sum_{\langle i,j\rangle} \! \tilde{S}_i \tilde{S}_j,\label{eq:Heff} \end{align} up to a constant given by the contribution from $H_0$. Here, the first term represents effective two-spin interactions for nearest-, second- and third-neighbor sites, while the second term describes the four-spin interactions. The last term is the AFM SE interaction already present in Eq.~(\ref{eq:HDE2}), where the coefficient $-1/3$ comes from the projection of the Ising spins onto the anisotropy axes at each site. In the model in Eq.~(\ref{eq:HDE2}), the leading order of the expansion gives the NN two-spin interaction, while the second order gives four-spin interactions among the spins on the edges of two bonds at arbitrary distance. The second- and third-neighbor two-spin interactions also arise from the second-order expansion when the two bonds share a site, e.g., $j=k$. In general, the two-spin interactions between sites at Manhattan distance $l$ arise from the $l$th order of the expansion, i.e., they decay exponentially with distance, in contrast to the RKKY interaction. Figure~\ref{fig:interaction} shows the effective spin interactions that arise from the expansion for the model in Eq.~(\ref{eq:HDE2}) as functions of the itinerant electron density $n=\sum_i\langle c_i^\dagger c_i \rangle/N$ ($N$ is the number of sites). We here show the results for $1/4 < n < 1/2$, as the higher-order terms appear to be relevant for lower $n$ (see below) and the flat band may lead to some complexity for higher $n$. The definition of each interaction is shown in Fig.~\ref{fig:lattice}. As shown in Fig.~\ref{fig:interaction}, the dominant interaction mediated by the itinerant fermions is the FM NN interaction $J_1$, consistent with what is expected for DE models.
This dominant interaction gives rise to an instability toward an FM state when the SE interaction $J$ between localized moments is sufficiently weak. In addition, there are other two-spin and four-spin interactions, which are an order of magnitude smaller than $J_1$. These interactions potentially become important in the phase competing region between the DE-driven FM and SE-driven AFM phases, i.e., when the SE interaction cancels the NN FM interaction. Some of these subdominant interactions are also plotted in Fig.~\ref{fig:interaction}. In this density range, most of the four-spin interactions decay rapidly with distance; $K_0$ is the interaction of four spins on the same tetrahedron, while $K_i$ ($i\ne0$) are interactions between spins at further distances. As a consequence, the dominant subleading interactions are $J_2$, $J_3$, and $K_0$ for electron densities $n\gtrsim1/4$. The result implies that, for $n\gtrsim1/4$, considering only a limited number of interactions is sufficient to reproduce the qualitative nature of the model. \subsection{Numerical diagonalization} \label{ssec:result:compete} \begin{figure} \includegraphics[width=\linewidth]{fig03.eps} \caption{(Color online). $n$ dependence of the ground state energy for different spin configurations at $J=0$. The lines are the results obtained by numerical diagonalization of the DE model in Eq.~(\ref{eq:HDE2}), and the symbols are those for the effective spin model in Eq.~(\ref{eq:Heff}) obtained from the strong coupling expansion. See the text for details. } \label{fig:Egs} \end{figure} To examine the accuracy of the above theory, we start by evaluating the ground state energy. The results are compared to those of numerical diagonalization for several different spin configurations.
We here consider $q=0$ orders with all spins on a tetrahedron pointing inward or outward (all-in/all-out; AIAO) and two spins inward and two spins outward (two-in two-out; 2I2O) as examples of magnetic orders with AFM and FM NN correlations, respectively. In addition to these phases, two magnetic orders that were found in related Kondo lattice models, the 32-sublattice (32-sub) order~\cite{Ishizuka2012} and the spin cluster (SC) order,~\cite{Ishizuka2013} are also considered. Figure~\ref{fig:Egs} shows the ground state energy calculated for different magnetic orders in the effective spin model in Eq.~(\ref{eq:Heff}) by taking into account $J_1$, $J_2$, $J_3$, and $K_0$. For $n\gtrsim1/4$, the results of the strong coupling expansion are in accordance with those of numerical diagonalization for the model in Eq.~(\ref{eq:HDE2}). As the NN FM interaction $J_1$ gives the largest contribution to the ground state energy, this result shows that the strong coupling expansion estimates $J_1$ successfully. We also calculated the ground state energy for $n\lesssim1/4$. In this region, however, the results deviate strongly from those of numerical diagonalization (not shown). This is presumably due to the presence of interactions of longer range than second and third neighbors, which are more relevant here than for $n\gtrsim1/4$. \subsection{Variational phase diagram} \label{ssec:result:variational} \begin{figure} \includegraphics[width=\linewidth]{fig04.eps} \caption{(Color online). $J$ dependence of the ground state energy for different spin configurations calculated for the effective spin model in Eq.~(\ref{eq:Heff}) obtained from the strong coupling theory [(a) and (b)] and the DE model in Eq.~(\ref{eq:HDE2}) by numerical diagonalization [(c) and (d)]. The energies are measured from those for the 2I2O state. The electron density was set to $n=0.3$ ($0.4$) for (a) and (c) [(b) and (d)].
} \label{fig:Egs2} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{fig05.eps} \caption{(Color online). Variational phase diagrams for (a) the effective spin model obtained by the strong coupling expansion [Eq.~(\ref{eq:Heff})], (b) the DE model in Eq.~(\ref{eq:HDE2}), and (c) the DE model in Eq.~(\ref{eq:HDE}). The phase diagram in (a) is calculated by considering only $J_1$, $J_2$, $J_3$, and $K_0$. } \label{fig:Jdiag} \end{figure*} Next, to evaluate the effect of the further-neighbor two-spin interactions and four-spin interactions in Eq.~(\ref{eq:Heff}), we study the ground state phase diagram by a variational calculation while varying the AFM SE interaction $J$. As $J$ tends to cancel the effective NN FM interaction from the DE mechanism, this essentially corresponds to studying the effect of the subdominant interactions that arise from the second-order expansion in the strong coupling theory. For this purpose, we here perform a variational calculation comparing the ground state energies of different spin configurations: the 2I2O, 32-sub, SC, and AIAO orders.~\cite{note_variational} Figure~\ref{fig:Egs2} shows the ground state energies for different magnetic orders measured from that of the 2I2O state. Figure~\ref{fig:Egs2}(a) shows the result for the effective spin model in Eq.~(\ref{eq:Heff}) obtained by the strong coupling theory at $n=0.3$. We find all four phases with increasing $J$; the transition takes place from the 2I2O to the 32-sub state at $J\sim0.109$, to the SC state at $J\sim0.121$, and to the AIAO state at $J\sim0.157$. A similar trend is also found in the result calculated by numerical diagonalization of the model in Eq.~(\ref{eq:HDE2}) [Fig.~\ref{fig:Egs2}(c)]. Note that the electronic phase separation is not considered here; see Fig.~\ref{fig:Jdiag}. In the results for $n=0.4$, the strong coupling theory predicts four magnetic phases in the ground state, as shown in Fig.~\ref{fig:Egs2}(b).
On the other hand, as shown in Fig.~\ref{fig:Egs2}(d), there are only three phases for the model in Eq.~(\ref{eq:HDE2}): the 2I2O, 32-sub, and AIAO phases. Performing the above variational calculation while varying $n$ and $J$, we map out the ground state phase diagram. Figure~\ref{fig:Jdiag} shows the variational phase diagrams calculated for the effective spin model and the DE models. Figure~\ref{fig:Jdiag}(a) shows the result for the effective spin model constructed from the strong coupling theory [Eq.~(\ref{eq:Heff})]. For all $1/4<n<1/2$, we found both the SC and 32-sub phases between the 2I2O and AIAO phases. On the other hand, the DE model shows a slightly different phase diagram. Figure~\ref{fig:Jdiag}(b) shows the phase diagram for the model in Eq.~(\ref{eq:HDE2}). In Fig.~\ref{fig:Jdiag}(b), we found only the 32-sub state in the intermediate range of $J$ for $0.321\lesssim n\lesssim 0.466$, while only the SC state appears for $n\lesssim0.288$ and $n\gtrsim0.471$; in the narrow range $0.288\lesssim n\lesssim 0.321$, the two phases appear successively with increasing $J$. Between these phases, as well as between them and the 2I2O and AIAO phases, the system exhibits electronic phase separation, due to the discontinuity in $n$ associated with the first-order magnetic phase transitions. The result indicates that, with varying $n$, there are two regions in the phase diagram where the phase competing region is dominated by either the 32-sub or the SC order. This result is in contrast to the strong coupling theory, where the two phases appear for all $n$. Nevertheless, the strong coupling theory correctly predicts the trend of instabilities toward the intermediate ground states found in the DE model. Finally, we discuss the variational phase diagram for the DE model in Eq.~(\ref{eq:HDE}), without approximating $t_{ij}$ by $\tilde{t}_{ij}$ in Eq.~(\ref{eq:tilde_tij}). The result is shown in Fig.~\ref{fig:Jdiag}(c).
The SC phase appears only for $n\lesssim0.25$, while the 32-sub phase appears for $0.323\lesssim n\lesssim0.418$; no intermediate phase is found for $0.25\lesssim n \lesssim 0.323$ and $n\gtrsim0.418$. Hence, despite the absence of the quantum phase in the hopping term of the approximated model, the two DE models show qualitatively similar ground state phase diagrams. This is a crucial observation for justifying the approximation we used in this paper, as we ignored the effect of the quantum phase in the strong coupling theory. \subsection{Monte Carlo simulation} \label{ssec:result:mc} For further comparison, we study the model in Eq.~(\ref{eq:HDE}) using an unbiased MC method, which allows us to calculate thermodynamic quantities of the model in Eq.~(\ref{eq:HDE}) without any approximation.~\cite{Yunoki1998} This method and its variants~\cite{Motome1999,Furukawa2004} have recently been used to explore unconventional phases in the phase competing region of DE models on frustrated lattices.~\cite{Kumar2010,Venderbos2012,Ishizuka2013,Reja2015} The calculations were done with the system size $N=4\times N_{\rm s}$ with $N_{\rm s}=4^3$ under periodic boundary conditions. Thermal averages of physical quantities were calculated for typically 3600 MC steps after 600 steps for thermalization. Some of the low-temperature data were calculated with longer runs of up to 10400 MC steps. We divided the MC measurements into five bins and estimated the statistical errors by the standard deviations among the bins. \begin{figure} \includegraphics[width=.8\linewidth]{fig06.eps} \caption{(Color online). Results of the Monte Carlo simulation for the DE model in Eq.~(\ref{eq:HDE}) at $n\sim0.38$: (a) $\rho_{22}$, (b) $\rho_{40}$, and (c) $S({\bm q})/N$. See the text for details. } \label{fig:mc} \end{figure} MC results for both the DE and effective spin models at $n=0.25$ have already been published by the authors,~\cite{Ishizuka2013} and are in accordance with the variational phase diagram in Fig.~\ref{fig:Jdiag}(c).
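The binning analysis mentioned above can be sketched as follows (a minimal illustration with synthetic, uncorrelated data; the $1/\sqrt{N_\text{bin}}$ normalization is one common convention and is not specified in the text):

```python
import numpy as np

rng = np.random.default_rng(2)

def binned_error(samples, nbins=5):
    """Split the measurement series into bins and estimate the error of
    the mean from the scatter of the bin averages."""
    bins = np.array_split(np.asarray(samples), nbins)
    bin_means = np.array([b.mean() for b in bins])
    # standard error of the mean estimated from the bins
    return bin_means.std(ddof=1) / np.sqrt(nbins)

# synthetic, uncorrelated "measurements" of an observable such as rho_22
samples = rng.normal(loc=0.375, scale=0.02, size=3600)
err = binned_error(samples)
assert err < 1e-2
```

For correlated MC data, binning additionally absorbs the autocorrelation between successive steps, provided the bins are longer than the autocorrelation time.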
In the DE model, only the SC state was found at the lowest temperatures as the intermediate ground state while increasing $J$. In addition, a fluctuating SC state with spatial inversion symmetry breaking was obtained in the intermediate temperature range above the critical temperature of the SC order. Meanwhile, we did not find the 32-sub order, in accordance with the variational phase diagram. To check the existence of the 32-sub order in the region $n>1/4$, we here performed the MC simulation for the DE model in Eq.~(\ref{eq:HDE}). Figure~\ref{fig:mc} shows the results of the MC simulation for $N=4\times 4^3$ sites at $n=0.38$. Figures~\ref{fig:mc}(a) and \ref{fig:mc}(b) show the ratios of tetrahedra with two-in two-out ($\rho_{22}$) and all-in/all-out ($\rho_{40}$) configurations. For $J=0.06$, the results show an enhancement of $\rho_{22}$ and a decrease of $\rho_{40}$ with decreasing temperature. This is a sign of FM correlations on the NN bonds.~\cite{dHertog2000,Bramwell2001} However, we could not reach the transition temperature for long-range order in our simulations due to freezing at lower temperatures. In contrast, for $J=0.22$, $\rho_{40}$ increases with decreasing temperature, approaching $\rho_{40}\to1$. In addition, the structure factor \begin{eqnarray} S_\alpha({\bm q}) = \frac1N\sum_{i,j\in\alpha} \langle {\bm S}_i\cdot{\bm S}_j\rangle \exp[{\rm i}{\bm q}\cdot({\bm r}_i-{\bm r}_j)] \end{eqnarray} for ${\bm q}=\bm{0}$ grows from zero upon cooling, as shown in Fig.~\ref{fig:mc}(c), indicating the AIAO order in the ground state. On the other hand, the result for $J=0.14$ shows $\rho_{22}\to0.375$ and $\rho_{40}\to0.125$ with decreasing temperature [Figs.~\ref{fig:mc}(a) and \ref{fig:mc}(b)].
This is a sign of the phase transition to the 32-sub order, where $6/16$ ($2/16$) of the tetrahedra are in the 2I2O (AIAO) spin configuration.~\cite{Ishizuka2012} This transition is also confirmed by the spin structure factor for ${\bm q}=(\pi,\pi,\pi)$ plotted in Fig.~\ref{fig:mc}(c). From these results, we conclude that a sequence of states, 2I2O, 32-sub, and AIAO, appears at $n=0.38$ with increasing $J$. On the other hand, at $n=0.25$, the presence of the SC order in the intermediate $J$ region was previously reported,~\cite{Ishizuka2013} consistent with the variational phase diagram in Fig.~\ref{fig:Jdiag}(c). The results support that the effective spin model obtained from the strong coupling theory can predict the trend of intermediate phases in the competing region of the DE model. \section{ Extension to Heisenberg and $XY$ Moments }\label{sec:heisenberg} Finally, we briefly discuss a potential extension of this method to DE models with Heisenberg- or $XY$-type localized moments. In these cases, we cannot use the expansion in Eq.~(\ref{eq:tilde_tij}). One alternative approach is to replace $\cos\theta_{ij}$ in Eq.~(\ref{eq:transfer2}) by ${\bm S}_i\cdot{\bm S}_j$, and consider $\tilde{t}_{ij}-\alpha t_{ij}^\text{(bare)}$ as the perturbation, where $0<\alpha<1$. To leading order, this gives an FM NN interaction of the form $|J_{ij}|\propto\tilde{t}_{ij}$; the classical spin model with this $J_{ij}$ as the NN interaction has been studied in the context of DE models.~\cite{Caparica2000} Another route to evaluate the effective interactions is to use a series expansion, for instance, the Taylor series \begin{eqnarray} \tilde{t}_{ij} &=&\frac{t}{\sqrt{2}}\sqrt{1+{\bm S}_i\cdot{\bm S}_j}\nonumber\\ &=&\frac{t}{\sqrt{2}}\left\{1+\frac{{\bm S}_i\cdot{\bm S}_j}2-\frac{({\bm S}_i\cdot{\bm S}_j)^2}8+\cdots\right\}. \end{eqnarray} This expansion naturally predicts the presence of a positive biquadratic interaction as the subleading term.
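The quadratic truncation above can be checked numerically; writing $x={\bm S}_i\cdot{\bm S}_j$, the remainder is $O(x^3)$ (our own sketch):

```python
import numpy as np

t = 1.0

def exact(x):
    """Approximated DE hopping amplitude, (t/sqrt(2)) * sqrt(1 + S_i.S_j)."""
    return t / np.sqrt(2) * np.sqrt(1 + x)

def quadratic(x):
    """Truncation of the Taylor series at the biquadratic term."""
    return t / np.sqrt(2) * (1 + x / 2 - x**2 / 8)

for x in np.linspace(-0.5, 0.5, 11):
    # the remainder of the quadratic truncation is O(x^3)
    assert abs(exact(x) - quadratic(x)) <= 0.1 * abs(x) ** 3 + 1e-12
```

The negative sign of the $x^2$ coefficient in $\tilde{t}_{ij}$ is what, through the kinetic energy $-\tilde{t}_{ij}\langle c^\dagger_i c_j\rangle$, yields a positive biquadratic term in the effective spin energy.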
Recently, a positive biquadratic interaction has been proposed~\cite{Akagi2012} as the possible origin of the non-coplanar phases found in the weak coupling region of a triangular Kondo lattice model.~\cite{Martin2008,Akagi2010,Kato2010} The same non-coplanar state has also been found in the DE limit, but its mechanism has not been explained so far.~\cite{Kumar2010} A simple argument based on our theory suggests that the non-coplanar phase in the DE limit may also be stabilized by the biquadratic interaction that arises from the strong coupling expansion. This also implies that, in the DE models with Heisenberg moments, even the $O(t_{ij}^1)$ term in the expansion may give rise to unconventional magnetism. \section{Discussions and Summary}\label{sec:summary} To summarize, in this paper we studied the fermion-mediated effective spin interactions in an Ising-spin double-exchange model on a pyrochlore lattice. To evaluate the effective interactions from a microscopic theory, we used a strong coupling expansion and calculated the effective interactions up to second order in the spin-dependent electron hopping. We showed that effective four-spin interactions appear in the second-order expansion. We also found that the effective interactions are limited to short range for this model, at least for electron densities $1/4\le n \le 1/2$. Focusing on this region, we studied the accuracy of the strong coupling theory. From a comparison to the numerical results on the double-exchange model, we found that the strong coupling theory gives a good estimate of the ground state energy. In addition, we studied the ground state phase diagram in the presence of antiferromagnetic super-exchange interactions between the localized moments. The calculations were done by a variational method, and some of the results were also confirmed by Monte Carlo simulation. We found that the strong coupling method correctly captures the trend of intermediate phases found in the double-exchange models.
It is interesting that the expansion up to second order appears to capture the correct trend of the DE model, although the energy scale we are discussing is very small, $E/t\sim0.01$. This may be related to the fact that the ground state of a spin model is often sensitive only to the sign, and not to the magnitude, of the interactions. For the effective spin model we studied, the 32-sublattice and spin-cluster states appear in the region where both second- and third-neighbor exchange interactions are antiferromagnetic.~\cite{Ishizuka2013} Hence, although the second-order expansion is not sufficient to predict correctly which phase wins, it is still successful in identifying the correct candidates for the intermediate phase. This argument casts a possible restriction on the application of the strong coupling theory we proposed: the strong coupling theory predicts the correct trend only when the double-exchange model has a few relevant subdominant interactions in addition to the ferromagnetic nearest-neighbor interaction. However, if this condition is satisfied, we can expect that the theory gives the correct trend. \acknowledgements The authors thank N. Furukawa and M. Udagawa for fruitful discussions. This research was supported by KAKENHI (No. 24340076), the Strategic Programs for Innovative Research (SPIRE), MEXT, and the Computational Materials Science Initiative (CMSI), Japan. HI is supported by JSPS Postdoctoral Fellowships for Research Abroad.
\section*{Motivation} A motivation for studying symplectic Lefschetz fibrations is that, in nice cases, they occur as mirror partners of complex varieties. In fact, given a complex variety $Y$, the Homological Mirror Symmetry (HMS) conjecture of Kontsevich \cite{Ko} predicts the existence of a symplectic mirror partner $X$ with a superpotential $W \colon X \to \mathbb C$. For Fano varieties, the statement of HMS includes the following: {\it The category of A-branes $D(\Lag (W ))$ is equivalent to the derived category of B-branes (coherent sheaves) $D^b (\Coh(Y))$ on $Y$.} Here $D(\Lag (W))$ is the directed Fukaya--Seidel category of vanishing cycles for the symplectic manifold $X$ and $D^b (\Coh(Y))$ is the bounded derived category of coherent sheaves on $Y$. An exciting feature of the conjecture is that the A-side is symplectic whereas the B-side is algebraic, and therefore the conjecture provides a dictionary between the two types of geometry -- algebraic and symplectic -- the mirror map interchanging vanishing cycles on the symplectic side with coherent sheaves on the algebraic side. HMS has been described in several cases: elliptic curves \cite{PZ}, curves of genus two \cite{Se1}, curves of higher genus \cite{E}, punctured spheres \cite{AAEKO}, weighted projective planes and del Pezzo surfaces \cite{AKO1}, \cite{AKO2}, quadrics and intersections of two quadrics \cite{S}, the four-torus \cite{AbS}, Calabi--Yau hypersurfaces in projective space \cite{Sh}, toric varieties \cite{Ab}, Abelian varieties \cite{F}, hypersurfaces in toric varieties \cite{AAK}, varieties of general type \cite{GKR}, and non-Fano toric varieties \cite{BDFKK}. Nevertheless, the HMS conjecture remains open in most cases. The B-side of the conjecture is better understood in the sense that a lot is known about the category of coherent sheaves on algebraic varieties.
In particular, in the Fano and general type cases, the famous reconstruction theorem of Bondal and Orlov says that one can recover the variety from its derived category of coherent sheaves \cite{BO}. In contrast, the A-side is rather mysterious. The intent of this paper is to contribute to the understanding of LG models and subsequently to their categories of vanishing cycles. Using Lie theory, we construct LG models $(\mathcal{O}(H_0),f_H)$, where $\mathcal{O}(H_0)$ is the adjoint orbit of a complex semisimple Lie group and $f_H$ is the height function with respect to an element $H$ of the Cartan subalgebra (see Theorem \ref{thm-prin-1}). Even though we had HMS as an encouragement to pursue our work, we do not attempt to prove any instance of it; rather, we endeavour to contribute to the understanding of the A-side of the conjecture by describing examples of symplectic Lefschetz fibrations in arbitrary dimensions. We calculate the directed Fukaya--Seidel category in the first nontrivial example, namely the adjoint orbit of ${\mathfrak{sl}(2,\mathbb{C})} $. For the case of ${\mathfrak{sl}(3,\mathbb{C})} $ orbits, we discuss (the wild) variations of Hodge diamonds depending on choices of compactifications for our Lefschetz fibrations. Acknowledgement: It is a pleasure to thank Denis Auroux for suggesting corrections and improvements to the text. E.~Gasparim and L.~Grama were supported by Fapesp under grant numbers 2012/10179-5 and 2012/21500-9, respectively. L.~A.~B.~San Martin was supported by CNPq grant n$^{\mathrm{o}}$ 304982/2013-0 and FAPESP grant n$^{\mathrm{o}}$ 2012/17946-1. \section{Definitions} \begin{definition} A \emph{holomorphic Morse function} on a manifold $X$ is a holomorphic function $f \colon X \to \mathbb{P}^1$ (or $f \colon X \to \mathbb C$) which has only non-degenerate critical points.
\end{definition} \begin{definition} \label[definition]{def:TLF} Let $X$ be a complex manifold of dimension $n$ and $f \colon X \to \mathbb{P}^1$ (or $f \colon X \to \mathbb C$) a surjective holomorphic fibration. We say that $f$ is a \emph{topological Lefschetz fibration} if \begin{enumerate} \item \label{item:TLFCritPts} there are finitely many critical points $p_1, \dotsc, p_k$, and $f (p_i) \neq f (p_j)$ for $i \neq j$; \item \label{item:TLFLocMorse} for each critical point $p$, there are complex neighbourhoods $p \in U \subset X $, $f (p) \in V \subset \mathbb{P}^1$ on which $f_{\vert U}$ is represented by the holomorphic Morse function \[ f_{\vert U} (z_1, \dots, z_n) = z_1^2 + \dotsb + z_n^2, \] and such that $\crit f \cap U = \set{p}$; and \item \label{item:TLFlocTriv} the restriction $f_{\reg} := f \vert_{X - \bigcup X_i}$ to the complement of the singular fibres $X_i$ is a locally trivial fibre bundle. \end{enumerate} \end{definition} \begin{definition} \label[definition]{def:SLF} Let $X$ be a complex manifold and $\omega$ a symplectic form making $(X, \omega)$ into a symplectic manifold. We say that a topological Lefschetz fibration is a \emph{symplectic Lefschetz fibration} if \begin{enumerate} \item \label{item:SLFFibres} the smooth part of any fibre is a symplectic submanifold of $(X, \omega)$; and \item \label{item:SLFTanCones} for each critical point $p_i$, the form $\omega_{p_i}$ is non-degenerate on the tangent cone of $X_i$ at $p_i$. \end{enumerate} \end{definition} \section{Non-examples} \begin{proposition} Let $M$ be a compact complex manifold with odd Euler characteristic. Then $M$ does not fibre over $\mathbb P^1$. \end{proposition} \begin{proof} The Euler characteristic is multiplicative, that is, for such a fibration we would have $\chi(M)=\chi( \mathbb P^1) \cdot \chi(F)$, where $F$ is the fibre. Since $\chi( \mathbb P^1)=2$, this would force $\chi(M)$ to be even, a contradiction. \end{proof} \begin{corollary}\cite[cor 2.19]{C} There are no topological fibrations $f \colon \mathbb P^{2n} \to \mathbb P^1$ for $n > 1$.
\end{corollary} \begin{proposition} There are no algebraic fibrations $f \colon \mathbb P^{n} \to \mathbb P^1$ for $n > 1$. \end{proposition} \begin{proof} Fibres of such a fibration would be divisors in $\mathbb P^n$, but by B\'ezout's theorem any two divisors in $\mathbb P^n$ intersect. \end{proof} \section{Good examples in dimension 4} In 4 (real) dimensions, every symplectic manifold admits a Lefschetz fibration after blowing up finitely many points. This is the celebrated result of Donaldson \cite{Do}: {\it For any symplectic 4-manifold $X$, there exists a nonnegative integer $n$ such that the $n$-fold blowup of $X$, topologically $X \#n \overline{\mathbb{CP}}^2$, admits a Lefschetz fibration $f \colon X \#n \overline{\mathbb{CP}}^2 \to S^2$. } In the opposite direction, still in 4D, the existence of a topological Lefschetz fibration on a symplectic manifold guarantees the existence of a symplectic Lefschetz fibration whenever the fibres have genus at least 2 \cite{GoS}: {\it If a 4-manifold $X$ admits a genus $g$ Lefschetz fibration $f \colon X \to \mathbb C$ with $g \ge 2$, then it has a symplectic structure.} Moreover, the existence of 4D symplectic Lefschetz fibrations with arbitrary fundamental group is guaranteed by \cite{ABKP}: {\it Let $\Gamma$ be a finitely presentable group with a given finite presentation $a \colon \pi_g \to \Gamma$. Then there exists a surjective homomorphism $b \colon \pi_h \to \pi_g$ for some $h \ge g$ and a symplectic Lefschetz fibration $f \colon X \to S^2$ such that the regular fibre of $f$ is of genus $h$, $\pi_1 (X) = \Gamma$, and the natural surjection of the fundamental group of the fibre of $f$ onto the fundamental group of $X$ coincides with $a \circ b$.} In general it is possible to construct Lefschetz fibrations in 4D starting with a Lefschetz pencil and then blowing up its base locus (see \cite{Se2}, \cite{Se3}, \cite{Go}).
However, in such cases one needs to fix the indefiniteness of the symplectic form over the exceptional locus by gluing in a correction. Direct constructions of Lefschetz fibrations in higher dimensions are by and large lacking in the literature. This gave us our first motivation to investigate the existence of symplectic Lefschetz fibrations on complex $n$-folds with $n \ge 3$. Our construction does not make use of Lefschetz pencils; we construct our symplectic Lefschetz fibrations directly by taking the height functions that come naturally from the Lie theory viewpoint. \section{A caveat about the norm of complex Morse functions} It is sometimes claimed in the literature that $\abs{f}^2$ is a real Morse function whenever $f$ is a Lefschetz fibration. However, this is in general false. We state this fact as a lemma. \begin{lemma} Let $X$ be a complex manifold of dimension $n$, $f\colon X \rightarrow \mathbb C$ a Lefschetz fibration and let $p$ be a critical point of $f$. Then $p$ is a degenerate critical point of $|f-f(p)|^2$. \end{lemma} \begin{proof} We may choose (complex) charts centred at $p$ such that with respect to this coordinate system $f(z_1, \dotsc, z_n)-f(p) = \sum_{i=1}^n z_i^2$. Hence, it is enough to consider the standard Lefschetz fibration $g\colon \mathbb C^n \rightarrow \mathbb C$ given by $g(z_1, \dotsc, z_n) = \sum_{i=1}^n z_i^2$, and to prove that $0$ is a degenerate critical point of $|g|^2$. In real coordinates \begin{align*} z := (z_1, \dotsc, z_n) &\mapsto \sum_{i=1}^n z_i^2 = \sum_{i=1}^n x_i^2 - y_i^2 + 2\sqrt{-1} x_iy_i, \end{align*} where we have written $z_i = x_i + \sqrt{-1}y_i$.
Then we have the function \begin{align*} \abs{g}^2 \colon \mathbb{C}^n &\to \mathbb{R} \\ z &\mapsto \left[ \sum_{i=1}^n \left( x_i^2 - y_i^2 \right)\right]^2 + 4 \left[ \sum_{i=1}^n x_iy_i \right]^2 \end{align*} whose differentials are \begin{align*} \partial_{x_k} \abs{g}^2 &= 4x_k \sum_{i=1}^n \left( x_i^2 - y_i^2 \right) + 8y_k \sum_{i=1}^n x_i y_i, \\ \partial_{y_k} \abs{g}^2 &= -4y_k \sum_{i=1}^n \left( x_i^2 - y_i^2 \right) + 8x_k \sum_{i=1}^n x_i y_i. \end{align*} Since $\crit \abs{g}^2 \supset g^{-1}(0)$, any neighbourhood of $0$ contains a non-zero critical point of $\abs{g}^2$ and it follows that $0$ is a degenerate critical point of $\abs{g}^2$. \end{proof} \section{SLFs in higher dimensions via Lie theory} Let $\mathfrak g$ be a complex semisimple Lie algebra with Cartan subalgebra $\mathfrak h$, and $\mathfrak h_\mathbb R$ the real subspace generated by the roots of $\mathfrak h$. An element $H\in \mathfrak{h}$ is called \textit{regular} if $\alpha \left( H\right) \neq 0$ for all $\alpha \in \Pi $. \begin{theorem}\label{thm-prin-1} \cite[thm. 
3.1]{GGS1} Given $H_{0}\in \mathfrak{h}$ and $H\in \mathfrak{h}_{\mathbb{R}}$ with $H$ a regular element, the potential $f_{H}:\mathcal{O} \left( H_{0}\right) \rightarrow \mathbb{C}$ defined by \[ f_{H}\left( x\right) = \langle H,x\rangle \qquad x\in \mathcal{O}\left(H_{0}\right) \] has a finite number of isolated singularities and defines a Lefschetz fibration; that is to say, \begin{enumerate} \item the singularities are (Hessian) nondegenerate; \item if $c_{1},c_{2}\in \mathbb{C}$ are regular values then the level manifolds $f_{H}^{-1}\left( c_{1}\right) $ and $f_{H}^{-1}\left( c_{2}\right) $ are diffeomorphic; \item there exists a symplectic form $\Omega $ on $\mathcal{O}\left(H_{0}\right) $ such that the regular fibres are symplectic submanifolds; \item each critical fibre can be written as the disjoint union of affine subspaces contained in $\mathcal O \left( H_0 \right)$, each symplectic with respect to $\Omega$. \end{enumerate} \end{theorem} The full proof is presented in \cite{GGS1}; a particularly interesting component of the proof states: \begin{proposition}\cite[prop. 3.3]{GGS1} A point $x\in \mathcal O (H_0)$ is a critical point of $f_{H}$ if and only if $x\in \mathcal{O}\left( H_{0}\right) \cap \mathfrak{h}=\mathcal{W}\cdot H_{0}$, where $\mathcal{W}$ is the Weyl group. \end{proposition} Having found a construction of Lefschetz fibrations in higher dimensions, the next step toward a description of the Fukaya--Seidel category of the corresponding LG model would involve the identification of the Fukaya category of a regular fibre. Thus, we studied the diffeomorphism type of a regular level for the Lefschetz fibration. This first required the realisation of the adjoint orbit as the cotangent bundle of a flag manifold, as we now describe. We choose a set of positive roots $\Pi ^{+}$ and simple roots $\Sigma \subset \Pi ^{+}$ with corresponding Weyl chamber $\mathfrak{a}^{+}$.
A subset $\Theta \subset \Sigma $ defines a parabolic subalgebra $\mathfrak{p}_{\Theta }$ with parabolic subgroup $P_{\Theta }$ and a flag manifold $\mathbb{F}_{\Theta }=G/P_{\Theta }$. An element $H_{\Theta }\in \mathrm{cl}\mathfrak{a}^{+}$ is \textit{characteristic} for $\Theta \subset \Sigma $ if $\Theta =\{\alpha \in \Sigma :\alpha \left( H_{\Theta }\right) =0\}$. Let $Z_{\Theta }=\{g\in G:\mathrm{Ad}\left( g\right) H_{\Theta }=H_{\Theta }\}$ be the centraliser in $G$ of the characteristic element $H_{\Theta }$. \begin{theorem}\cite[thm. 2.1]{GGS2}\label{iso} \label{teodifeocotan}The adjoint orbit $\mathcal{O}\left( H_{\Theta }\right) =\mathrm{Ad}\left( G\right) \cdot H_{\Theta }\approx G/Z_{\Theta }$ of the characteristic element $H_{\Theta }$ is a $C^{\infty }$ vector bundle over $\mathbb{F}_{\Theta }$ isomorphic to the cotangent bundle $T^{\ast }\mathbb{F}_{\Theta }$. Moreover, we can write down a diffeomorphism $\iota :\mathrm{Ad}\left( G\right) \cdot H_{\Theta }\rightarrow T^{\ast }\mathbb{F}_{\Theta }$ such that \begin{enumerate} \item $\iota $ is equivariant with respect to the actions of $K$, that is, for all $k\in K$, \begin{equation*} \iota \circ \mathrm{Ad}\left( k\right) =\widetilde{k}\circ \iota \end{equation*} where $K$ is the compact subgroup in the Iwasawa decomposition $G=KAN$, and $\widetilde{k}$ is the lifting to $T^{\ast }\mathbb{F}_{\Theta }$ (via the differential) of the action of $k$ on $\mathbb{F}_{\Theta }$. \item The pullback of the canonical symplectic form on $T^{\ast }\mathbb{F}_{\Theta }$ by $\iota $ is the (real) Kirillov--Kostant--Souriau form on the orbit. \end{enumerate} \end{theorem} Viewing the orbit as the cotangent bundle of a flag manifold, we can identify the topology of the fibres in terms of the topology of the flag. \begin{corollary}\cite[cor. 4.5]{GGS1} The homology of a regular fibre coincides with the homology of $\mathbb{F}_{\Theta }\setminus \mathcal{W}\cdot H_{\Theta}$.
In particular, the middle Betti number is $k-1$, where $k$ is the number of singularities of the fibration (equal to the number of elements in $\mathcal W \cdot H_\Theta$). \end{corollary} For the case where singular fibres have only one critical point, we have the following corollary. \begin{corollary}\cite[cor. 5.1]{GGS1} \label{cor.sing} The homology of the singular fibre through $w H_\Theta$, $w \in \mathcal{W}$, coincides with that of \[ \mathbb{F}_{H_\Theta} \setminus \set{uH_\Theta \in \mathcal{W}\cdot H_{\Theta} | u\neq w}. \] In particular, the middle Betti number of this singular fibre equals $k-2$, where $k$ is the number of singularities of the fibration $f_H$. \end{corollary} \section{Compactifications and their Hodge diamonds} Theorem \ref{iso} makes it clear that the adjoint orbits considered here are not compact. We want to compare the behaviour of vanishing cycles on $\mathcal O(H_0)$ and on its compactifications. Expressing the adjoint orbit as an algebraic variety, we homogenise its ideal to obtain a projective variety, which serves as a compactification. We calculate the sheaf-cohomological dimensions $\dim H^q (X, \Omega^p)$ for the compactified orbits as well as for the fibres of the SLF. These dimensions shall be called the diamond for the given space; indeed, this is well known as the Hodge diamond in the non-singular case. Calculating such diamonds is computationally heavy, so we used Macaulay2. Choosing a compactification is in general a delicate task: a different choice of generators for the defining ideal of the orbit can result in completely different diamonds of the corresponding compactification, as the example of Figure \ref{badhd} will show. To illustrate the behaviour of diamonds, we present some examples of adjoint orbits for $\mathfrak{sl}(3, \mathbb C)$, for which there are three isomorphism types. We chose one that compactifies smoothly and another whose compactification acquires degenerate singularities.
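The generator dependence just described can be reproduced in a small toy computation. The sketch below uses the twisted cubic rather than the orbit ideals of this paper: two generating sets of the same affine ideal are homogenised generator by generator, and a Gr\"obner-basis reduction shows that the resulting homogeneous ideals differ:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')

# Two generating sets of the SAME affine ideal (the twisted cubic in A^3):
# <y - x**2, z - x**3> = <y - x**2, z - x*y>, since
# z - x*y = (z - x**3) - x*(y - x**2).
gens1 = [y - x**2, z - x**3]
gens2 = [y - x**2, z - x*y]

def homogenize(f):
    # homogenise a single generator with respect to the new variable w
    return sp.Poly(f, x, y, z).homogenize(w).as_expr()

H1 = [homogenize(f) for f in gens1]   # [w*y - x**2, w**2*z - x**3]
H2 = [homogenize(f) for f in gens2]   # [w*y - x**2, w*z - x*y]

# w*z - x*y lies in <H2> by construction; test whether it lies in <H1>
G1 = sp.groebner(H1, x, y, z, w, order='grevlex')
_, remainder = G1.reduce(w*z - x*y)
print(remainder == 0)   # False: the two homogeneous ideals differ
```

Here a degree count already shows the answer: $wz - xy$ has degree 2, so it could only be a constant multiple of $wy - x^2$, which it is not. This is the same phenomenon, in miniature, as $I_{\hom}\neq J_{\hom}$ for the orbit ideals.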
\subsection{An SLF with 3 critical values} In $\mathfrak{sl} (3, \mathbb C)$, consider the orbit $\mathcal O (H_0)$ of \[ H_0 = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \] under the adjoint action. We fix the element \[ H = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \] to define the potential $f_H$. A general element $A \in \mathfrak{sl} \left( 3, \mathbb{C} \right)$ has the form \begin{equation}\label{gen3}A=\left(\begin{matrix} x_1 & y_1& y_2\\ z_1 & x_2 & y_3\\ z_2 & z_3 & -x_1 - x_2 \end{matrix}\right)\text{.} \end{equation} In this example, the adjoint orbit $\mathcal{O} (H_0)$ consists of all the matrices with minimal polynomial $(\lambda + 1)(\lambda - 2)$, that is, those satisfying $(A + \id)(A - 2\id)=0$. So the orbit is the affine variety cut out by the ideal $I$ generated by the polynomial entries of $(A + \id)(A - 2\id)$. To obtain a projectivisation of $X=\mathcal{O}(H_0)$, we first homogenise its ideal $I$ with respect to a new variable $t$, then take the corresponding projective variety. In this case, the projective variety $\overline{X}$ is a smooth compactification of $X$ and has Hodge diamond: \[ \begin{array}{ccccccccc} &&&& 1 &&&& \\ &&& 0 && 0 &&& \\ && 0 && 2 && 0 && \\ & 0 && 0 && 0 && 0 & \\ 0 && 0 && 3 && 0 && 0 \\ & 0 && 0 && 0 && 0 & \\ && 0 && 2 && 0 && \\ &&& 0 && 0 &&& \\ &&&& 1 &&&& \end{array} \text{.} \] We now calculate the Hodge diamond of a compactified regular fibre. The potential corresponding to our choice of $H$ is $f_H = x_1 - x_2$. The critical values of this potential are $\pm 3$ and $0$. Since all regular fibres of an SLF are isomorphic, it suffices to choose the regular value $1$. We then define the regular fibre $X_1$ as the variety in $\mathfrak {sl} (3, \mathbb C) \cong \mathbb C^8$ corresponding to the ideal $J$ obtained by summing $I$ with the ideal generated by $f_H-1$. We then homogenise $J$ to obtain a projectivisation $\overline{X}_1$ of the regular fibre $X_1$.
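The generators of the ideal $I$ can be produced mechanically. A sketch in sympy (a stand-in for the Macaulay2 computation actually used; the symbol names follow the general element in (\ref{gen3})):

```python
import sympy as sp

x1, x2, y1, y2, y3, z1, z2, z3 = sp.symbols('x1 x2 y1 y2 y3 z1 z2 z3')

# General traceless element of sl(3, C), as in the text
A = sp.Matrix([[x1, y1, y2],
               [z1, x2, y3],
               [z2, z3, -x1 - x2]])

# The orbit of H0 = diag(2, -1, -1) is cut out by the entries of (A + 1)(A - 2)
M = (A + sp.eye(3)) * (A - 2*sp.eye(3))
generators = [sp.expand(entry) for entry in M]   # 9 polynomial generators of I
```

Homogenising each generator with respect to $t$ then yields the projective compactification described above.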
The Hodge diamond of $\overline{X}_1$ is: \[ \begin{array}{ccccccc} &&& 1 &&& \\ && 0 && 0 && \\ & 0 && 2 && 0 & \\ 0 && 0 && 0 && 0 \\ & 0 && 2 && 0 & \\ && 0 && 0 && \\ &&& 1 &&& \end{array}\text{.} \] \begin{remark} An interesting feature to observe here is the absence of middle cohomology for the regular fibre. If the potential extended to this compactification without acquiring degenerate singularities then, because $f_H$ has singularities, the fundamental lemma of Picard--Lefschetz theory would imply the existence of vanishing cycles, contradicting the absence of middle cohomology. \end{remark} Generalising this example to the case of $\mathfrak{sl}(n,\mathbb C)$, we obtained: \begin{proposition}\cite[Prop. 2]{CG} Let $H_0 = \diag (n, -1, \dotsc, -1)$. Then the orbit of $H_0$ in $\mathfrak{sl}(n+1,\mathbb C)$ compactifies holomorphically to a trivial product. \end{proposition} \begin{corollary} \cite[Cor 3]{CG} Choose $H = \diag(1,-1,0, \dotsc, 0) $ and $H_0 = \diag (n, -1, \dotsc, -1)$ in $\mathfrak{sl}(n+1,\mathbb C)$. Any extension of the potential $f_H$ to the compactification $\mathbb P^n\times {\mathbb P^n}^{\ast}$ of the orbit $\mathcal O (H_0)$ cannot be of Morse type; that is, it must have degenerate singularities. \end{corollary} \subsection{An SLF with 4 critical values}\label{reg1} In $\mathfrak{sl}\left( 3,\mathbb{C}\right) $ we take \[ H=H_{0}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{array} \right) , \] which is regular since it has 3 distinct eigenvalues. Then $X= \mathcal{O}\left( H_{0}\right) $ is the set of matrices in $\mathfrak{sl}\left( 3,\mathbb{C}\right) $ with eigenvalues $1,0,-1$. This set forms a complex submanifold of complex dimension $6$ (real dimension $12$). In this case $\mathcal{W}\simeq S_3$ acts via conjugation by permutation matrices. Therefore, the potential $f_H=x_1-x_2$ has 6 singularities; namely, the 6 diagonal matrices with diagonal entries $1,0,-1$.
The four singular values of $f_H$ are $\pm 1, \pm2$. Thus, $0$ is a regular value for $f_H$. Let $A \in \mathfrak{sl}(3,\mathbb C)$ be a general element written as in (\ref{gen3}), and let $p= \det(A)$, $q=\det(A-\id)$. The ideals $\langle p,q\rangle $ and $\langle p-q,q\rangle $ are clearly identical and either of them defines the orbit through $H_0$ as an affine variety in $ \mathfrak{sl}\left( 3,\mathbb{C}\right) $. Now $$I = \langle p,q,f_H\rangle \qquad J=\langle p,p-q,f_H\rangle $$ are two identical ideals cutting out the regular fibre $X_0$ over $0$. Let $I_{\hom}$ and $J_{\hom}$ be the respective homogenisations and notice that $I_{\hom}\neq J_{\hom}$, so that they define distinct projective varieties, and thus two distinct compactifications \begin{align*} \overline X_0^I &= \Proj (\mathbb C[ x_1,x_2,y_1,y_2,y_3,z_1,z_2,z_3,t]/I_{\hom}) \quad \text{and} \\ \overline X_0^J &= \Proj(\mathbb C[x_1,x_2,y_1,y_2,y_3,z_1,z_2,z_3,t]/J_{\hom}) \end{align*} of $X_0$. Their diamonds are given in Figure~\ref{badhd}. \begin{figure}[htp] \begin{subfigure}[b]{0.45\textwidth} $\begin{array}{ccccccccccc} &&&&& 1 \cr &&&& 0 && 0 \cr &&& 0 && 1 && 0 \cr && 0 && 0 && 0 && 0 \cr & 0 && 0 && 1 && 0 && 0 \cr 0 && 16 && ? && ? && 16 && 0 \cr & 0 && 0 && 1 && 0 && 0 \cr && 0 && 0 && 0 && 0 \cr &&& 0 && 1 && 0 \cr &&&& 0 && 0 \cr &&&&& 1 \cr \end{array}$ \label{fig:110ur} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\textwidth} $\begin{array}{ccccccccccc} &&&&& 1 &&&&& \\ &&&& 0 && 0 &&&& \\ &&& 0 && 1 && 0 &&& \\ && 0 && 0 && 0 && 0 && \\ & 0 && 0 && 1 && 0 && 0 & \\ 0 & & 1 && ? && ? && 1 && 0 \\ & 0 && 0 && 1 && 0 && 0 & \\ && 0 && 0 && 0 && 0 && \\ &&& 0 && 1 && 0 &&& \\ &&&& 0 && 0 &&&& \\ &&&&& 1 &&&&& \end{array}$ \label{fig:110sr} \end{subfigure} \caption{The diamonds of two projectivisations $\overline X_0^I$ (left) and $\overline X_0^J$ (right) of the regular fibre corresponding to $H=H_0=\Diag(1,-1,0)$.
} \label{badhd} \end{figure} \begin{remark} The variety $\overline X_0^J$ is an irreducible component of $\overline X_0^I$. Indeed, we find that $I \subset J$ and that $J$ is a prime ideal (whereas $I$ is not). The discrepancy of values in the middle row is corroborated by the discrepancy between the expected Euler characteristics of the compactifications. \end{remark} \begin{remark} Macaulay2 greatly facilitates cohomological calculations that are unfeasible by hand. However, the memory requirements rise steeply with the dimension of the variety. The unknown entries in our diamonds (marked with a `?') exhausted the 48GB of RAM of the computers of our collaborators at IACS Kolkata, without producing an answer. \end{remark} \begin{question*} This leaves us with the open question of characterising all the compactifications of a given orbit produced by the method of homogenising the defining ideals. \end{question*} Nevertheless, once again methods of Lie theory provide us with sharper tools, and we obtain a compactification that is natural from the Lie theory viewpoint. Let $w_{0}$ be the principal involution of the Weyl group $\mathcal{W}$, that is, the element of greatest length as a product of simple reflections. For a subset $\Theta \subset \Sigma $ we put $\Theta ^{\ast }=-w_{0}\Theta $ and refer to $\mathbb{F}_{\Theta ^{\ast }}$ as the flag manifold dual to $\mathbb{F}_{\Theta }$. If $H_{\Theta }$ is a characteristic element for $\Theta $ then $-w_{0}H_{\Theta }$ is characteristic for $\Theta ^{\ast }$. Then the diagonal action of $G$ on the product $\mathbb{F}_{\Theta }\times \mathbb{F}_{\Theta ^{\ast }}$, given by $(g,(x,y))\mapsto (gx,gy)$ for $g\in G$, $x\in \mathbb{F}_{\Theta }$, $y\in \mathbb{F}_{\Theta ^{\ast }}$, has just one open and dense orbit, which is $G/Z_{\Theta }$. Let $x_{0}$ be the origin of $\mathbb{F}_{\Theta }$. Since $G$ acts transitively on $\mathbb{F}_{\Theta}$, all the $G$-orbits of the diagonal action have the form $G\cdot (x_{0},y)$, with $y\in \mathbb{F}_{\Theta ^{\ast }}$.
Thus, the $G$-orbits are in bijection with the orbits through $wy_{0}$, $w\in \mathcal{W}$, where $y_{0}$ is the origin of $\mathbb{F}_{\Theta ^{\ast }}$. We obtain: \begin{proposition}\cite[Prop. 3.1]{GGS2} The orbit $G\cdot (x_{0},{w}_{0}y_{0})$ is open and dense in $\mathbb{F}_{\Theta }\times \mathbb{F}_{\Theta ^{\ast }}$ and identifies with $G/Z_{H}$. \end{proposition} \begin{remark} Katzarkov, Kontsevich, and Pantev \cite{KKP} give three definitions of Hodge numbers for Landau--Ginzburg models and conjecture their equivalence. Understanding the relation between the diamonds we presented here and those Hodge numbers provides a new perspective on our work. \end{remark}
\section{Introduction} \label{sec:introduction} The large scale structure of spiral arms in the Milky Way has been a subject of great interest for understanding the dynamics of the Galaxy and for interpreting its properties. However, there have been long-standing disagreements about the number of arms and their physical parameters. While a majority of published papers favor a four-arm structure, others prefer a two-arm structure with a small pitch angle, allowing nearly two turns of the arms within the solar circle. In a series of papers \citet{vallee2013,vallee2014apjs,vallee2014b,vallee2014aj} attempted a statistical modeling analysis of all assembled recent positional data on the Milky Way's spiral arms and observed tangencies in a number of tracers such as CO, H\,{\sc i}, methanol masers, hot and cold dust, and FIR cooling lines, such as [C\,{\sc ii}]. The most recent version of his idealized synthesized Galactic map can be found in \citet{vallee2014apjs}. Modeling the Galactic spiral structure is based mostly on the data at the spiral arm tangents as observed by different gas tracers and stars. However, each of these spiral arm tracers can occupy a separate lane, or layer, across an arm \citep[e.g.][]{vallee2014apjs,vallee2014aj}, resulting in an inconsistency among the various models extracted from observational data. In this paper we present new [C\,{\sc ii}]\space spectral line {\it l-V} maps of the Galactic plane covering {\it l}=326\fdg6 to 341\fdg4 and {\it l}=304\fdg9 to 305\fdg9 as obtained by {\it Herschel}\footnote{{\it Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.} HIFI On-The-Fly (OTF) mapping. These maps illuminate the structure of different gas components in the spiral arms.
Using the [C\,{\sc ii}], H\,{\sc i}\space and $^{12}$CO\space maps of the Crux and Norma tangencies and the start of the Perseus tangency, we derive the intensity profiles of their emissions across these spiral arms and quantify the relative displacement of the compressed warm ionized medium (WIM), atomic, and molecular gas lanes with respect to the inner and outer edges of the arms. Our results reveal an evolutionary transition from the lowest to the highest density states induced by the spiral arm potential. This compressed WIM component traced by [C\,{\sc ii}]\space is distinct from the ionized gas in H\,{\sc ii}\space regions, which traces the spiral arms with characteristics similar to those of molecular gas traced by CO \citep[cf.][]{vallee2014apjs,downes1980}. The spiral tangent regions \citep[c.f.][]{vallee2008,benjamin2009} are ideal laboratories in which to study the interaction of the interstellar gas and spiral density waves in the Milky Way. The tangents provide a unique viewing geometry with sufficiently long path lengths in relatively narrow velocity ranges to detect the diffuse WIM component traced by [C\,{\sc ii}]\space emission and to study its relationship to the neutral H\,{\sc i}\space and molecular $^{12}$CO\space gas components within spiral arms and the influence of spiral arm density waves on the interstellar medium (ISM). COBE FIRAS observed strong [C\,{\sc ii}]\space and [N\,{\sc ii}]\space emission along the Galactic spiral arm tangencies, and \citet{steiman2010} fit the COBE results with four well-defined logarithmic spiral arms in the gaseous component of the ISM. However, COBE's 7$^{\circ}$\space beam and spectrally unresolved lines preclude obtaining detailed information on the scale and properties of the gas within the spiral tangencies; nor can one use the COBE data to separate the emission that arises from the Photon Dominated Regions (PDRs) from that in the WIM.
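The geometric advantage of tangent sightlines mentioned above can be quantified with elementary geometry. In the sketch below, the arm is idealised as a circular annulus; the Sun--Galactic-centre distance and the arm parameters are illustrative values, not taken from this paper:

```python
import math

R0 = 8.5   # assumed Sun-Galactic-centre distance in kpc (IAU value, illustrative)

def tangent_longitude(r_arm):
    """Longitude (deg) at which a sightline is tangent to a circle of
    Galactocentric radius r_arm: sin(l) = r_arm / R0 (first-quadrant value;
    fourth-quadrant tangents sit at 360 deg minus this)."""
    return math.degrees(math.asin(r_arm / R0))

def tangent_path_length(r_arm, width):
    """Chord length (kpc) of the tangent sightline through an annular arm
    occupying radii [r_arm, r_arm + width]."""
    return 2.0 * math.sqrt((r_arm + width)**2 - r_arm**2)
```

For an arm near 7 kpc with a 0.5 kpc width, the tangent sightline samples roughly 5.4 kpc of arm material, about an order of magnitude more than a perpendicular crossing; this is what concentrates the faint, diffuse [C\,{\sc ii}]\space emission into a narrow velocity range at the tangencies.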
The HIFI {\bf G}alactic {\bf O}bservations of {\bf T}erahertz {\bf C+} (GOT C+) survey \citep{Langer2010} of the Milky Way also detects the strongest [C\,{\sc ii}]\space emission near the spiral arm tangential directions \citep[][]{Pineda2013,velusamy2014}. In the velocity resolved HIFI spectra \citet{velusamy2012} separated the WIM component of the [C\,{\sc ii}]\space emission, in velocity space, from that in the molecular and neutral gas. They suggested that excitation by electrons in the WIM, with a density enhanced by the spiral arm potential, accounts for a low surface brightness [C\,{\sc ii}]\space excess observed at the tangent velocities along the Scutum-Crux spiral tangency. To determine whether a similar spatial and density distribution is a general property of Galactic spiral arms it is important to observe the velocity structure of the [C\,{\sc ii}]\space emission in other spiral arm tangencies and compare it with the corresponding H\,{\sc i}\space and $^{12}$CO\space emissions. Here we present a large scale ($\sim$ 15$^{\circ}$) position-velocity map of the Galactic plane in [C\,{\sc ii}]\space and derive the following characteristics of the spiral arm features: the relative locations of the peak emissions of the WIM, H\,{\sc i}, and molecular gas lanes, including the PDRs, and the width of each layer. In addition, we use the [C\,{\sc ii}]\space emission to derive the mean electron density in the WIM. These results confirm our earlier conclusion \citep[][]{velusamy2012} that in the velocity profile of [C\,{\sc ii}]\space emission at the Scutum tangency, the WIM and molecular gas components of [C\,{\sc ii}]\space are distinguished kinematically (appearing at well separated velocities around the tangent velocity). 
In the analysis presented here we use the fact that [C\,{\sc ii}]\space emission can arise in the three major constituents of the interstellar gas, namely, fully or partially ionized gas (as in the WIM), neutral atomic gas (as in H\,{\sc i}\space clouds), and H$_2$ molecular gas (as in CO clouds or PDRs), excited, respectively, by electrons, H atoms, and H$_2$ molecules. In velocity-resolved HIFI spectra these components can be identified, as demonstrated by the GOT C+ results \citep[cf.][]{Pineda2013,langer2014_II,velusamy2014}. Furthermore, \citet{velusamy2014} show that in the inner Galaxy $\sim$ 60\% of the [C\,{\sc ii}]\space emission is tracing the H$_2$ molecular gas, while at least 30\% of the [C\,{\sc ii}]\space is tracing the WIM, and that emission in H\,{\sc i}\space excited by H atoms is not a major contributor. Here, to study spiral arm structure, we identify two major [C\,{\sc ii}]\space components: one stems from the WIM and the other from PDRs. We find that in the spiral arm tangencies the [C\,{\sc ii}]\space spectral line data alone can be used to study the relative locations of the WIM and molecular gas PDR layers. An outline of our paper is as follows. The observations are discussed in Section~\ref{sec:observations}. In Section~\ref{sec:results} we construct the spatial--velocity maps, and compare the distributions of [C\,{\sc ii}]\space with H\,{\sc i}\space and $^{12}$CO\space and their relation to the spiral arms. In Section~\ref{sec:discussion} we analyze the velocity structure of these gas components at the spiral arm tangencies and use it to infer the relative locations of the different gas ``lanes'' within the spiral arm. Note that in discussing the internal structure of the spiral arms we refer to the emission layers of the gas tracers as ``lanes'', analogous to the terminology used in external spiral galaxies. We also derive the average electron density in the WIM using the [C\,{\sc ii}]\space emission and a radiative transfer model. 
We summarize our results in Section~\ref{sec:summary}. \begin{figure}[!htb] \includegraphics[scale=0.45,angle=-90]{f1.ps} \caption{Spiral arms in the 4th quadrant are represented in a V$_{LSR}$--longitude ({\it V--l}) plot, adapted from \citet{vallee2008}: red: Norma--Cygnus; blue: Scutum--Crux; green: Sagittarius--Carina; magenta: Perseus. The rectangular boxes indicate the longitude extent in the Galactic plane, at latitude $b$ = 0$^{\circ}$, of the HIFI [C\,{\sc ii}]\space spectral line map data presented here. Note that the maps cover the tangencies of the Norma, Crux and start of the Perseus arms. The vertical dashed line represents a line of sight at a given longitude that intercepts multiple spiral arms, thus demonstrating the need for velocity-resolved spectral line data to separate them. } \label{fig:fig1_vallee} \vspace{-0.5cm} \end{figure} \section{Observations} \label{sec:observations} The longitude and velocity coverage of the [C\,{\sc ii}]\space observations presented here, at latitude $b$ = 0$^{\circ}$, is summarized in Figure~\ref{fig:fig1_vallee} along with a schematic of the spiral arm velocity--longitude relationship. All [C\,{\sc ii}]\space spectral line map observations were made with the high spectral resolution HIFI (de Graauw et al. 2010) instrument onboard {\it Herschel} (Pilbratt et al. 2010). These observations were taken between October 2011 and February 2012. We used 37 On-The-Fly (OTF) scans for the large-scale [C\,{\sc ii}]\space map of the Galactic plane (at $b$ = 0$^{\circ}$) covering a 15$^{\circ}$\space range in longitude between 326\fdg6 and 341\fdg4, which includes the Norma and Perseus tangencies. The observations of the fine-structure transition of C$^+$ ($^2$P$_{3/2}$ -- $^2$P$_{1/2}$) at 1900.5369 GHz were made with the HIFI band 7b using the Wide Band Spectrometer (WBS). Each OTF scan was taken along the Galactic longitude at latitude $b$ = 0$^{\circ}$\space and was 24 arcmin long. 
For the Crux tangency, data from two OTF longitude scans were used: (i) one 24 arcmin long centered at {\it l}=305\fdg1 and $b$= 0$^{\circ}$\space (note an earlier version of this map was presented in \citet{velusamy2014} and it is included here for completeness in the analysis of the tangencies), and (ii) a shorter 6 arcmin long scan centered at {\it l}=305\fdg76 and $b$= 0\fdg15. All HIFI OTF scans were made in the LOAD-CHOP mode using a reference off-source position about 2 degrees away in latitude (at $b$ = 1\fdg9). However, in our analysis we do not use off-source data (see below). All the 24 arcmin long OTF scans are sampled every 40 arcsec and the shorter scan every 20 arcsec. The total duration of each OTF scan was typically $\sim$2000 sec, which provides only a short integration time on each spectrum (pixel). Thus the rms (0.22 K in T$_{\rm mb}$) in the final maps with an 80$^{\prime\prime}$\space beam and 2 km s$^{-1}$\space wide channels in the OTF spectra is much larger than that in the HIFI spectra observed in the HPOINT mode, for example in the GOT C+ data. The observations for the Crux tangency used longer integrations. We processed the OTF scan map data following the procedure discussed in \citet[][]{velusamy2014}. The [C\,{\sc ii}]\space spectral line data were taken with HIFI Band 7, which utilized Hot Electron Bolometer (HEB) detectors. These HEBs produced strong electrical standing waves with characteristic periods of $\sim$320 MHz that depend on the signal power. The HIPE Level 2 [C\,{\sc ii}]\space spectra show these residual waves. We found that applying the {\it fitHifiFringe}\footnote{http://herschel.esac.esa.int/hcss-doc-12.0/index.jsp\#hifi\_um:hifi\-um section 10.3.2} task to the Level 2 data produced satisfactory baselines. 
However, removal of the HEB standing waves has remained a challenge up until the recent release of HIPE-12, which includes a new tool {\it hebCorrection}\footnote{http://herschel.esac.esa.int/hcss-doc-12.0/index.jsp\#hifi\_um:hifi\-um section 10.4.5} to remove the standing waves in the raw spectral data by matching the standing wave patterns (appropriate to the power level) in each integration using a database of spectra at different power levels (see the Herschel Science Center (HSC) HIPE-12 release document for details). We used this HSC script to apply {\it hebCorrection} to create the final pipeline mapping products presented here. Following one of the procedures suggested by Dr. David Teyssier at the HSC, the OTF map data presented here were processed by re-doing the pipeline without off-source subtraction to produce Level 1 data. The {\it hebCorrection} was then applied to this new Level 1 data. The fact that {\it hebCorrection} subtracts the matching standing wave patterns from a large database of spectra eliminates the need for off-source subtraction. Thus in our analysis the processed spectral data are free from any off-source contamination. While fitting the HEB waves we also used the feature in the {\it hebCorrection} script to exclude the IF frequencies with strong [C\,{\sc ii}]\space emission. Finally, the Level 2 data were produced from the HEB-corrected Level 1 data. \begin{figure}[!ht] \includegraphics[scale=0.45, angle =-90]{f2.ps} \caption{Example of a [C\,{\sc ii}]\space OTF longitudinal scan $l$--$V$ map centered at $l$ = 336\fdg0 and $b$ = 0\fdg0. The intensities are in main beam antenna temperature (T$_{\rm mb}$) with values indicated by the color wedge at the top. A square root color stretch is used to bring out the low brightness emission features. 
The velocity resolution in all maps is 2 km s$^{-1}$\space and the restored beam size in longitude is 80$^{\prime\prime}$.} \label{fig:fig2_otflvmap} \end{figure} From the Level 2 data the [C\,{\sc ii}]\space maps were made as ``spectral line cubes'' using the standard mapping scripts in HIPE. Any residual HEB and optical standing waves in the reprocessed Level 2 data were minimized further by applying {\it fitHifiFringe} to the ``gridded'' spectral data (we took the additional precaution in {\it fitHifiFringe} of disabling {\it DoAverage} in order not to bias the spectral line ``window''). The H-- and V--polarization data were processed separately and were combined only after applying {\it fitHifiFringe} to the gridded data. This approach minimizes the standing wave residues in the scan maps by taking into account the standing wave differences between H-- and V--polarization. All OTF scan map data were reprocessed and analyzed in HIPE 12.1, as described above, to create spectral line data cubes. We then used the processed spectral line data cubes to make longitude--velocity ($l$--$V$) maps of the [C\,{\sc ii}]\space emission as a function of the longitude range in each of the 39 OTF observations. All scan maps used the WBS, which has a spectral resolution of 1.1 MHz. The final $l$--$V$ maps presented here were restored with a velocity resolution of 2 km s$^{-1}$. At 1.9 THz the angular resolution of the {\it Herschel} telescope is 12$^{\prime\prime}$, but the [C\,{\sc ii}]\space OTF observations used 40$^{\prime\prime}$\space sampling. Such fast scanning results in heavily undersampled scans, broadening the effective beam size along the scan direction \citep[][]{mangum2007}. Therefore all [C\,{\sc ii}]\space maps have been restored with effective beam sizes corresponding to twice the sampling interval along the scan direction ($\sim$ 80$^{\prime\prime}$). 
Figure~\ref{fig:fig2_otflvmap} shows an example of an $l$--$V$ map reconstructed using the map data processed in HIPE 12.1 for a single OTF scan map observed at longitude {\it l} = 336\fdg0. To compare the distribution of atomic and molecular gas with the ionized gas we use the $^{12}$CO\space and H\,{\sc i}\space data in the southern Galactic plane surveys available in the public archives. The $^{12}$CO(1-0) data are taken from the Three-mm Ultimate Mopra Milky Way Survey\footnote{www.astro.ufl.edu/thrumms} (ThrUMMS) observed with the 22m Mopra telescope \citep[][]{Barnes2011}. The H\,{\sc i}\space data are taken from the Southern Galactic Plane Survey (SGPS) observed with the Australia Telescope Compact Array \citep[][]{McClure2005}. Although the Parkes HIPASS Galactic plane map of the H 166, 167, \& 168$\alpha$ radio recombination lines (RRL) is now available \citep[][]{alves2015}, its low velocity resolution ($\sim$ 20 km s$^{-1}$) precludes using it for the analysis presented here. Furthermore, as the H\,{\sc ii}\space regions are associated with star formation and dense molecular gas \citep[e.g.][]{Anderson2009}, the RRL emission from H\,{\sc ii}\space regions is not likely to add more information about the spiral arm structures than is already traced by $^{12}$CO. \begin{figure*}[htbp] \includegraphics[scale=0.70, angle=-90]{f3.ps} \caption{The [C\,{\sc ii}]\space longitude-velocity ({\it l-V}) map covering $\sim$ 15$^{\circ}$\space in longitude over the range $l$ = 326\fdg6 to 341\fdg4 at $b$ = 0\fdg0. The intensities are in main beam antenna temperature (T$_{\rm mb}$) with values indicated by the color wedge. A square root color stretch is used to bring out the low brightness emission features. The velocity resolution in all maps is 2 km s$^{-1}$\space and the beam size along the longitudinal direction is 80$^{\prime\prime}$. 
A sketch of the spiral arms in the 4th quadrant, adapted from \citet{vallee2008}, is overlaid, with the arms indicated in the following colors: red-solid: Norma--Cygnus; white-solid: Scutum--Crux; white-broken: Sagittarius--Carina; red-broken: Perseus. The rectangular boxes indicate the extent of the spiral tangencies as labeled.} \label{fig:fig3_[CII]map} \end{figure*} \section{Results} \label{sec:results} In this Section we present the $l$--$V$ emission maps and analyze the structure of the different gas components. We use a schematic of the expected relationship between velocity (V$_{LSR}$) and location with respect to the Galactic center to guide the analysis of the gas lane profile across the arm. We show that the emissions reveal an orderly change in gas components across the arms, leading from the least dense WIM to the densest molecular clouds. \subsection{Longitude--velocity maps} We ``stitched'' all the individual OTF maps (an example is shown in Figure~\ref{fig:fig2_otflvmap}) within the longitude range 326\fdg6 to 341\fdg4 to create a single longitude--velocity map which includes the Perseus and Norma tangencies. The $l$--$V$ map is shown in Figure~\ref{fig:fig3_[CII]map}. An $l$--$V$ representation of the spiral arms is overlaid on the [C\,{\sc ii}]\space map. Note that strong [C\,{\sc ii}]\space emissions are seen in the tangencies (denoted by the boxes in Figure~\ref{fig:fig3_[CII]map}) as seen in the GOT C+ survey data \citep[][]{Pineda2013,velusamy2014} and COBE data \citep{steiman2010}. However, the rest of the spiral arm trajectories show poor correspondence with the brightness of [C\,{\sc ii}]\space emission, likely due to the uncertainties in the model parameters (e.g. pitch angle) used for the spiral arms. We assembled the $l$--$V$ maps that match the [C\,{\sc ii}]\space map for the $^{12}$CO(1-0) (Figure~\ref{fig:fig 4_COmap}) and H\,{\sc i}\space (Figure~\ref{fig:fig5_HImap}) maps using the Mopra ThrUMMS survey and SGPS data. 
Note that the intensities for ThrUMMS $^{12}$CO(1-0) data are uncorrected for main beam efficiency \citep[][]{Ladd2005,Barnes2011}. \begin{figure*}[htbp] \includegraphics[scale=0.70, angle=-90]{f4.ps} \caption{The $^{12}$CO(1-0) longitude--velocity ({\it l-V}) map covering $\sim$ 15$^{\circ}$\space in longitude over the range $l$ = 326\fdg6 to 341\fdg4 at $b$ = 0\fdg0. The $^{12}$CO(1-0) data are taken from the Mopra ThrUMMS survey \citep{Barnes2011}. See the caption in Figure~\ref{fig:fig3_[CII]map} for the color labels.} \label{fig:fig 4_COmap} \end{figure*} \begin{figure*}[hbp] \includegraphics[scale=0.70, angle=-90]{f5.ps} \caption{The H\,{\sc i}\space longitude-velocity ({\it l-V}) map covering the $\sim$15$^{\circ}$ range in longitude $l$ = 326\fdg6 to 341\fdg4 at $b$ = 0\fdg0. The H\,{\sc i}\space data are taken from the SGPS survey \citep{McClure2005}. See the caption in Figure~\ref{fig:fig3_[CII]map} for the color labels.} \label{fig:fig5_HImap} \end{figure*} The [C\,{\sc ii}]\space maps in Figure~\ref{fig:fig3_[CII]map} contain, in addition to information about the tangency, a rich data set on [C\,{\sc ii}]\space emission in the diffuse gas, the molecular gas, and PDRs; these properties have been analyzed in detail for a sparse Galactic sample using the GOT C+ data base \citep[][]{Pineda2013,langer2014_II,velusamy2014}. However, the continuous longitude coverage in the data presented here offers a better opportunity to study the Galactic spiral structure and the internal structure of the arms. In principle it is possible to derive a 2--D spatial--intensity map of this portion of the Galaxy using the kinematic distances for each velocity feature in the maps. However, such a study is subject to the near--far distance ambiguities inherent in lines of sight inside the solar circle \citep[see discussion in][]{velusamy2014}, except along the tangencies. 
The tangencies offer a unique geometry in which to study the kinematics of the interstellar gas without the distance ambiguity. Furthermore, the tangential longitudes provide the longest path length along the line of sight through a cross section of the spiral arm, thus making it easier to detect weak [C\,{\sc ii}]\space emission from the WIM and the molecular gas. In this paper we limit our analysis to the [C\,{\sc ii}]\space emission in the tangencies and compare it with those of H\,{\sc i}\space and $^{12}$CO\space to understand the structure of the spiral arms in different ISM components. \subsection{The tangent emission--velocity profiles} In Figure~\ref{fig:fig6_l-overview} we show a schematic model of the internal structure of a spiral arm based on the results of \citet{vallee2014apjs} and \citet{velusamy2012}, using the Norma tangency as an example. The top panel (a) illustrates how the emission for the tangency probes the ``lanes'' seen in different tracers. In Vall\'{e}e's schematic (see his Figure 5) both H\,{\sc i}\space and [C\,{\sc ii}]\space emissions occur displaced from $^{12}$CO\space towards the inner edge (on the near side of the Galactic center). In Vall\'{e}e's sketch the displacement of [C\,{\sc ii}]\space with respect to $^{12}$CO\space is based on the COBE data \citep[][]{steiman2010}. However, in the velocity-resolved HIFI data for the Scutum tangency \citep[][]{velusamy2012}, the WIM component of [C\,{\sc ii}]\space emission occurs near the inner edge while that associated with molecular gas and PDRs is coincident with $^{12}$CO\space emission. The boxed region in Figure~\ref{fig:fig6_l-overview} represents the area in the tangency over which the emission spectra are computed. 
The expected velocity profiles at the tangency, relative to the other emission layers, are shown schematically in Figure~\ref{fig:fig6_l-overview}(b), and these can be compared with the actual observed velocity profiles for each tracer in the maps in Figures~\ref{fig:fig3_[CII]map} to \ref{fig:fig5_HImap}. Thus by analyzing the velocity profiles of different gas tracers we can examine the location of the respective emission layers relative to each other within each spiral arm. \begin{figure}[htp] \hspace{-1.25cm} \includegraphics[scale=0.48,angle=0]{f6.eps} \caption{ (a) Schematic view of the Norma spiral arm tangency. The emissions (distinguished by the color) tracing the spiral arm are shown as a cross cut of the layers from the inner to outer edges (adapted from Figures 2 \& 3 in \citet[][]{vallee2014apjs}). (b) A sketch indicating the velocity (V$_{LSR}$) structure of the corresponding spectral line intensities near the tangency for each layer. Note that this cartoon is intended to be a schematic and is not to scale. } \label{fig:fig6_l-overview} \vspace{-0.5cm} \end{figure} In the $l$--$V$ maps the trajectory of each spiral arm radial velocity (V$_{LSR}$) as a function of Galactic longitude is shown as a line. In reality, however, it is expected to be much broader and more complex. Obviously there is no single unique value of the longitude that can be assigned as a tangency. Furthermore, the longitudes listed in the literature \citep[e.g.][]{vallee2014apjs} often correspond to data averaged over a wider range of Galactic latitudes. However, the $l$--$V$ maps presented here are all in the Galactic plane ($b$ = 0\fdg0), observed with narrow beam sizes (12$^{\prime\prime}$, 33$^{\prime\prime}$, and 150$^{\prime\prime}$\space for [C\,{\sc ii}], $^{12}$CO, and H\,{\sc i}, respectively) in latitude. In the maps in Figures~\ref{fig:fig3_[CII]map} to \ref{fig:fig5_HImap} we refer to a range of longitudes for each spiral tangency as indicated by the box sizes. 
Using the $l$--$V$ maps in Figures~\ref{fig:fig3_[CII]map} to \ref{fig:fig5_HImap} we average the emissions within a 2$^{\circ}$\space and 2\fdg5 longitude range for the Perseus and Norma tangencies, respectively, and in all observed longitudes for the Crux tangency. The resulting averaged spectra are plotted in Figures~\ref{fig:fig7_Norma}(b) \& \ref{fig:fig8_Perseus}(c). Note that the $^{12}$CO\space spectra shown have new baselines fitted to the map data in Figure~\ref{fig:fig 4_COmap}. For clarity we limit the velocity range to cover only the tangencies. Furthermore, the intensity scale for each spectrum is adjusted such that the highest value corresponds to its peak brightness listed in Table~\ref{tab:Table_1}. The tangent velocity is indicated on the spectra in each panel in Figures~\ref{fig:fig7_Norma}, \ref{fig:fig8_Perseus}, \& \ref{fig:fig10_Crux} in order to provide a reference to the V$_{LSR}$ velocities. The tangent velocities are estimated using the mean longitude and assuming a Galactic rotation velocity of 220 km s$^{-1}$\space at the tangent points \citep[cf.][]{Levine2008}. We note that the tangent velocity varies from panel to panel corresponding to its longitude range. The [C\,{\sc ii}]\space spectra of the tangencies show remarkably distinct differences compared to the spectra of H\,{\sc i}\space or $^{12}$CO. [C\,{\sc ii}], unlike H\,{\sc i}\space or $^{12}$CO, shows a clear emission peak near the tangent velocity, well separated from the H\,{\sc i}\space and $^{12}$CO\space peaks. To bring out the uniqueness of such differences in the tangencies, in Figures~\ref{fig:fig7_Norma} and \ref{fig:fig8_Perseus} we compare the spectra at the tangencies (labeled ``On-tangent'') with those at neighboring longitudes (labeled ``Off-tangent''). As discussed below, only the spectra at the tangencies show the excess [C\,{\sc ii}]\space emission peak at more negative velocities. 
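The tangent-velocity estimate used above can be sketched numerically. A minimal illustration (not the survey pipeline) assuming, as stated in the text, a flat rotation curve with a rotation speed of 220 km s$^{-1}$:

```python
import math

V0 = 220.0  # Galactic rotation speed (km/s), as adopted in the text

def tangent_velocity(l_deg):
    """V_LSR at the tangent point for a flat rotation curve.

    The tangent point lies at Galactocentric radius R_tan = R0*|sin l|,
    where the full circular speed is directed along the line of sight,
    giving V_LSR = V0*sin(l)*(1/|sin l| - 1); this is negative in the
    fourth quadrant (270 deg < l < 360 deg).
    """
    s = math.sin(math.radians(l_deg))
    return V0 * s * (1.0 / abs(s) - 1.0)

# Norma tangency near l = 328.2 deg: V_tan is about -104 km/s
print(round(tangent_velocity(328.2), 1))
```

The dependence on longitude is steep in the fourth quadrant, which is why the tangent velocity marked in each panel shifts noticeably with the longitude range averaged.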
As illustrated in the schematic in Figure~\ref{fig:fig6_l-overview}, it is possible to delineate individually the emission layers, or lanes, that make up the internal structure of the spiral arms: [C\,{\sc ii}]\space in both the WIM and the PDRs, as well as molecular gas in $^{12}$CO\space and atomic gas in H\,{\sc i}. The characteristics of the observed velocity profiles in each tangency are summarized in Table~\ref{tab:Table_1}. \begin{figure}[htp] \hspace{-0.5cm} \includegraphics[scale=0.48 ]{f7.ps} \caption{Norma tangency spectra. The [C\,{\sc ii}], H\,{\sc i}, and $^{12}$CO\space emission spectra are plotted against velocity (V$_{LSR}$). Each panel shows the spectra for the longitude ranges indicated. Note that the intensity scale is normalized to the peak emission within the velocity range. The corresponding 1--$\sigma$ error bars are indicated on the [C\,{\sc ii}]\space and $^{12}$CO\space spectra. The tangent velocity is marked on each panel by a vertical arrow. Panel (a): Off-tangent. Panel (b): Norma On-tangent. The dashed lines indicate the V$_{LSR}$ shift between the [C\,{\sc ii}]\space and $^{12}$CO\space peaks. } \label{fig:fig7_Norma} \end{figure} \begin{figure}[htbp] \hspace{-0.75cm} \includegraphics[scale=0.48 ]{f8.ps} \caption{Spectra for the location of the start of the Perseus tangency. Caption same as for Figure~\ref{fig:fig7_Norma}. Panels (a) \& (c): Off-tangent. Panel (b): Perseus On-tangent. The dashed lines indicate the V$_{LSR}$ shift between the [C\,{\sc ii}]\space and $^{12}$CO\space emission peaks. } \label{fig:fig8_Perseus} \end{figure} \subsubsection{Norma tangency} The Norma tangency has been determined as 328$^{\circ}$\space for $^{12}$CO\space \citep[][]{Bronfman2000b} and H\,{\sc i}\space \citep[][]{Englmaier1999}, 329$^{\circ}$\space for 60$\mu$m\space dust \citep[][]{Bloemen1990}, and 332$^{\circ}$\space for 870$\mu$m\space dust \citep[][]{Beuther2012}. 
\citet{garcia2014} assign tangent directions to the Crux (Centaurus), Norma, and 3 kpc expanding arms of 310\fdg0, 330\fdg0, and 338\fdg0, respectively, by fitting a logarithmic spiral arm model to the distribution of Giant Molecular Clouds (GMCs). As discussed above, the detection of [C\,{\sc ii}]\space from the WIM is strongest along the tangencies and thus is a good discriminator of the tangent direction of a spiral arm. Thus the emission profiles shown in Figure~\ref{fig:fig7_Norma} provide strong evidence that the longitude of the tangent direction is well constrained to $l$ $<$ 329\fdg5. In the On-tangent spectrum the $^{12}$CO\space emission shows a small peak near the tangent velocity. However, this feature is relatively weak when compared to the prominent emission seen in [C\,{\sc ii}]. \subsubsection{Start of Perseus tangency} The tangency at the start of the Perseus arm has been determined as 336$^{\circ}$\space for $^{12}$CO\space \citep[][]{Bronfman2000b}, 338$^{\circ}$\space for the [C\,{\sc ii}]\space \& [N\,{\sc ii}]\space FIR lines \citep[][]{steiman2010}, and 338$^{\circ}$\space for 870$\mu$m\space dust \citep[][]{Beuther2012}. \citet[][]{green2011} suggest that part of the Perseus arm could harbor some of the methanol masers found toward the tangent direction of the 3 kpc expanding arm. According to the spiral arm model of \citet[][]{russeil2003} the starting point of the Perseus arm would be found in the region between the Norma and the 3 kpc expanding arms. Though there are some doubts about this longitude being the start of the Perseus arm \citep[cf.][]{green2011}, we adopt the longitude range $l$ = 336$^{\circ}$ -- 338$^{\circ}$\space as the start of the Perseus arm following the work of \citet[][]{vallee2014apjs}. 
The emission profiles in Figure~\ref{fig:fig8_Perseus} clearly show the detection of a tangent direction in this longitude range, as seen by the strong [C\,{\sc ii}]--WIM emission compared to the weaker [C\,{\sc ii}]\space emission in the neighboring longitudes on either side. \subsubsection{Crux tangency} \citet[][]{vallee2014apjs} lists the Crux\footnote{Also referred to as Centaurus as it appears in this constellation.} tangency as between $l$ = 309$^{\circ}$\space and 311$^{\circ}$\space in different tracers, with CO at $l$ = 309$^{\circ}$\space \citep[][]{Bronfman2000b} and dust at $l$ = 311$^{\circ}$\space \citep[][]{Drimmel2000,Bloemen1990,Beuther2012}. However, using GLIMPSE source counts, \citet{benjamin2005} and \citet{churchwell2009} place the Crux tangency within a broader longitude range 306$^{\circ}$ $<$ $l$ $<$ 313$^{\circ}$. Furthermore, it has been suggested that the Crux tangency provides an ideal testing ground for models of spiral density wave theory \citep[][]{benjamin2008}, as the $l$ = 302$^{\circ}$\space to 313$^{\circ}$\space direction is known to have several distinct anomalies, including large deviations in the H\,{\sc i}\space velocity field \citep[][]{McClure2007} and a clear magnetic field reversal \citep[][]{Brown2007}. Although we do not have a complete map of this region, it is of sufficient interest that we present a partial map covering $l$ = 304\fdg9 to 305\fdg9, which is close to the Crux tangency. This partial $l$--$V$ map, which comes from another of our [C\,{\sc ii}]\space {\it Herschel} projects, is shown in Figure~\ref{fig:fig9_Crux} along with the corresponding $^{12}$CO\space and H\,{\sc i}\space maps. Note that the velocity range for the emission in the $^{12}$CO\space map is much narrower than in the [C\,{\sc ii}]\space map, indicative of a broader diffuse emission component in [C\,{\sc ii}]. 
\begin{figure}[!htbp] \includegraphics[scale=0.46]{f9.ps} \caption{The longitude-velocity ({\it l-V}) map covering part of the Crux tangency between longitudes $l$ = 304\fdg9 to 305\fdg9. (a) HIFI [C\,{\sc ii}]\space maps covered in two OTF longitude scans centered at $l$= 305.1$^{\circ}$ and 305.76$^{\circ}$ at $b$ = 0\fdg0 and 0\fdg15 respectively. (b) the corresponding $^{12}$CO\space map from the Mopra ThrUMMS survey. (c) the corresponding H\,{\sc i}\space map from SGPS survey. This map region is off to the right of the {\it l-V} trajectory of the Crux tangency as shown in Figure~\ref{fig:fig1_vallee} (see text). Also see the caption to Figure~\ref{fig:fig3_[CII]map} for details on the display.} \label{fig:fig9_Crux} \end{figure} The emission profiles for the Crux tangent region shown in Figure~\ref{fig:fig10_Crux} are very similar to those observed for the Scutum tangency \citep[][]{velusamy2014}. The [C\,{\sc ii}]\space emission shows a clear excess beyond the tangent velocity. However, unlike the Norma or Perseus tangencies, we do not detect a resolved [C\,{\sc ii}]\space emission peak and the WIM component appears as an enhanced emission shoulder under the H\,{\sc i}\space emission profile. This difference may partly be due to the fact that the longitude range of these maps is well outside the nominal tangent direction and may also be due to much stronger emissions seen in both [C\,{\sc ii}]\space and $^{12}$CO\space close to the tangent velocity. Nevertheless, the detection of the [C\,{\sc ii}]\space excess associated with low H\,{\sc i}\space and little, or no, $^{12}$CO\space and its similarity to the results for the Scutum tangency strongly favor its interpretation as the WIM. 
This detection of WIM in the longitudes $l$ = 304\fdg9 to 305\fdg9 indicates that the Crux tangency is likely to be much broader ($l$ = 302$^{\circ}$\space to 313$^{\circ}$) in longitude than the other arms as was indicated by the analysis of star counts in the {\it Spitzer} data \citep[][]{benjamin2005}. \begin{figure}[!ht] \includegraphics[scale=0.375, angle = -90 ]{f10.ps} \caption{ Crux tangency emission spectra showing [C\,{\sc ii}], H\,{\sc i}, and $^{12}$CO\space emission plotted against velocity (V$_{LSR}$). The spectra are averaged over the longitude range indicated. Note that the intensity scale is normalized to the peak emission within the velocity range. The corresponding 1--$\sigma$ error bars are indicated on the [C\,{\sc ii}]\space and $^{12}$CO\space spectra at V$_{LSR}$ = 50 km s$^{-1}$. The tangent velocity is also marked. The dashed lines indicate the [C\,{\sc ii}]\space excess beyond the tangent velocity and $^{12}$CO\space peak in the spectra. } \label{fig:fig10_Crux} \end{figure} \section{Discussion} \label{sec:discussion} The spectra in Figures~\ref{fig:fig7_Norma}, \ref{fig:fig8_Perseus}, and \ref{fig:fig10_Crux} bring out clearly the exceptional characteristics of the [C\,{\sc ii}]\space emission at the tangencies. These are: \begin{enumerate} \item At the highest velocities beyond the tangent velocity only [C\,{\sc ii}]\space shows a peak in emission while there is little, or no, $^{12}$CO\space and H\,{\sc i}\space is weak, but increasing slowly with velocity. (The low intensity $^{12}$CO\space peak near the tangent velocity in Figure~\ref{fig:fig7_Norma} is relatively less prominent when compared to the dominant [C\,{\sc ii}]\space emission); \item The velocity of the [C\,{\sc ii}]\space peak beyond the tangent velocity corresponds to the radial distance closest to the Galactic center. 
Therefore it is near the inner edge of the spiral arm, representing the onset of the spiral arm feature; \item The peak emissions for H\,{\sc i}\space and $^{12}$CO\space appear at still higher velocities than the first [C\,{\sc ii}]\space peak, corresponding to distances away from the inner edge; \item {[C\,{\sc ii}]}\space emission shows two peaks: one near or beyond the tangent velocity, representing the WIM component traced by [C\,{\sc ii}], and a second corresponding to the molecular gas traced by [C\,{\sc ii}]\space observed in association with $^{12}$CO, arising from the PDRs of the CO clouds; and, \item The observed velocity profiles are consistent with the schematic shown in Figure~\ref{fig:fig6_l-overview} for the internal structure of the spiral arm. \end{enumerate} The anomalous excess [C\,{\sc ii}]\space emission in the velocity profiles for all the spiral arm tangencies represents a direct, unambiguous detection of the large-scale Galactic diffuse ionized gas (WIM) through its 158$\mu$m\space [C\,{\sc ii}]\space line emission. Our {\it Herschel} HIFI detection of the diffuse ionized gas provides detailed spatial and kinematic information on the nature of this gas component in the spiral arms that has not been possible with prior [C\,{\sc ii}]\space surveys. For example, in contrast to the direct detection of the WIM in [C\,{\sc ii}]\space in our HIFI data, the deduction of the WIM component in the COBE [C\,{\sc ii}]\space data \citep[][]{steiman2010} is indirect and model dependent, because it relies on the [N\,{\sc ii}]\space intensities to separate the fraction of the [C\,{\sc ii}]\space intensity arising in the WIM from that of the other gas components. 
We can be certain that the [C\,{\sc ii}]\space emissions near the tangent velocity come from the highly ionized WIM and are the result of electron excitation of C$^+$, and not of H atom excitation in diffuse H\,{\sc i}\space clouds (warm neutral medium or cold neutral medium), for the following reasons, as first noted in our earlier study of the WIM in the Scutum--Crux arm \citep[][]{velusamy2012,velusamy2014}. In the spectra in Figures~\ref{fig:fig7_Norma} \& \ref{fig:fig8_Perseus}, the H\,{\sc i}\space emission at the lowest velocities (near the left around the tangent velocities) is seen along with [C\,{\sc ii}]\space only for the longitudes of the tangent directions which are identified to have [C\,{\sc ii}]\space in the WIM (Figures~\ref{fig:fig7_Norma}(b) \& \ref{fig:fig8_Perseus}(b)). However, in all other longitude directions (Figures~\ref{fig:fig7_Norma}(a), \ref{fig:fig8_Perseus}(a) \& \ref{fig:fig8_Perseus}(c)) strong H\,{\sc i}\space emission is seen with little or no associated [C\,{\sc ii}]\space emission. If H atom collisional excitation contributed to any of the [C\,{\sc ii}]\space emission we identify as coming from the WIM, then we should have seen [C\,{\sc ii}]\space associated with the H\,{\sc i}\space emission at all longitudes in Figures~\ref{fig:fig7_Norma} \& ~\ref{fig:fig8_Perseus}. However, we see none. It is even more unlikely that the [C\,{\sc ii}]\space excess in the tangents is associated with CO-dark H$_2$ gas. In the tangent regions, in the velocity profiles beyond the tangent velocities, the [C\,{\sc ii}]\space emission starts appearing at lower velocities than $^{12}$CO. In contrast, in the off-tangent regions both [C\,{\sc ii}]\space and $^{12}$CO\space begin to appear simultaneously in the velocity profiles. Considering the relatively weak $^{12}$CO\space emission, if any, we expect only a small fraction of the [C\,{\sc ii}]\space excess at the tangent to be excited by H$_2$ molecules.
Therefore we assume that [C\,{\sc ii}]\space excess is dominated by contribution from excitation by electrons. The detection of WIM emission from our HIFI OTF survey is surprising because the average electron density in the WIM throughout the disk, $\sim$few$\times$10$^{-2}$ cm$^{-3}$, is too low to result in detectable [C\,{\sc ii}]\space emission at the sensitivity of our HIFI OTF maps. Our explanation for the [C\,{\sc ii}]\space emission detected in our survey is that it originates from denser ionized gas along the inner edge of the spiral arms that has been compressed by the spiral density wave shocks (Figure~\ref{fig:fig6_l-overview}) as previously discussed by \citet{velusamy2012} for the Scutum--Crux arm. Indeed, as shown below, the electron density is significantly higher in the spiral arm WIM than between the arms. In what follows we derive the physical characteristics of the spiral arm gas lanes and the electron density in the WIM lane. \subsection{The internal structure of spiral arms traced by [C\,{\sc ii}], H\,{\sc i}, and $^{12}$CO\space} We can resolve the anatomical structure of the spiral arms, namely how each gas component is arranged spatially as a function of distance from its edges, using the velocity profiles in the tangencies. We locate the peak emissions in the layers traced by [C\,{\sc ii}]--WIM component, H\,{\sc i}\space (diffuse atomic clouds), $^{12}$CO\space (the GMCs), and the [C\,{\sc ii}]\space in molecular gas (PDRs), as indicated in the cartoon in Figure~\ref{fig:fig6_l-overview}, as a function of radial distance from the Galactic center (GC). The radial distances are derived from the observed radial velocities (V$_{LSR}$) assuming a constant Galactic rotation speed of 220 km s$^{-1}$\space for radius $>$ 3 kpc, at the distance of these spiral arm tangencies. The Galactocentric distances derived from the V$_{LSR}$ are listed in Table~\ref{tab:Table_1}. 
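The kinematic conversion from V$_{LSR}$ to Galactocentric radius under a flat rotation curve can be sketched as follows; a minimal Python sketch in which the solar Galactocentric radius (R$_0$ = 8.5 kpc) and the representative mid-longitude are assumptions for illustration, since the text specifies only the rotation speed of 220 km s$^{-1}$:

```python
import math

def galactocentric_radius(v_lsr, lon_deg, r0_kpc=8.5, theta0_kms=220.0):
    """Galactocentric radius (kpc) of gas at v_lsr (km/s) toward Galactic
    longitude lon_deg, assuming a flat rotation curve Theta(R) = theta0.
    Follows from V_LSR = theta0 * sin(l) * (R0/R - 1)."""
    sin_l = math.sin(math.radians(lon_deg))
    return r0_kpc * theta0_kms * sin_l / (v_lsr + theta0_kms * sin_l)

# Norma [CII]-WIM peak (V_LSR = -99.5 km/s) near a representative l ~ 328.25 deg
r_wim = galactocentric_radius(-99.5, 328.25)  # ~4.57 kpc, cf. Table 1
```

With these assumed inputs the result is close to the 4.57 kpc listed for the Norma [C\,{\sc ii}]--WIM layer in Table~\ref{tab:Table_1}.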
The uncertainty in the Galactocentric distances, which depends on the assumed Galactic rotational speed at the tangency, does not affect our results significantly, as we are interested only in the relative displacement between them. It is easy to characterize the emission profiles on the near side to the Galactic Center near the tangent velocities where the emissions are increasing from zero. On the far side of the Galactic Center the emissions become too complex to extract the profile parameters due to confusion by emission from adjacent arms. This confusion is especially prominent on either side of the peak of the [C\,{\sc ii}]\space-WIM component. Therefore, to derive the width of the emission lanes of each tracer, we use only the cleaner profile on the rising portion of spiral feature on the near side to the Galactic Center. We compute the Galactocentric radial distances to the V$_{LSR}$ of peaks and the half intensity points of each emission profile. Using these radial distances we get the distance between the peak and half intensity points in each emission lane. The total width is obtained by simply doubling this value and the results are summarized in Tables~\ref{tab:Table_1} and \ref{tab:Table_2}. We derive an approximate lower bound on the width for the spiral arm by assuming [C\,{\sc ii}]--WIM traces the inner edge and $^{12}$CO\space the outer edge, and list them in Table~\ref{tab:Table_2}. Note that in the anatomy suggested by \citet[][]{vallee2014apjs} the hot dust emission traces the inner edge on the near side of the Galactic center with $^{12}$CO\space tracing the midplane while [C\,{\sc ii}]\space is in between. Using the parameters in Table~\ref{tab:Table_1} we sketch the cross cut view of the spiral arm in Figure~\ref{fig:fig11_xcut}. We plot each lane tracing the [C\,{\sc ii}]--WIM, H\,{\sc i}, and molecular gas ($^{12}$CO) by an approximate Gaussian profile. 
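The doubling procedure just described amounts to the following; a minimal sketch in which the half-intensity radius is a hypothetical value chosen for illustration (the half-power radii themselves are not tabulated):

```python
def lane_width_pc(r_peak_kpc, r_half_kpc):
    """Full lane width (pc): twice the radial distance between the emission
    peak and the half-intensity point on the rising (near) side."""
    return 2.0 * abs(r_peak_kpc - r_half_kpc) * 1000.0

# Hypothetical example: a peak at 3.45 kpc with its half-power point at 3.325 kpc
width = lane_width_pc(3.45, 3.325)  # 250 pc
```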
Note we did not plot the lanes for the Crux arm because the available data do not include the major part of this tangency (see Section~\ref{sec:results}.3). We list the results in Table 2 only as a rough estimate for this tangency because of its poor longitude coverage. Nevertheless, the larger arm width inferred for Crux stands in contrast to the narrower widths for Perseus and Norma. The Crux spiral tangency is at the largest radial distance ($\sim$ 7 kpc), in contrast to $\sim$ 3.5 -- 4.5 kpc for the other arms studied here. The cross-cut emission profiles of the internal structures (Figure~\ref{fig:fig11_xcut}) for Norma and the start of the Perseus spiral arms are quite consistent with each other. To be consistent with \citet[][]{vallee2014apjs} we present the locations of the emission lanes of [C\,{\sc ii}]--WIM and H\,{\sc i}\space with respect to the location of the $^{12}$CO\space peak emission. We identify the inner edge of the arm as the location of the half power point in the [C\,{\sc ii}]\space emission profile on the near side of the Galactic Center (the side away from the $^{12}$CO\space peak). Similarly, we identify the outer edge as the location of the half power point in the $^{12}$CO\space emission profile on the far side from the Galactic Center. We calculate the spiral arm sizes as the distance between the inner and outer edges as marked in Figure~\ref{fig:fig11_xcut}. With the exception of the Crux arm (which also has poor longitude coverage of the tangency in our analysis), the overall sizes are $\sim$ 480 pc and $\sim$ 500 pc for the Perseus and Norma arms, respectively. The similar width for these arms suggests a width of $\sim$ 500 pc is likely to be characteristic of all spiral arms in the Galaxy.
Our result is close to that obtained by \citet[][]{vallee2014apjs} who estimates a mean width $\sim$ 600 pc for the spiral arms using the range of tangent directions observed using hot dust at the inner edge and $^{12}$CO\space at the outer edge. Our analysis and the values for the widths presented here are likely to be less ambiguous than those of \citet[][]{vallee2014apjs}, as our data are based on fully sampled longitudinal maps in the galactic plane observed with similar spatial and velocity resolutions. Another difference between our data and those used by \citet[][]{vallee2014apjs} is that in his data the tangent directions were observed using maps averaged over a range of latitudes while ours are only in the plane at $b$ = 0\fdg0. \begin{figure}[!ht] \includegraphics[scale=0.38, angle = -90 ]{f11.ps} \caption{Fitted spiral arm structure of the emission lanes. Cross cut view of the structure of the spiral arms Norma and start of Perseus are shown including the relative locations and widths of the emission lanes. The intensity scale is arbitrary. 
} \label{fig:fig11_xcut} \end{figure} \begin{table}[htbp] \begin{center} \caption{Observed spiral arm parameters at the Perseus, Norma and Crux tangencies} \renewcommand*{\arraystretch}{1.4} \setlength{\tabcolsep}{0.1cm} \begin{tabular} {l c c c } \hline\hline {\bf Spiral Arm} & {\bf Perseus} & {\bf Norma} & {\bf Crux}$^1$ \\ \hline \multicolumn {4}{l}{Tangency}\\ longitude $l$ = & 336\fdg0 - 338\fdg0 & 327\fdg0 - 329\fdg5 & 304\fdg9 - 305\fdg9 \\ \hline \multicolumn {4}{l} {\bf Emissions: V$_{peak}$ (km s$^{-1}$) } \\ [C\,{\sc ii}]--WIM & -126 & -99.5 & -52 \\ [C\,{\sc ii}]\space (molecular)$^2$ & -113 & -91.5 &-35 \\ $^{12}$CO\space & -117 &-91 & -32.5\\ H\,{\sc i}\space &-118 & -92 & -38\\ \hline \multicolumn {4}{l} {\bf Emissions: Peak brightness T$_{mb}$(K) }\\ [C\,{\sc ii}]--WIM & 0.56 & 0.35 & $\sim$ 0.94\\ [C\,{\sc ii}](molecular) & 0.58 & 0.35 & 5.0\\ $^{12}$CO\space$^3$ & 2.03 &2.12 & 4.8 \\ H\,{\sc i}\space &70 &118 & 106 \\ \hline \multicolumn {4}{l} {\bf Galactocentric distance of emission layers (kpc)$^4$} \\ [C\,{\sc ii}]--WIM & 3.45 & 4.57 & 6.59 \\ $^{12}$CO\space (Mid plane) & 3.60 &4.76 &7.20 \\ H\,{\sc i}\space &3.58 &4.74 &7.01 \\ \hline\hline \label{tab:Table_1} \end{tabular} \end{center} \vspace{-0.7cm} $^1$This longitude range is offset from the true tangency by $\sim$ 2$^{\circ}$. Our data are available only for this longitude range. \\ $^2$This identification with H$_2$ gas is used just to distinguish it from the compressed WIM. However, in addition to excitation by H$_2$ this component may include some [C\,{\sc ii}]\space excited by atomic H or electrons in the diffuse ionized medium. 
\\ $^3$Uncorrected for main beam efficiency.\\ $^4$From the Galactic center as determined from the V$_{LSR}$ and Galactic rotation velocity.\\ \end{table} \begin{table}[htbp] \begin{center} \caption{Derived Spiral arm structure at the Perseus, Norma and Crux tangencies} \renewcommand*{\arraystretch}{1.4} \setlength{\tabcolsep}{0.1cm} \begin{tabular} {l c c c } \hline\hline {\bf Spiral Arm} & {\bf Perseus} & {\bf Norma} & {\bf Crux}$^1$ \\ \hline \multicolumn {4}{l} {\bf Relative lane location wrt $^{12}$CO\space (pc)} \\ [C\,{\sc ii}]--WIM peak & -190 & -190 & -600\\ H\,{\sc i}\space peak &-20 & -25 & -180 \\ \hline \multicolumn {4}{l} {\bf Emission lane width$^2$ (pc)}\\ [C\,{\sc ii}]--WIM & 250 & 270 & $\sim$ 220\\ $^{12}$CO\space & 400 &400 & $\sim$ 440 \\ H\,{\sc i}\space &400 &620 & $\sim$ 640 \\ Full arm width traced by [C\,{\sc ii}]\space and $^{12}$CO\space &480 & 500& $\sim$ 940 \\ \hline \multicolumn {4}{l} {\bf WIM parameters at the tangency}\\ Width ${\Delta}$V (km s$^{-1}$) & 16 & 17 & 12\\ Intensity (K km s$^{-1}$) & 5.1 & 4.11 & 13.74\\ Path length$^3$ (kpc) & 1.31 & 1.58 & 1.72 \\ $<$n(e)$>$$^4$ (cm$^{-3}$) & 0.52 & 0.42 & 0.74 \\ \hline\hline \label{tab:Table_2} \end{tabular} \end{center} \vspace{-0.7cm} $^1$This longitude range is offset from the true tangency and therefore the parameters listed are only indicative of the trends within the arm structure.\\ $^2$FWHM as estimated using the profile on the rising side (near-side to the Galactic Center)--see text. \\ $^3$Mean path length estimated using the radial distance from the Galactic Center and the width.\\ $^4$Assuming that the [C\,{\sc ii}]\space emission here is due to C$^+$ excitation by electrons and that excitation by H atoms is negligible \citep[c.f.][]{velusamy2012,velusamy2014}. 
\end{table} As noted above, \citet[][]{vallee2014apjs} constructs a cross section view of the spiral arms showing where each spiral arm tracer occurs, based on a single value assigned for the longitude of each tangency for the tracers. This approach provided a useful insight into the different gas lanes in a cross-cut of the profile of the spiral arms. However in reality, as seen in the maps in Figures~\ref{fig:fig3_[CII]map} to \ref{fig:fig5_HImap}, the emissions occur over a range of longitudes and it is too complex to assign a single longitude as the tangency. In other words, all tracers have emission within a range of longitudes representing their tangency. Therefore we take a different approach to resolve and delineate the emission layers within each tangency kinematically by studying the velocity structure in each tracer spectrum. As illustrated in the cartoon of the tangency (Figure~\ref{fig:fig6_l-overview}) the emissions from different layers will show separate velocity V$_{LSR}$ due to Galactic rotation. \subsection{[C\,{\sc ii}]\space emission in the compressed WIM along the spiral arm} The spectra in Figures~\ref{fig:fig7_Norma} to \ref{fig:fig10_Crux} bring out clearly the exceptional characteristics of the [C\,{\sc ii}]\space emission at the tangencies. Namely, near the extreme low velocities (near the inner edge of the spiral arm) only [C\,{\sc ii}]\space shows an emission peak (representing the onset of the spiral feature) while there is little, or no, $^{12}$CO\space and the H\,{\sc i}\space intensity is still increasing with velocity. In contrast, both H\,{\sc i}\space and $^{12}$CO\space emission peaks appear at still higher velocities away from the inner edge. 
This anomalous excess [C\,{\sc ii}]\space emission in the velocity profiles is observed only for the spiral arm tangencies, where the path lengths are largest, and its detection represents the direct unambiguous identification of the large scale Galactic diffuse ionized gas (WIM) by the 158$\mu$m\space [C\,{\sc ii}]\space line. The results presented here corroborate the previous detection of the WIM in the velocity resolved HIFI data for the Scutum tangency \cite[][]{velusamy2012}. Our {\it Herschel} HIFI detection of the diffuse ionized gas provides detailed spatial and kinematic information on the nature of this gas component in the spiral arms. It has been suggested that in the WIM any contribution to the [C\,{\sc ii}]\space emission from excitation by H atoms is small and negligible \citep[][]{velusamy2012,velusamy2014}. For the WNM and WIM conditions (T$_{k}$ =8000 K; \citet{wolfire2003}) the critical density for excitation of [C\,{\sc ii}]\space by H atoms is $\sim$1300 cm$^{-3}$, and for electrons $\sim$ 45 cm$^{-3}$ \citep[see][]{goldsmith2012}. Using the H\,{\sc i}\space intensities integrated over the line width of the [C\,{\sc ii}]--WIM component and the path lengths listed in Table~\ref{tab:Table_2}, we estimate a mean H density $\langle n(H)\rangle$ $\sim$ 0.59 cm$^{-3}$ and 0.31 cm$^{-3}$ in the Perseus and Norma tangencies respectively. As shown in the case for the Scutum tangency \citep[][]{velusamy2012} such low H\,{\sc i}\space densities cannot account for the [C\,{\sc ii}]\space emission detected at the tangencies. Our interpretation that the [C\,{\sc ii}]\space emissions near the tangent velocity are the result of C$^+$ excitation by the collisions with electrons in the WIM and not with H atoms, is further corroborated by strong evidence seen in the data presented here as discussed above in Section \ref{sec:discussion}. 
Although the [C\,{\sc ii}]\space emission from the compressed WIM is observed only at, or near, the tangent longitudes, this gas is presumably present all along the spiral arms in all directions. Unlike in the tangent directions, however, it is not easy to detect this enhancement in the spiral arm layers for two reasons: (i) insufficient path length through the WIM: it is a factor of 4 to 5 smaller when viewed in any direction other than along the tangency, so much higher sensitivities are required to detect the weaker emission; and (ii) at other longitudes, due to the viewing geometry, the velocities are blended with other components, and it is difficult to disentangle the diffuse WIM emission from the [C\,{\sc ii}]\space emission of the molecular gas and PDRs. However, it is possible to separate the emissions with spectral line data obtained at higher sensitivity and by making a spaxel by spaxel comparison with CO emission \citep[e.g.][]{velusamy2014}. Thus, the WIM component may add significantly to the total [C\,{\sc ii}]\space luminosity in galaxies, while being difficult to detect along the average line of sight. \subsubsection{Electron densities in the spiral arm WIM} The electron density of the WIM is an important parameter for understanding the conditions in the ISM, such as the pressure and ionization rate. To estimate the electron densities required to produce the observed [C\,{\sc ii}]\space emission in the WIM we follow the approach in \citet{velusamy2012}.
At the low densities of the diffuse medium the excitation is sub-thermal and the emission is optically thin, therefore the intensity in an ionized gas is given by \cite[see Section 4 in][]{velusamy2012}, \begin{equation} \langle n(e) \rangle \sim 0.27T_3^{0.18}(I( \mbox{[C\,{\sc ii}]})/L_{kpc})^{0.5}, \end{equation} \noindent which assumes a fully ionized gas, a fractional abundance of C$^+$ with respect to the total hydrogen density, n$_t$, X(C$^+$) = 1.4$\times$10$^{-4}$, and where L$_{kpc}$ is the path length in kpc, T$_3$ is the kinetic temperature in 10$^3$ Kelvin, and \begin{equation} I( \mbox{[C\,{\sc ii}]})=\int T_A( \mbox{[C\,{\sc ii}]})dv \end{equation} \noindent is the intensity in K km s$^{-1}$. The [C\,{\sc ii}]\space intensities given in Table~\ref{tab:Table_2} are integrated over the velocity widths for the [C\,{\sc ii}]--WIM profiles. The path lengths listed in Table~\ref{tab:Table_2} are derived using the Galactocentric radial distance and the thickness of the [C\,{\sc ii}]\space emission layer, as discussed above, assuming approximately circular geometry at the tangencies. Using this approach, assuming a fully ionized gas, fractional abundance $x$(e)=n(e)/n$_t$ = 1, we calculate $\langle n(e)\rangle$ for all three tangencies and find $\langle n({\rm e})\rangle$ in the range 0.42 to 0.74 cm$^{-3}$ (see Table~\ref{tab:Table_2}) for T$_{k}$ = 8000 K. For a fully ionized gas this implies a total density $n$(H$^+$) = $n$(e). These values are strictly a lower limit if the gas is partially ionized, $x(e)<$1, but only weakly so. The densities in the WIM at the leading edge of the spiral arms are an order of magnitude higher than the average density in the disk which is dominated by the interarm gas. 
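The scaling relation above is straightforward to evaluate; a minimal Python sketch using the Perseus inputs from Table~\ref{tab:Table_2}. Note this evaluates the quoted relation directly; the tabulated $\langle n(e)\rangle$ values come from the fuller treatment of \citet{velusamy2012} and may include factors not reproduced in this sketch:

```python
def wim_electron_density(t3, i_cii, l_kpc):
    """Mean electron density <n(e)> (cm^-3) from the sub-thermal, optically
    thin relation <n(e)> ~ 0.27 * T3**0.18 * (I([CII]) / L_kpc)**0.5,
    with T3 the kinetic temperature in units of 10^3 K, I([CII]) the
    integrated intensity in K km/s, and L_kpc the path length in kpc."""
    return 0.27 * t3**0.18 * (i_cii / l_kpc) ** 0.5

# Perseus tangency inputs from Table 2: I = 5.1 K km/s, L = 1.31 kpc, T_k = 8000 K
ne_perseus = wim_electron_density(8.0, 5.1, 1.31)
```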
The WIM density we determine from the [C\,{\sc ii}]\space emission is several times higher than the LOS-averaged densities inferred from pulsar dispersion and H$\alpha$ measurements, $n$(e) $\sim$few$\times$10$^{-2}$ cm$^{-3}$ \citep[][]{Haffner2009}, and we argue that our larger mean value is a result of compression by the WIM--spiral arm interaction. \section{Summary} \label{sec:summary} We present large scale [C\,{\sc ii}]\space spectral line maps of the Galactic plane from $l$ = 326\fdg6 to 341\fdg4 and $l$ = 304\fdg9 to 305\fdg9 observed with \textit{Herschel} HIFI using On-The-Fly scans. All maps are shown as longitude-velocity ($l$--$V$) maps at latitude $b$= 0\fdg0, except for $l$ = 305\fdg7 to 305\fdg9, for which $b$= +0\fdg15. The [C\,{\sc ii}]\space $l$--$V$ maps, along with those for H\,{\sc i}\space and $^{12}$CO\space available from southern Galactic plane surveys \citep[][]{Barnes2011,McClure2005}, are used to analyze the internal structure of the spiral arms as traced by these gas layers in the Crux ($l$ = 304\fdg9 -- 305\fdg9), Norma ($l$ = 327$^{\circ}$ -- 329\fdg5) and start of Perseus ($l$ = 336$^{\circ}$ -- 338$^{\circ}$) tangencies. Our key results are: \begin{enumerate} \item We derive the internal structure of the spiral arm features using the velocity resolved emission profiles of [C\,{\sc ii}], H\,{\sc i}, and $^{12}$CO\space averaged over each tangency. These yield the relative locations of the peak emissions of the compressed WIM, H\,{\sc i}, and molecular gas lanes, including the PDRs, and the width of each gas ``lane''. \item We find that [C\,{\sc ii}]\space emission has two components. At the extreme velocities beyond the tangent velocity only [C\,{\sc ii}]\space shows a peak in emission while there is little $^{12}$CO\space and H\,{\sc i}\space is weak.
This [C\,{\sc ii}]\space component traces the compressed WIM and is displaced by about 9 km s$^{-1}$\space in V$_{LSR}$, corresponding to $\sim$ 200 pc towards the inner edge of the spiral arm, with respect to the $^{12}$CO\space emission. The second [C\,{\sc ii}]\space component is roughly coincident with $^{12}$CO\space and traces the PDRs of the molecular gas. The WIM and molecular gas components of [C\,{\sc ii}]\space are distinguished kinematically (appearing at well separated velocities around the tangent velocity). Thus, we find that in the spiral arm tangencies the [C\,{\sc ii}]\space spectral line data alone can be used to study the relative locations of the WIM and molecular gas PDR layers. \item The peak velocity of the [C\,{\sc ii}]--WIM component lies beyond the tangent velocity and corresponds to the radial distance closest to the Galactic center. Thus it is near the inner edge of the spiral arm, representing the onset of the spiral arm feature. Both H\,{\sc i}\space and $^{12}$CO\space peak emissions appear at still higher velocities corresponding to distances away from this inner edge. The $^{12}$CO\space profile thus defines the outer edge of the spiral arm. We derive the width of the spiral arm as the distance between the two extreme half power points of the [C\,{\sc ii}]--WIM and $^{12}$CO\space emission profiles. We estimate spiral arm widths of $\sim$ 480 pc near the start of the Perseus arm and $\sim$ 500 pc for the Norma arm. \item We interpret the excess [C\,{\sc ii}]\space near the tangent velocities as shock compression of the WIM induced by the spiral density waves, marking the innermost edge of the spiral arms. We use the [C\,{\sc ii}]\space intensities and a radiative transfer model to determine the electron densities in the WIM traced by [C\,{\sc ii}]. The electron densities in the compressed WIM are $\sim$ 0.5 cm$^{-3}$, about an order of magnitude higher than the average for the disk.
The enhanced electron density in the WIM is a result of compression of the WIM by the spiral density wave potential. \item Finally, we suggest that the WIM component traced by [C\,{\sc ii}]\space at the spiral arm tangencies exists all along the spiral arms in all directions, but unlike in the tangent direction it is not easy to detect because of insufficient path length of C$^+$ across the arms, and confusion due to velocities blended with other components. Thus, the WIM component along the spiral arms may add significantly to the total [C\,{\sc ii}]\space luminosity in galaxies, while being difficult to detect along the average line of sight. \end{enumerate} In this paper, we demonstrated the utility of spectrally resolved {\it Herschel} HIFI OTF scan maps of [C\,{\sc ii}]\space emission to unravel the internal structure of spiral arms using the velocity resolved spectral line profiles at the spiral arm tangencies. Our results provide direct observational evidence of the cross section view of the spiral arms in contrast to the synthetic model by \citet[][]{vallee2014apjs} using the longitude tangents as traced by different tracers. Combining [N\,{\sc ii}]\space with [C\,{\sc ii}]\space yields additional constraints \citep[e.g.][]{langer2015,Yildiz2015} and future [N\,{\sc ii}]\space spectral line maps of the spiral arms are needed to characterize fully the compressed WIM detected here, and the Galactic arm--interarm interactions. \begin{acknowledgements} We thank the staffs of the ESA {\it Herschel} Science Centre and NASA {\it Herschel} Science Center, and the HIFI, Instrument Control Centre (ICC) for their help with the data reduction routines. In addition, we owe a special thanks to Dr. David Teyssier for clarifications regarding the {\it hebCorrection} tool. This work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. 
{\copyright}2015 California Institute of Technology: USA Government sponsorship acknowledged. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Differential equations have a wide variety of applications in fields of science such as engineering, physics, biology, and pharmacokinetics (\cite{Li:2014}). Yet they have comparatively few applications in economics and finance. In particular, the best-known economic models involving differential equations are the economic growth model and the Black-Scholes equation; the latter is the subject of this paper. In 1997, Myron Scholes was awarded the Nobel Prize in economics for the formulation of the stock options formula through a ``new method of determining the value of derivatives'' (\cite{Jarrow:1999}); Fischer Black, who developed the model with Scholes, had died two years earlier. The Black-Scholes model thus addresses one of the most important issues in quantitative finance: the pricing of options (\cite{Rodrigoa:2006}). The model has significant implications, both theoretical and practical, since finance plays a great role in economies around the world (\cite{Bohner:2009}). \section{Ordinary Differential Equation} \subsection{Background information and underlying assumptions} In practice, the Black-Scholes model of option pricing has been applied to various ``commodities and payoff structures'' (\cite{dar:2005}). It is widely used for American options as well as for European options, and therefore has a wide variety of applications. Before considering the Black-Scholes model, a number of assumptions must be made. Fischer Black calls them the ``ideal conditions'' of the market (\cite{Black:1973}). These assumptions are important to emphasize because it is well known that stock markets are often volatile compared to other parts of the economy. There are five underlying assumptions: \begin{enumerate} \item First, the short-term interest rate is known and constant (\cite{Black:1973}). \item Secondly, the stock pays no dividends (\cite{Black:1973}).
\item Thirdly, transaction costs incurred in buying or selling securities are ignored (\cite{Black:1973}). \item Fourthly, it is possible to borrow any fraction of the price of a security in order to buy or hold it (\cite{Black:1973}). \item Lastly, short selling of a security is allowed in the market when necessary (\cite{Black:1973}). \end{enumerate} Under these assumptions, the option price is a function of time and the stock price only. In the following paragraphs, the option price will be reduced to a function of the stock price only, for simplicity. Generally, option values increase when stock prices rise. This positive relationship between option value and stock price may be seen in the following graph (\cite{Black:1973}). As the figure shows, the curves representing the relationship between option price and stock price at different time periods ($T_1, T_2, T_3$) lie below the 45-degree line, which shows that option prices are more volatile than stock prices (\cite{Black:1973}). This volatility leads to the following statement: if the price of the stock increases by a certain amount, a greater percentage change is generated in the option price. The graph illustrated below shows what the paper seeks to explain through the Black-Scholes model. \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{f1.png} \caption{The relation between option value and stock price} \end{figure} \subsection{Transformation into an Ordinary Differential Equation} The Black-Scholes equation is given by the following expression: \begin{equation*} \frac{\partial C}{\partial t} + \frac{1}{2}\sigma^2s^2\frac{\partial^2 C}{\partial s^2} + rs\frac{\partial C}{\partial s} - rC=0, \end{equation*} where $C(s, t)$ is the price of the option, $s$ the price of the stock, $t$ the time, $\sigma$ the volatility of the stock, and $r$ the interest rate (\cite{Company:2007}).
Firstly, it is useful to transform this partial differential equation (PDE) into an ordinary differential equation (ODE) by proposing a separable solution $C(s, t) = C(s)e^{\lambda t}$. Given that ${\displaystyle \frac{\partial C}{\partial t} = \lambda C(s) e^{\lambda t}}$, ${\displaystyle \frac{\partial C}{\partial s} = \frac{d C(s)}{d s} e^{\lambda t}}$, and ${\displaystyle \frac{\partial^2 C}{\partial s^2} = \frac{d^2 C(s)}{d s^2} e^{\lambda t}}$, substituting these expressions into the PDE gives: \begin{equation*} \lambda C(s) e^{\lambda t} + \frac{1}{2}\sigma^2s^2\frac{d^2 C(s)}{ds^2}\,e^{\lambda t} + rs\frac{dC(s)}{ds}\,e^{\lambda t} - rC(s)\,e^{\lambda t}=0. \end{equation*} The next step is to rearrange the equation into a second-order ODE: \begin{equation*} e^{\lambda t} \left[\frac{1}{2}\sigma^2s^2\frac{d^2 C(s)}{ds^2}+ rs\frac{dC(s)}{ds}+(\lambda - r)C(s) \right] = 0. \end{equation*} Since $e^{\lambda t} \neq 0$, this reduces to \begin{equation*} \frac{1}{2}\sigma^2s^2\frac{d^2 C(s)}{ds^2}+ rs\frac{dC(s)}{ds}+(\lambda - r)C(s)=0. \end{equation*} \subsection{Euler equation} To remove the coefficient of the leading term, divide through by $\sigma^2/2$: \begin{equation*} s^2\frac{d^2 C(s)}{ds^2}+ \frac{2r}{\sigma^2}\,s\frac{dC(s)}{ds}+\frac{2(\lambda-r)}{\sigma^2}C(s)=0. \end{equation*} This equation resembles the Euler equation \begin{equation*} L(y)=x^2\frac{d^2 y}{dx^2}+\alpha x \frac{dy}{dx}+\beta y = 0 \end{equation*} with real constants $\alpha$ and $\beta$ (\cite{W.:2009}). In our case, $\alpha = \frac{2r}{\sigma^2}$ and $\beta = \frac{2(\lambda-r)}{\sigma^2}$. In the case of distinct real roots the Euler equation has solutions of the form $$y=c_1 x^{r_1}+c_2 x^{r_2},$$ with characteristic equation \begin{equation*} F(r) = r(r-1) + \alpha r +\beta =0 \end{equation*} (\cite{W.:2009}).
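As a sanity check on the separation of variables, the substitution $C(s,t)=C(s)e^{\lambda t}$ can be verified symbolically; a minimal sketch using SymPy (an assumed dependency):

```python
import sympy as sp

s, t, lam, r, sigma = sp.symbols('s t lam r sigma', positive=True)
C = sp.Function('C')

# Trial separable solution C(s, t) = C(s) * exp(lam * t)
C_full = C(s) * sp.exp(lam * t)

# Left-hand side of the Black-Scholes PDE
pde = (sp.diff(C_full, t)
       + sp.Rational(1, 2) * sigma**2 * s**2 * sp.diff(C_full, s, 2)
       + r * s * sp.diff(C_full, s)
       - r * C_full)

# Dividing out exp(lam*t) must leave only the second-order ODE in C(s)
ode = sp.simplify(pde * sp.exp(-lam * t))
expected = (sp.Rational(1, 2) * sigma**2 * s**2 * sp.diff(C(s), s, 2)
            + r * s * sp.diff(C(s), s)
            + (lam - r) * C(s))
```

Here `ode - expected` simplifies to zero, confirming the reduction.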
\subsection{Solution of the Black-Scholes equation} By assumption, $\sigma$ and $r$ are positive real numbers, since $r$ is an interest rate and $\sigma$ is the volatility of the stock, as noted earlier in the paper. Now a solution of the form $C(s)=s^k$ can be proposed and applied to the Black-Scholes equation. The following derivatives will be useful in solving our problem: $$C(s)=s^k, \qquad \qquad \frac{dC(s)}{ds}=k\,s^{k-1}, \qquad \qquad \frac{d^2C(s)}{ds^2}=k(k-1)\,s^{k-2}.$$ Substituting these derivatives back into the earlier equation we get: \begin{equation*} \frac{1}{2}\sigma^2s^2k(k-1)\,s^{k-2}+ rsk\,s^{k-1}+(\lambda - r)s^k=0. \end{equation*} The next step is to factor out $s^k$ and derive the characteristic equation introduced earlier: \begin{equation*} s^k \left[\frac{1}{2}\sigma^2k(k-1)+ rk+(\lambda - r) \right] = 0, \end{equation*} \begin{equation*} \frac{1}{2}\sigma^2k^2+ \left(r-\frac{1}{2}\sigma^2\right)k+(\lambda - r)=0. \end{equation*} To find the roots of the characteristic equation, let us compute the discriminant: \begin{equation*} D=\left(r-\frac{1}{2}\sigma^2\right)^2 - 4\cdot \frac{1}{2}\sigma^2(\lambda - r) = r^2+r\sigma^2 +\frac{\sigma^4}{4} - 2\lambda \sigma^2 >0 \end{equation*} by assumption. So, the two distinct roots of the characteristic equation will be: \begin{equation*} k_{1,2} = \frac{-\left(r-\frac{1}{2}\sigma^2\right) \pm \sqrt{r^2+r\sigma^2 +\frac{\sigma^4}{4} - 2\lambda \sigma^2}}{\sigma^2} = \frac{1}{2} - \frac{r}{\sigma^2} \pm \frac{\sqrt{r^2+r\sigma^2 +\frac{\sigma^4}{4} - 2\lambda \sigma^2}}{\sigma^2}. \end{equation*} Therefore, the solution of our problem can be written as: \begin{equation*} C(s) = c_1s^{\frac{1}{2} - \frac{r}{\sigma^2} + \frac{\sqrt{r^2+r\sigma^2 +\frac{\sigma^4}{4} - 2\lambda \sigma^2}}{\sigma^2}} + c_2s^{\frac{1}{2} - \frac{r}{\sigma^2} - \frac{\sqrt{r^2+r\sigma^2 +\frac{\sigma^4}{4} - 2\lambda \sigma^2}}{\sigma^2}}. \end{equation*} The solution represents the option value as a function of the stock price.
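A quick numerical check of the roots of the characteristic equation $\frac{1}{2}\sigma^2k^2+(r-\frac{1}{2}\sigma^2)k+(\lambda-r)=0$ obtained from the $s^k$ substitution, using hypothetical parameter values (5\% rate, 20\% volatility, $\lambda = 3\%$) chosen so that the discriminant is positive:

```python
import math

def bs_char_roots(r, sigma, lam):
    """Roots of the characteristic equation from the s**k substitution:
    (1/2)*sigma^2*k^2 + (r - sigma^2/2)*k + (lam - r) = 0,
    assuming the discriminant is positive (two distinct real roots)."""
    b = r - 0.5 * sigma**2
    disc = b * b - 2.0 * sigma**2 * (lam - r)
    root = math.sqrt(disc)
    return (-b + root) / sigma**2, (-b - root) / sigma**2

def char_poly(k, r, sigma, lam):
    """Left-hand side of the characteristic equation."""
    return 0.5 * sigma**2 * k**2 + (r - 0.5 * sigma**2) * k + (lam - r)

# Hypothetical illustrative parameters: r = 5% rate, sigma = 20% volatility
k1, k2 = bs_char_roots(0.05, 0.20, 0.03)
```

Both roots satisfy the characteristic polynomial to machine precision, and for these parameters the general solution $C(s)=c_1 s^{k_1}+c_2 s^{k_2}$ has one positive and one negative exponent.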
By assumption, $c_1$ and $c_2$ must be positive constants because of the positive relationship between the option price and the stock price introduced earlier. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{f2.jpg} \caption{Fluctuations in stock prices from 2000 to 2009} \label{F} \end{figure} Figure \ref{F} shows fluctuations in stock prices over the 2000--2009 period. The Black-Scholes model presented in this paper is useful for explaining, predicting and estimating option prices based on stock prices in the financial world. The Black-Scholes model gives more accurate estimates of option prices than earlier models because it takes into account factors influencing stock prices such as transaction costs, riskiness of assets and illiquid markets (\cite{Ankudinova:2008}). Therefore, the model is used to estimate European call options, which consolidates its role in applied economics (\cite{Barad:2014}). The Black-Scholes model focuses on an option or security that is held for a certain period of time and gives the owner the right to make market operations such as buying and selling. Two types of securities can be specified: American options and European options (\cite{Black:1973}). American options differ from European options in that they allow the owner to buy or sell the security at any time up to the maturity date, whereas European options do not allow such market operations until the security matures. According to empirical tests made by Black and Scholes, estimated option values deviate slightly from those observed in practice. Although those who demand stocks and bonds pay slightly higher prices for the products in securities markets, suppliers receive payments fairly close to what the formula calculates (\cite{Black:1973}).
This gap between the prices paid and received by demanders and suppliers may be explained by transaction costs $-$ costs associated with exchanges $-$ which arise from a variety of services in an industry. \section{The essence of the Black-Scholes Equation} There are two types of options that can be specified: ``American options'' and ``European options'' (\cite{Black:1973}). American options are those that can be exercised on demand, in particular before the maturity date, whereas European options can be exercised only on a specific date, when they mature. So, American options are more liquid than European options. As noted earlier, an option is more valuable when the stock price is higher (\cite{Black:1973}). The option value also depends on the maturity date, in particular the date of expiration. If the maturity date is far in the future, then the payments, in particular the dividends paid at specific periods of time over the life of the option, will be lower; on the other hand, if the maturity date is near, then dividends are higher. Since the Black-Scholes equation is a theoretical prediction of stock movements in the market, there are some restrictions that should be noted. Beyond the theory, there is a real world in which conditions in the stock market may not be as ideal as the model predicts. Moreover, the reason theoretical models are built in finance and economics is that it is difficult to test or experiment with the real world. In physics and chemistry, for example, experiments can be carried out inside laboratories; experimenting with situations in financial economics, however, imposes high costs, as these situations are closely connected with markets that exist in the real world. So, experimenting with financial markets is difficult because of the global scale of the problems involved. The only information available for conducting policies or solving problems in finance is data collected in the past.
Data from the past is like a tool that helps us understand general patterns of markets, movements of asset prices, behaviors of economic agents, and relationships and co-movements of financial variables. With data collected in the past, it is possible to draw graphs relating the economic variables and showing their trends and movements. Data also helps to predict future market trends, although only roughly. Now that we understand the importance of theoretical models, there are some conditions that should be specified regarding the Black-Scholes model. This is a complete list of assumptions, in addition to those noted earlier in the paper: \begin{enumerate} \item There is perfect information about the short-term interest rate, which is known and constant through time (\cite{Black:1973}). \item Stock prices move randomly, with variance proportional to the square of the stock price; therefore the distribution of stock prices is lognormal over any finite period $t$ (\cite{Black:1973}). \item Unlike in the real world, where stocks pay dividends to shareholders, in the model stocks pay no dividends (\cite{Black:1973}). \item Only European options are considered in the model, so they are exercised on the maturity date (\cite{Black:1973}). \item Unlike in financial markets, where transaction costs exist in all operations, it is assumed that buying or selling stocks imposes no transaction costs (\cite{Black:1973}). \item Any fraction of the price of a security can be borrowed at the short-term interest rate (\cite{Black:1973}). \item Short selling is possible with no penalty or other costs. A seller accepts the price that the buyer quotes, agrees to settle at some specific time in the future, and pays an amount equal to the price of the security at that time (\cite{Black:1973}).
\end{enumerate} Under these assumptions, the option value depends only on time and the stock price; other variables are taken to be constant so as to simplify the model (\cite{Black:1973}). Therefore, the value of the option $w$ reduces to the following simple function: $$w = f(x, t),$$ where $x$ is the price of the stock and $t$ is time (\cite{Black:1973}). The expression above tells us how the value of the option changes if one of the variables, the price of the stock or time, changes. The assumption that stocks pay no dividends makes it possible to address more complicated problems with options. For example, under certain conditions the formula can be applied to American options, which can be exercised before the maturity date; \cite{Black:1973} studied this particular case in 1973. The Black-Scholes equation helps to calculate the value not only of options, but also of more complicated assets such as warrants, which are liabilities of corporations (\cite{Black:1973}). Warrants are generally considered as options. The value of corporate liabilities may also be calculated via the formula, although corporate liabilities are often not viewed as options. We may consider the case of a company whose assets are shares of another company (\cite{Black:1973}). Suppose its bonds are ``pure discount bonds'', meaning that each bond pays a fixed amount of money at maturity (\cite{Black:1973}); assume a maturity of 10 years. Assume also that the company is restricted from paying dividends until the maturity date, and that it plans to sell all of its stock after 10 years and pay the proceeds to the holders of the bonds (\cite{Black:1973}). These conditions let us calculate the value of the corporate liabilities using the Black-Scholes equation, as the assumptions make corporate liabilities similar to options. \section{Partial Differential equation} This part of our paper draws heavily on material from the book by \cite{Salsa:2014}.
Also note that \cite{Zhexembay:2016} focus on the numerical solution of the nonlinear Black-Scholes equation using the Finite Element Method. Let us construct a differential equation that describes the evolution of $V(S, t).$ The following hypotheses are established: \begin{itemize} \item $S$ follows a lognormal law; \item The volatility $\sigma$ is constant and known; \item There are no transaction costs or dividends; \item It is possible to buy or sell any number of units of the underlying asset; \item There is an interest rate $r > 0$ for a riskless investment. This means that 1 dollar in a bank at time $t = 0$ becomes $e^{rT}$ dollars at time $T$; \item The market is arbitrage free. \end{itemize} The last hypothesis is crucial in the construction of the model and means that there is no opportunity for instantaneous risk-free profit. It could be considered a sort of conservation law for money! We can translate this principle into mathematical terms through the notion of hedging and the existence of self-financing portfolios. The fundamental idea is to calculate the differential of $V$ through It\^o's formula and then to build a riskless portfolio $\Pi$. This portfolio contains shares of $S$ and the option. $\Pi$ must grow at the current interest rate $r$, i.e. $d\Pi = r\Pi dt$, which turns out to coincide with the fundamental Black-Scholes equation. Now let us move to the calculation of the differential of $V$. Since \begin{equation*} dS = \mu S dt + \sigma S dB, \end{equation*} It\^o's formula gives \begin{equation}\label{1} dV = [V_t+\mu SV_s + \tfrac{1}{2}\sigma^2 S^2V_{ss}] dt+\sigma SV_s dB. \end{equation} In formula (\ref{1}) we have the risk term $\sigma SV_s dB$, so our next goal is the elimination of this term. This can be achieved by constructing a portfolio $\Pi$ consisting of the option and a quantity $-\triangle$ of the underlying: $$\Pi = V - S\triangle.$$ This operation is a valuable financial procedure called hedging.
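Anticipating the choice $\triangle = V_s$ made below, a small Monte Carlo experiment can illustrate how hedging removes the risk term. This is an illustrative sketch with made-up parameter values, using the standard closed-form call value (stated later in the paper) for $V$:

```python
import math, random

def norm_cdf(x):
    # Distribution function of a standard normal variable
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price(S, E, r, sigma, tau):
    # Standard Black-Scholes call value, with tau = T - t
    d_plus = (math.log(S / E) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    return S * norm_cdf(d_plus) - E * math.exp(-r * tau) * norm_cdf(d_minus)

random.seed(0)
# Illustrative parameters (assumptions): spot, strike, rate, volatility, drift, maturity, step
S0, E, r, sigma, mu, tau, dt = 100.0, 100.0, 0.05, 0.2, 0.10, 0.5, 1e-4

d_plus = (math.log(S0 / E) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
delta = norm_cdf(d_plus)              # the hedge ratio Delta = V_s

V0 = call_price(S0, E, r, sigma, tau)
dV_list, dPi_list = [], []
for _ in range(2000):
    dS = S0 * (mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    dV = call_price(S0 + dS, E, r, sigma, tau - dt) - V0
    dV_list.append(dV)                # unhedged increment: carries the stochastic risk
    dPi_list.append(dV - delta * dS)  # hedged increment dPi = dV - Delta dS

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((u - m) ** 2 for u in xs) / len(xs))

print(std(dV_list), std(dPi_list))    # the hedged fluctuations are far smaller
```

Over one small time step, the fluctuations of the hedged increment are dominated by second-order terms in $dS$ and are therefore much smaller than those of the unhedged option value.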
Now, let us turn our attention to a particular period of time, say $(t, t+dt)$, during which $\Pi$ undergoes a variation $d\Pi$. If we succeed in keeping $\triangle$ equal to its value at $t$ during the interval $(t, t+dt)$, the variation of $\Pi$ is given by $$d\Pi = dV - \triangle dS.$$ Since this formula is a cornerstone of the whole construction, it should be explained properly. Using (\ref{1}) we obtain: \begin{equation}\label{2} d\Pi=dV-\triangle dS = [V_t+\mu SV_s+\frac{1}{2}\sigma^2S^2V_{ss} - \mu S\triangle]dt +\sigma S(V_s - \triangle)dB. \end{equation} Thus, if we choose \begin{equation}\label{3} \triangle =V_s, \end{equation} where $\triangle$ is the value of $V_s$ at $t$, we eliminate the stochastic component in (\ref{2}). The evolution of the portfolio $\Pi$ is now deterministic and its dynamics can be described by the equation: \begin{equation}\label{4} d\Pi = [V_t+\frac{1}{2}\sigma^2S^2V_{ss}]dt. \end{equation} The choice (\ref{3}) may seem inexplicable, but it can be justified by the fact that $V$ and $S$ are dependent and the random components in their dynamics are both proportional to $S$. Thus, if the linear combination of $V$ and $S$ is chosen wisely, this component should vanish. Now let us apply the no-arbitrage principle. Investing $\Pi$ at the riskless rate $r$, after a time $dt$ we have an increment $r\Pi dt$, which should be compared with the $d\Pi$ given by (\ref{4}). If $d\Pi > r\Pi dt$, we borrow an amount $\Pi$ to invest in the portfolio. The return $d\Pi$ would be greater than the cost $r\Pi dt$, so that we make an instantaneous riskless profit $$d\Pi - r\Pi dt.$$ If $d\Pi < r\Pi dt$, we sell the portfolio $\Pi$ and invest the proceeds in a bank at the rate $r$. This time we would make an instantaneous risk-free profit $$r\Pi dt - d\Pi.$$ Therefore, the arbitrage-free hypothesis forces \begin{equation}\label{5} d\Pi = [V_t+\frac{1}{2}\sigma^2S^2V_{ss}]dt = r\Pi dt .
\end{equation} Substituting $\Pi = V - S\triangle = V-V_sS$ into (\ref{5}), we obtain the famous Black-Scholes equation: \begin{equation}\label{6} \mathcal{L} V = V_t+\frac{1}{2}\sigma^2S^2V_{ss}+rSV_s -rV=0. \end{equation} Since the coefficient of $V_{ss}$ is positive, (\ref{6}) is a backward equation. In order to get a well-posed problem, we need to impose a final condition (at $t = T$), a side condition at $ S=0$ and one condition for $S \rightarrow +\infty$. \begin{itemize} \item{Final conditions $(t=T)$} Call. If at time $T$ we have $S>E$, then we exercise the option, with a profit $S-E$. If $S \le E$, we do not exercise the option and there is no profit. The \textit{final payoff} of the option is therefore $$C(S, T) = \text{max}[S-E, 0] = (S-E)^+, \quad S>0.$$ Put. If at time $T$ we have $S \ge E$, we do not exercise the option, while we exercise it if $S < E$, with a profit $E-S$. The \textit{final payoff} of the option is therefore $$P(S,T) = \text{max}[E-S, 0] = (E-S)^+, \quad S>0.$$ \item{Boundary conditions ($S=0$ and $S \rightarrow +\infty$)} Call. If $S=0$ at some time $t$, then $S=0$ thereafter, and the option has no value; thus $$C(0, t) = 0, \quad t\ge 0.$$ As $S \rightarrow +\infty$ at time $t$, the option will certainly be exercised and its value becomes practically equal to $S$ minus the discounted exercise price, so $$C(S,t) - (S-e^{-r(T-t)}E) \rightarrow 0 \ \text{as} \ S \rightarrow +\infty.$$ Put. If $S=0$ at a certain time, so that $S=0$ thereafter, the final profit is $E$. Thus, to determine $P(0,t)$ we need the value at time $t$ of a payment $E$ at time $T$, that is $$P(0, t) = Ee^{-r(T-t)}.$$ If $S \rightarrow +\infty$, we do not exercise the option, hence $$P(S, t) \rightarrow 0 \ \text{as} \ S \rightarrow +\infty.$$ \end{itemize} \textit{Solution of the Black-Scholes equation}\medskip Let us summarize our model in the two cases.\medskip \begin{itemize} \item{Black-Scholes equation} \begin{equation}\label{BS} V_t+\frac{1}{2}\sigma^2S^2V_{ss}+rSV_s -rV=0.
\end{equation} \item{Final payoffs} $$C(S, T) = (S-E)^+ \qquad \text{(call)} $$ $$P(S,T) =(E-S)^+ \qquad \text{(put)}. $$ \item{Boundary conditions} $$C(0,t)=0, \qquad C(S,t) - (S-e^{-r(T-t)}E) \rightarrow 0 \qquad \text{as} \qquad S \rightarrow +\infty \qquad \text{(call)}$$ $$P(0, t) = Ee^{-r(T-t)}, \qquad P(S, t) \rightarrow 0 \qquad \text{as} \qquad S \rightarrow +\infty \qquad \text{(put)}.$$ \end{itemize} The problem above can be reduced to a global Cauchy problem for the heat equation, from which explicit formulas for the solutions can be obtained. Firstly, a change of variables should be performed so as to reduce the Black-Scholes equation to constant coefficients and to pass from backward to forward in time. Note also that $1/ \sigma^2$ can be considered an intrinsic reference time, while the exercise price $E$ gives a characteristic order of magnitude for $S$ and $V$. Thus, $1/ \sigma^2$ and $E$ can be used as rescaling factors to introduce dimensionless variables. Let us set $$x= \ln \left(\frac{S}{E} \right), \qquad \tau = \frac{1}{2} \sigma^2 (T-t), \qquad \omega(x, \tau) = \frac{1}{E}V(Ee^x, T-\frac{2\tau}{\sigma^2}).$$ When $S$ goes from $0$ to $+\infty$, $x$ varies from $-\infty$ to $+\infty$. When $t=T$ we have $\tau = 0$. Moreover: $$V_t = -\frac{1}{2}\sigma^2E\omega_\tau,$$ $$V_s = \frac{E}{S}\omega_x, \qquad V_{ss} = -\frac{E}{S^2}\omega_x+\frac{E}{S^2}\omega_{xx}.$$ Substituting this into (\ref{BS}) we end up with the following: $$-\frac{1}{2}\sigma^2\omega_\tau+\frac{1}{2}\sigma^2(-\omega_x+\omega_{xx}) +r\omega_x - r\omega = 0$$ or $$\omega_\tau=\omega_{xx} +(k-1)\omega_x - k\omega$$ where $k=\frac{2r}{\sigma^2}$ is a dimensionless parameter. If we set $$\omega(x, \tau) = e^{-\frac{k-1}{2}x - \frac{(k+1)^2}{4}\tau}\nu(x, \tau),$$ we find that $\nu$ satisfies $$\nu_\tau - \nu_{xx} = 0, \ -\infty<x< +\infty, \ 0 < \tau \le \frac{1}{2}\sigma^2 T.$$ It is worth mentioning that the final condition for $V$ becomes an initial condition for $\nu$.
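As a consistency check (a sketch assuming the sympy library is available), one can verify symbolically that the substitution for $\omega$ turns the equation $\omega_\tau=\omega_{xx}+(k-1)\omega_x-k\omega$ into the heat equation for $\nu$; here we test it on the particular heat-equation solution $\nu = e^{x+\tau}$:

```python
import sympy as sp

x, tau, k = sp.symbols('x tau k')

# Take any particular solution of the heat equation nu_tau = nu_xx, e.g. nu = exp(x + tau)
nu = sp.exp(x + tau)

# Define omega from nu exactly as in the text
omega = sp.exp(-(k - 1) / 2 * x - (k + 1) ** 2 / 4 * tau) * nu

# omega must then satisfy omega_tau = omega_xx + (k - 1) omega_x - k omega identically in k
residual = sp.diff(omega, tau) - sp.diff(omega, x, 2) \
    - (k - 1) * sp.diff(omega, x) + k * omega
print(sp.simplify(residual))  # 0
```

The residual vanishes identically in the parameter $k$, confirming that the exponential prefactor absorbs exactly the first-order and zeroth-order terms.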
Performing several further steps, we find that $$\nu(x, 0) = g(x) = \left\{ \begin{array}{l} \displaystyle{e^{\frac{1}{2}(k+1)x} - e^{\frac{1}{2}(k-1)x},} \ x>0\\ 0, \ x \le 0 \end{array} \right.$$ for the call option, and $$\nu(x, 0) = g(x) = \left\{ \begin{array}{l} \displaystyle{e^{\frac{1}{2}(k-1)x} - e^{\frac{1}{2}(k+1)x},} \ x<0\\ 0, \ x \ge 0 \end{array} \right.$$ for the put option. Now we can use the preceding results to derive the formula for the solution. The solution is unique and is given by the formula $$\nu(x, \tau) = \frac{1}{\sqrt{4\pi \tau}} \int_{-\infty}^{+\infty} g(y)e^{-\frac{(x-y)^2}{4\tau}}dy.$$ To obtain a more explicit formula, let $y=\sqrt{2\tau}\,z+x$. Then, focusing on the call option: \begin{eqnarray*} \nu(x, \tau) &=& \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} g(\sqrt{2\tau}\,z+x)e^{-\frac{z^2}{2}}dz \\ &=& \frac{1}{\sqrt{2\pi}} \left[ \int_{-\frac{x}{\sqrt{2\tau}}}^\infty e^{\frac{1}{2}(k+1)(\sqrt{2\tau}\,z+x) - \frac{1}{2}z^2}dz - \int_{-\frac{x}{\sqrt{2\tau}}}^\infty e^{\frac{1}{2}(k-1)(\sqrt{2\tau}\,z+x) -\frac{1}{2}z^2}dz \right]. \end{eqnarray*} After evaluating these two integrals, we get $$\nu(x, \tau) = e^{\frac{1}{2}(k+1)x+\frac{1}{4}(k+1)^2 \tau} N(d_+) -e^{\frac{1}{2}(k-1)x+\frac{1}{4}(k-1)^2 \tau} N(d_-) $$ where $$N(z)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^z e^{-\frac{1}{2}y^2} dy$$ is the distribution function of a standard normal random variable and $$d_\pm=\frac{x}{\sqrt{2\tau}}+\frac{1}{2}(k\pm 1)\sqrt{2\tau}.$$ Returning to the original variables, for the call we have: $$C(S, t) = SN(d_+) - Ee^{-r(T-t)}N(d_-)$$ with $$d_\pm=\frac{\ln(S/E) +(r \pm \frac{1}{2}\sigma^2)(T-t)}{\sigma \sqrt{T-t}}.$$ The formula for the put is $$P(S, t) = Ee^{-r(T-t)}N(-d_-) - SN(-d_+).$$ It can be shown that $$\triangle = C_s = N(d_+) > 0 \quad \qquad \text{for the call,} $$ $$\triangle = P_s = N(d_+) - 1 < 0 \qquad \text{for the put} \ $$ (\rm \cite{Salsa:2014}).
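These closed formulas are straightforward to implement. The following sketch (with illustrative parameter values of our choosing) evaluates the call via $C = SN(d_+)-Ee^{-r(T-t)}N(d_-)$ together with the standard put expression $P = Ee^{-r(T-t)}N(-d_-)-SN(-d_+)$, and checks numerically the put-call parity relation $S+P-C=Ee^{-r(T-t)}$ discussed below, as well as the lower bound $C \ge S - Ee^{-r(T-t)}$:

```python
import math

def N(z):
    # Distribution function of a standard normal random variable
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call_put(S, E, r, sigma, tau):
    # Black-Scholes call and put values; tau = T - t > 0
    d_plus = (math.log(S / E) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    C = S * N(d_plus) - E * math.exp(-r * tau) * N(d_minus)
    P = E * math.exp(-r * tau) * N(-d_minus) - S * N(-d_plus)
    return C, P

# Illustrative values (assumptions): spot 110, strike 100, r = 5%, sigma = 30%, one year left
S, E, r, sigma, tau = 110.0, 100.0, 0.05, 0.3, 1.0
C, P = bs_call_put(S, E, r, sigma, tau)
print(C, P)
print(S + P - C, E * math.exp(-r * tau))   # put-call parity: the two numbers agree
```

Parity holds to machine precision here, since both option formulas are built from the same $d_\pm$.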
We should pay particular attention to the fact that both $C_s$ and $P_s$ are strictly increasing with respect to $S$. Thus, the functions $C, P$ are strictly convex functions of $S$, for every $t$; namely $C_{ss} > 0$ and $P_{ss} > 0$. \begin{itemize} \item Put-call parity. Put and call options with the same exercise price and expiry time can be connected by forming the following portfolio: \end{itemize} $$\Pi = S+P-C,$$ where the minus sign in front of $C$ denotes a \textit{short position} (negative holding). For this portfolio the final payoff is $$\Pi (S, T) = S+(E-S)^+ - (S-E)^+.$$ If $E \ge S$, we have $$\Pi (S, T) = S+(E-S) - 0 = E,$$ while if $E \le S$, $$\Pi (S, T) = S+ 0 - (S-E) = E.$$ Therefore, at expiry, the payoff is always equal to $E$; it is riskless, and its value at $t$ must be equal to the discounted value of $E$, since the no-arbitrage condition was imposed. So, we find the following relation (\textit{put-call parity}): \begin{equation}\label{PC} S+P-C = Ee^{-r(T-t)}. \end{equation} Formula (\ref{PC}) reveals that, once the value of $C$ (or $P$) is available, the value of $P$ (or $C$) can be obtained. From (\ref{PC}), since $Ee^{-r(T-t)} \le E$ and $P \ge 0$, we get $$C(S, t) = S+P - Ee^{-r(T-t)} \ge S-E$$ and since $C \ge 0,$ $$C(S, t) \ge (S-E)^+.$$ It can be observed that the value of $C$ is always greater than the final payoff. However, this property does not hold for a put. In fact, $$P(0, t) = Ee^{-r(T-t)} \le E,$$ so the value of $P$ is less than the final payoff when $S$ approaches $0$, while it exceeds the payoff for larger values of $S$. Figures \ref{EC} and \ref{EP} demonstrate this. \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{f3.png} \caption{The value of the European call option} \label{EC} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{f4.png} \caption{The value of the European put option} \label{EP} \end{figure} \begin{itemize} \item Different volatilities.
The values of two options with different volatilities $\sigma_1$ and $\sigma_2$ can be compared by means of the maximum principle. Let us assume that the exercise price and strike time are the same in both cases, $E$ being the exercise price and $T$ the strike time. We also assume that $\sigma_1 > \sigma_2$. Denote the values of the respective call options by $C^{(1)}, C^{(2)}$. Since a decrease in risk should decrease the value of the option, what we want to show is that \end{itemize} $$C^{(1)} > C^{(2)}, \quad S>0, \quad 0 \le t \le T.$$ Let $W = C^{(1)} - C^{(2)}$. Then \begin{equation}\label{W} W_t +\frac{1}{2}\sigma_2^2S^2W_{ss}+rSW_s -rW=\frac{1}{2}(\sigma_2^2 - \sigma_1^2)S^2C_{ss}^{(1)} \end{equation} with $W(S, T) = 0, \ W(0,t ) = 0$ and $W \rightarrow 0$ as $S \rightarrow +\infty.$ Clearly (\ref{W}) is a nonhomogeneous equation, whose right-hand side is negative for $S>0$, since $ C_{ss}^{(1)} > 0$. Since $W$ is continuous in the half-strip $[0,+\infty) \times [0, T]$ and vanishes at infinity, it attains a global minimum at some point $(S_0, t_0)$. We claim that this minimum is zero and cannot be attained at a point of $(0,+\infty) \times [0, T)$. Suppose, to the contrary, that $W(S_0, t_0) < 0$ with $S_0 > 0$ and $0<t_0<T.$ Then $$W_t(S_0, t_0) = 0$$ and $$W_s(S_0, t_0) = 0, \qquad W_{ss}(S_0, t_0) \ge 0.$$ Substituting $S=S_0, \ t=t_0$ into (\ref{W}), the left-hand side is at least $-rW(S_0,t_0) > 0$, while the right-hand side is negative: a contradiction. (If $t_0 = 0$, then $W_t(S_0, 0) \ge 0$ and the same argument applies.) Thus, $W = C^{(1)} - C^{(2)} > 0$ for $S>0, \ 0<t<T$. In 1972, Fischer Black and Myron Scholes carried out empirical tests on call options (\cite{Black:1973}). The results of the tests show that the actual prices at which agents of the economy buy and sell options deviate systematically from the prices predicted by the Black-Scholes model (\cite{Black:1973}).
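This monotonicity in the volatility can also be observed numerically. The sketch below (illustrative parameters of our choosing, reusing the closed call formula) compares $C^{(1)}$ and $C^{(2)}$ for $\sigma_1 > \sigma_2$ on a grid of stock prices:

```python
import math

def N(z):
    # Distribution function of a standard normal random variable
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, E, r, sigma, tau):
    # Black-Scholes call value; tau = T - t > 0
    d_plus = (math.log(S / E) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    return S * N(d_plus) - E * math.exp(-r * tau) * N(d_minus)

# Illustrative constants (assumptions): same strike and expiry, two volatilities
E, r, tau = 100.0, 0.05, 0.5
sigma1, sigma2 = 0.4, 0.2            # sigma1 > sigma2
for S in [50.0, 80.0, 100.0, 120.0, 200.0]:
    C1 = bs_call(S, E, r, sigma1, tau)
    C2 = bs_call(S, E, r, sigma2, tau)
    print(S, C1 - C2)                # W = C1 - C2 stays positive across the grid
```

The difference $W$ is positive everywhere on the grid, in agreement with the maximum-principle argument.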
Options are bought at consistently higher prices than those predicted by the model, whereas options are sold at approximately the prices the model predicts (\cite{Black:1973}). It should be noted that the difference in the prices paid by option buyers is higher for low-risk stocks than for high-risk stocks (\cite{Black:1973}). The latter point makes sense because low-risk stocks are always preferable to high-risk stocks, since the probability that low-risk stocks generate large profits and do not default is higher. For option buyers, there are high transaction costs involved in the real world. This fact might explain why the formula underestimates the price paid by option buyers, because the model is built under the assumption of no transaction costs. According to \cite{Black:1973}, taking into account the magnitude of transaction costs in the market, the misestimation of prices does not indicate potential profit opportunities for speculators. \section{Separation of variables method} This section considers the Black-Scholes partial differential equation of the form $$\frac{\partial C}{\partial t}+\frac{1}{2}\sigma^2s^2\frac{\partial^2 C}{\partial s^2} +rs\frac{\partial C}{\partial s} - rC = 0.$$ The aim of the section is to introduce the separation of variables method in order to solve the equation and find the general solution. Let the function $C(s, t)$ be written in the following form: $$C(s, t)=S(s)T(t).$$ Then the following partial derivatives can be derived: $$\frac{\partial C}{\partial t} = ST',$$ $$\frac{\partial C}{\partial s} = TS',$$ $$\frac{\partial^2 C}{\partial s^2} = S''T.$$ Substituting the derivatives back into the original equation, we get: $$ST'+\frac{1}{2}\sigma^2s^2S''T+rsS'T-rST= 0.$$ For simplicity, let $\frac{1}{2}\sigma^2 = a$ and $r=b$ for now.
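Once the separated product solutions are obtained below, they can be checked directly against the PDE. The following sketch (with illustrative constants of our choosing) anticipates the result: it computes the exponents $d_{1,2}$ from the Euler-type characteristic equation $d(d-1)+\frac{b}{a}d-\frac{c}{a}=0$ and verifies that $C(s,t)=s^{d}e^{(b-c)t}$ makes the PDE residual vanish:

```python
import math

# Illustrative constants (our assumptions): sigma = 0.3, r = 0.05, separation constant c = 0.02
sigma, r, c = 0.3, 0.05, 0.02
a, b = 0.5 * sigma ** 2, r

# Exponents from the characteristic equation d(d - 1) + (b/a) d - c/a = 0
disc = math.sqrt((b / a - 1.0) ** 2 + 4.0 * c / a)
d1 = (1.0 - b / a + disc) / 2.0
d2 = (1.0 - b / a - disc) / 2.0

def residual(d, s, t):
    # C = s^d e^{(b - c) t}; plug the analytic derivatives into C_t + a s^2 C_ss + b s C_s - b C
    C = s ** d * math.exp((b - c) * t)
    C_t = (b - c) * C
    C_s = d * C / s
    C_ss = d * (d - 1.0) * C / s ** 2
    return C_t + a * s ** 2 * C_ss + b * s * C_s - b * C

for d in (d1, d2):
    print(d, residual(d, 1.7, 0.4))   # both residuals vanish up to rounding
```

The residual reduces algebraically to $C\,[a\,d(d-1)+b\,d-c]$, which is zero precisely when $d$ solves the characteristic equation.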
Dividing the equation by $ST,$ $$as^2\frac{S''}{S} +bs\frac{S'}{S}+\frac{T'}{T} - b= 0.$$ Rearranging and equating both sides to a constant ($c>0$), we get a pair of ordinary differential equations: $$as^2\frac{S''}{S} +bs\frac{S'}{S}=b-\frac{T'}{T} = c.$$ The first equation $$as^2S'' +bsS' = cS$$ can be solved by the Euler equation method introduced earlier. Dividing both sides of the equation by $a$ and rearranging: $$s^2S'' +\frac{b}{a}sS' - \frac{c}{a}S = 0.$$ The characteristic equation is therefore $$d(d-1) +\frac{b}{a}d - \frac{c}{a} = 0$$ or $$d^2 +\left(\frac{b}{a} -1\right)d - \frac{c}{a} = 0.$$ The characteristic equation has the following solutions (assuming $D > 0$): $$d_{1,2} = \frac{\left(1-\frac{b}{a}\right) \pm \sqrt{\left(\frac{b}{a} -1\right)^2 + \frac{4c}{a}}}{2}.$$ Therefore, the general solution of the equation is $$S(s) = c_1s^{\frac{(1-\frac{b}{a}) + \sqrt{(\frac{b}{a} -1)^2 + \frac{4c}{a}}}{2}} +c_2s^{\frac{(1-\frac{b}{a}) - \sqrt{(\frac{b}{a} -1)^2 + \frac{4c}{a}}}{2}}.$$ Substituting the respective expressions for $a$ and $b$, we get the solution in explicit form: $$S(s) = c_1s^{\frac{(1-\frac{2r}{\sigma^2}) + \sqrt{(\frac{2r}{\sigma^2}-1)^2 + \frac{8c}{\sigma^2}}}{2}} +c_2s^{\frac{(1-\frac{2r}{\sigma^2}) - \sqrt{(\frac{2r}{\sigma^2}-1)^2 + \frac{8c}{\sigma^2}}}{2}}.$$ Next, the aim is to find the solution $T(t)$ from: $$b-\frac{T'}{T} = c,$$ $$T' - (b-c)T = 0.$$ The solution of this equation is therefore (up to a multiplicative constant, which can be absorbed into $c_1$ and $c_2$): $$T(t) = e^{(b-c)t}.$$ Since the two factors $S(s)$ and $T(t)$ have been found, the final solution of the Black-Scholes equation $$\frac{\partial C}{\partial t}+\frac{1}{2}\sigma^2s^2\frac{\partial^2 C}{\partial s^2} +rs\frac{\partial C}{\partial s} - rC = 0$$ can be expressed as $$C(s, t)=S(s)T(t) = \Bigl[c_1s^{\frac{(1-\frac{2r}{\sigma^2}) + \sqrt{(\frac{2r}{\sigma^2}-1)^2 + \frac{8c}{\sigma^2}}}{2}} +c_2s^{\frac{(1-\frac{2r}{\sigma^2}) - \sqrt{(\frac{2r}{\sigma^2}-1)^2 +\frac{8c}{\sigma^2}}}{2}}\Bigr] e^{(r-c)t}.$$ \begin{itemize} \item{Constructing a plot}
\end{itemize} Nowadays, there are many tools for visualizing solutions of the Black-Scholes equation. We relied on one of them to plot the solution: the Statistics Online Computational Resource (SOCR) allowed us to construct the two graphs below. In Figure \ref{CO} the exercise price was taken to be $E = 100$, the interest rate $r= 0.5$, the dividend rate $\delta = 0$, the volatility $\sigma = 0.3$ and the time to expiry $T-t= 1$. The graph is plotted with respect to the stock price $S$ and the option price $V$. In Figure \ref{PO} the values of the variables were taken to be the same, and the plot was made for the put option. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{5.png} \caption{Call option} \label{CO} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{6.png} \caption{Put option} \label{PO} \end{figure} \section{Conclusion} To conclude, the Black-Scholes model is highly valued in quantitative finance because of its accurate and useful estimation of option prices. The Black-Scholes equation describes option pricing by taking into account such factors as the time period $t$, the risk-free interest rate $r$ and the volatility of stock prices $\sigma$ (\cite{Sheraza:2014}). The derived solution for the option value is closely related to corporate liabilities; therefore, the formula derived may be applied to securities, including common stocks and bonds (\cite{Black:1973}). This feature of the Black-Scholes model illustrates its flexibility and its efficiency when applied in different contexts in the financial world. In this paper, we proposed a new method of solving the famous Black-Scholes equation: the separation of variables method was used to derive a solution to the partial differential equation. \small{ \bibliographystyle{elsarticle-harv}
\section{Introduction} Let $E(z,s)$ be the usual real-analytic Eisenstein series for the group $\Gamma = PSL_2(\ensuremath{\mathbb Z})$. \begin{mytheo} Let $K$ be a fixed compact subset of $\ensuremath{\mathbb H}$. Then for $T \geq 1$, \begin{equation} \max_{z \in K} |E(z, 1/2 + iT)| \ll_{K, \varepsilon} T^{3/8 + \varepsilon}. \end{equation} \end{mytheo} For comparison, Iwaniec and Sarnak showed, for $u_j$ a Hecke-Maass cusp form of spectral parameter $t_j$, that \begin{equation} \max_{z \in K} |u_j(z)| \ll_K t_j^{5/12 + \varepsilon}. \end{equation} The sup norm problem has been an active area and now there exist non-trivial estimates for cusp forms of large level, on higher rank groups, and for half-integral weight forms \cite{BlomerHolowinsky} \cite{Templier1} \cite{HarcosTemplier1} \cite{HarcosTemplier2} \cite{Templier2} \cite{HRR} \cite{BlomerPohl} \cite{BlomerMaga} \cite{Marshall} \cite{Kiral}. Nevertheless, the basic estimate for $\Gamma = PSL_2(\ensuremath{\mathbb Z})$ in the eigenvalue aspect has not been improved. The case of Eisenstein series seems to have been largely neglected up to now, at least for the sup norm problem, but not for some other norms: Luo and Sarnak proved QUE for Eisenstein series \cite{LuoSarnakQUE}, Spinu estimated the $L^4$ norm \cite{Spinu}, and the author has investigated QUE for geodesic restrictions \cite{Young}. The Eisenstein series case is similar in some ways to the cuspidal case, but has some technical problems because of the constant term in the Fourier expansion. Actually, the main impetus for this note was the realization that one can choose an efficient amplifier for the Eisenstein series, which leads to the improved exponent compared to the cusp form case (see \cite[Remark 1.6]{IwaniecSarnak}). \section{Notation and summary of the results of Iwaniec and Sarnak} For $\text{Re}(s) > 1$, let \begin{equation} \label{eq:EisensteinDef} E(z,s) = \sum_{\gamma \in \Gamma_{\infty} \backslash \Gamma} (\text{Im}(\gamma z))^s. 
\end{equation} As shorthand, we sometimes write $E_t(z) = E(z, 1/2 + it)$. The Fourier expansion states \begin{equation} \label{eq:Fourier} \zeta^*(2s) E(z,s) = \zeta^*(2s) y^s + \zeta^*(2(1-s)) y^{1-s} + 2 \sqrt{y} \sum_{n \neq 0} \tau_{s-\frac12}(n) e(nx) K_{s-\ensuremath{ \frac{1}{2}}}(2 \pi |n| y), \end{equation} where $\tau_{w}(n) = \sum_{ab = |n|} (a/b)^w$, and \begin{equation} \zeta^*(2s) = \pi^{-s} \Gamma(s) \zeta(2s). \end{equation} The Fourier expansion implies the functional equation $\zeta^*(2s) E(z,s) = \zeta^*(2(1-s)) E(z, 1-s)$. Specializing to $s=1/2 + it$, and setting $\varphi(s) = \zeta^*(2(1-s))/\zeta^*(2s)$, one obtains \begin{equation} E_t(z) = y^{1/2 + it} + \varphi(1/2+it) y^{1/2-it} + \frac{2 \sqrt{y}}{\zeta^*(1+2it)} \sum_{n \neq 0} \tau_{it}(n) e(nx) K_{it}(2 \pi |n| y). \end{equation} Let $\alpha_n$ be an arbitrary sequence of complex numbers. The main technical result proved by Iwaniec and Sarnak \cite[(A.12)]{IwaniecSarnak} is \begin{multline} \label{eq:IwaniecSarnakBound} \sum_{T \leq t_j \leq T+1} \Big| \sum_{n \leq N} \alpha_n \lambda_j(n) \Big|^2 |u_j(z)|^2 + \int_T^{T+1} \Big| \sum_{n \leq N} \alpha_n \tau_{it}(n) \Big|^2 |E_t(z)|^2 dt \\ \ll T^{\varepsilon} \Big(T\sum_{n \leq N} |\alpha_n|^2 + T^{1/2} (N + N^{1/2} y) \Big(\sum_{n \leq N} |\alpha_n| \Big)^2 \Big). \end{multline} Here $\lambda_j(n)$ are the Hecke eigenvalues of the Hecke-Maass cusp forms $u_j$, scaled so the Ramanujan-Petersson conjecture is $|\lambda_j(n)| \leq \tau_0(n)$. The implied constant is uniform in $y$ for $y \gg 1$. We have two main problems to overcome to obtain a bound on $E_t$. The first problem is to relate a pointwise bound on $E_t$ to an integral bound of the type occurring in \eqref{eq:IwaniecSarnakBound}. We are able to accomplish this by modifying a method of Heath-Brown \cite[Lemma 3]{HeathBrown}.
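As an aside, the generalized divisor function $\tau_w(n)=\sum_{ab=|n|}(a/b)^w$ appearing in the Fourier expansion is multiplicative, and $\tau_{it}(n)$ is real for real $t$. The following standalone sketch (an illustration with a spectral parameter of our choosing) checks both properties numerically:

```python
def tau_w(w, n):
    # tau_w(n) = sum over factorizations |n| = a * b of (a/b)^w
    n = abs(n)
    return sum((a / (n // a)) ** w for a in range(1, n + 1) if n % a == 0)

t = 2.5                      # an illustrative spectral parameter (our choice)
w = 1j * t
m, n = 4, 9                  # coprime arguments
lhs = tau_w(w, m * n)
rhs = tau_w(w, m) * tau_w(w, n)
print(lhs, rhs)              # multiplicative: the two values agree, and both are essentially real
```

Reality follows from pairing each factorization $(a,b)$ with $(b,a)$, whose contributions are complex conjugates; multiplicativity reduces to that of $\sigma_{2w}$.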
This shows, roughly, that $|E_T(z)|^2 \lessapprox \int_{T-1}^{T+1} |E_t(z)|^2 dt$ (see Corollary \ref{coro:EpointwiseIntegral} below for the true result). Normally one constructs an amplifier to be large at a specified point. Because of the above relationship between the integral of $|E_t|^2$ and the pointwise bound, we cannot simply choose $\alpha_n$ to be large at a single value of $t$. Rather, we need the amplifier to be large on an interval of $t$'s of length $\gg T^{-\varepsilon}$. This we accomplish with Lemma \ref{lemma:amplifier} below. \section{Preliminary estimates} For purposes of comparison it is helpful to record the effects of trivially bounding the Eisenstein series using the Fourier expansion. Let $F(z,s) = E(z,s) - y^s - \varphi(s) y^{1-s}$. \begin{mylemma} \label{lemma:Fbound} For $t \geq 1$, and $y \gg 1$, we have \begin{equation} \label{eq:FourierExpansionBound} F(z, 1/2 + it) \ll (t/y)^{1/2} \log^2 t + y^{1/2} t^{-1/3 + \varepsilon}, \end{equation} and therefore, \begin{equation} \label{eq:FourierExpansionBound2} E(z, 1/2 + it) \ll y^{1/2} + (t/y)^{1/2} \log^2 t. \end{equation} \end{mylemma} This is analogous to \cite[Proposition 6.2]{Templier2}. \begin{proof} Suppose that $t \geq 1$. By Stirling's formula, \begin{equation} F(z, 1/2 + it) \ll \frac{\sqrt{y}}{|\zeta(1+2it)|} \sum_{n=1}^{\infty} \tau_0(n) |K_{it}(2 \pi n y)| \cosh(\pi t/2). \end{equation} Next we need uniform bounds on the $K$-Bessel function which we extract from the uniform asymptotic expansions due to Balogh \cite{Balogh}: \begin{equation} \label{eq:KBesselBalogh} \cosh(\pi t/2) K_{it}(u) \ll \begin{cases} t^{-1/4} (t -u)^{-1/4}, \quad &\text{if } 0 < u < t - C t^{1/3} \\ t^{-1/3}, \quad &\text{if } |u-t| \leq C t^{1/3}, \\ u^{-1/4} (u -t)^{-1/4} \exp\Big(- c (\frac{u}{t})^{3/2} \big(\frac{u-t}{t^{1/3}}\big)^{3/2} \Big), \quad &\text{if } u > t + C t^{1/3}. \end{cases} \end{equation} We break up the sum over $n$ according to the different pieces. 
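The three regimes in \eqref{eq:KBesselBalogh} can be observed numerically. The following sketch (an illustration assuming the mpmath library, which evaluates Bessel functions of complex order) computes $\cosh(\pi t/2)K_{it}(u)$ for $t=10$ in the oscillatory, transition, and exponentially decaying ranges:

```python
import mpmath as mp

mp.mp.dps = 30                 # extra precision, since cosh(pi t / 2) is large
t = 10.0

def weighted_K(u):
    # cosh(pi t / 2) K_{it}(u), the combination bounded in the text
    return mp.cosh(mp.pi * t / 2) * mp.besselk(1j * t, u)

for u, regime in [(5.0, "oscillatory, u < t - C t^(1/3)"),
                  (10.0, "transition, |u - t| <= C t^(1/3)"),
                  (20.0, "exponential decay, u > t + C t^(1/3)")]:
    print(regime, mp.nstr(abs(weighted_K(u)), 6))
```

The value in the range $u > t + Ct^{1/3}$ is several orders of magnitude smaller than in the other two ranges, reflecting the exponential factor in the third case of \eqref{eq:KBesselBalogh}.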
For instance, the range $2 \pi n y \leq \frac{t}{2}$ gives a bound \begin{equation} \label{eq:FourierExpansionBulkBound} \frac{\sqrt{y}}{|\zeta(1+2it)|} \sum_{n \ll t/y} \frac{\tau_0(n)}{t^{1/2}} \ll (t/y)^{1/2} \log^2 t, \end{equation} using $|\zeta(1+2it)|^{-1} \ll \log t$. Similarly, the range $|2\pi ny - t| \asymp \Delta$ with $t^{1/3} \ll \Delta \ll t$ gives \begin{equation} \frac{\sqrt{y}}{|\zeta(1+2it)|} (t\Delta)^{-1/4} \sum_{n = \frac{t}{2 \pi y} + O(\frac{\Delta}{ y})} \tau_0(n) \ll \frac{\sqrt{y} \log t}{(t\Delta)^{1/4}} \Big(\frac{\Delta}{y} \log t + t^{\varepsilon} \Big), \end{equation} using Shiu's bound on the divisor function in a short interval \cite{Shiu}. These terms then give the same bound as \eqref{eq:FourierExpansionBulkBound}, plus an additional term that is \begin{equation} \label{eq:transitionbound} \ll y^{1/2} t^{-1/3 + \varepsilon}. \end{equation} It is easily checked that the transition range also leads to \eqref{eq:transitionbound}. The range with $u > t + C t^{1/3}$ is even easier to bound because there is an additional exponential factor aiding the estimation. In all, this shows \eqref{eq:FourierExpansionBound}. The bound \eqref{eq:FourierExpansionBound2} is immediate from \eqref{eq:FourierExpansionBound}. \end{proof} We also record some bounds on $F(z,s)$ and $E(z,s)$ valid not just for $\text{Re}(s) = 1/2$. In principle, a similar approach to the proof of Lemma \ref{lemma:Fbound} should apply; however, the author has been unable to find sharp estimates analogous to \eqref{eq:KBesselBalogh} for $K_{\sigma+it}$ with $\sigma > 0$, and so we present some weaker bounds that suffice for our later application. \begin{mylemma} If $\text{Re}(s) > 1$, then \begin{equation} \label{eq:Fbound} F(z,s) \ll_{\sigma} y^{1-\sigma}, \end{equation} uniformly for $y \gg 1$, $t \in \ensuremath{\mathbb R}$.
If $\text{Re}(s) \geq 1/2 - \frac{c}{\log(2+|t|)}$, for a sufficiently small $c > 0$, then \begin{equation} \label{eq:FboundCriticalStrip} F(z,\sigma + it) \ll_{\sigma, \varepsilon} (1+|t|)^{1+\varepsilon} y^{-\sigma}. \end{equation} Moreover, if $y \gg t^{1+\varepsilon}$, and $t$ is sufficiently large, then \begin{equation} \label{eq:FboundBeyondT} F(z,\sigma + it) \ll_{\sigma, \varepsilon} (yt)^{-100}. \end{equation} \end{mylemma} \begin{proof} We begin with \eqref{eq:Fbound}. Using the original definition \eqref{eq:EisensteinDef}, we have \begin{equation} F(z,s) = - \varphi(s) y^{1-s} + \sum_{\substack{\gamma \in \Gamma_{\infty} \backslash \Gamma \\ \gamma \neq 1}} (\text{Im}(\gamma z))^s, \end{equation} and so by a trivial bound, \begin{equation} |F(z,s)| \leq |\varphi(s)| y^{1-\sigma} + \sum_{\substack{\gamma \in \Gamma_{\infty} \backslash \Gamma \\ \gamma \neq 1}} (\text{Im}(\gamma z))^\sigma = |\varphi(s)| y^{1-\sigma} + (E(z,\sigma) - y^{\sigma}). \end{equation} By the Fourier expansion, and using the standard bound \begin{equation} \label{eq:KBesselFixedOrderLargex} K_{\alpha}(x) \ll_{\alpha} x^{-1/2} e^{-x}, \end{equation} for $x \gg 1$, we have \begin{equation} \label{eq:Ezsigmabound} E(z,\sigma) - y^{\sigma} \ll_{\sigma} y^{1-\sigma} + \sum_{n =1}^{\infty} \tau_{\sigma-\ensuremath{ \frac{1}{2}}}(n) n^{-1/2} \exp(-2 \pi n y) \ll_{\sigma} y^{1-\sigma}, \end{equation} since the infinite sum in \eqref{eq:Ezsigmabound} is $\ll_{\sigma} \exp(-2\pi y)$. Furthermore, note that for $\sigma > 1$, we have \begin{equation} \varphi(\sigma + it) = \frac{\zeta^*(2-2\sigma - 2it)}{\zeta^*(2\sigma + 2it)} = \frac{\zeta^*(-1+2\sigma + 2it)}{\zeta^*(2\sigma + 2it)} \ll_{\sigma} \frac{|\Gamma(\sigma-\frac12+it)|}{|\Gamma(\sigma +it)|} \ll_{\sigma} (1 + |t|)^{-1/2}. \end{equation} Combining the above estimates, we derive \eqref{eq:Fbound}. Next we show \eqref{eq:FboundCriticalStrip} and \eqref{eq:FboundBeyondT}.
By the Fourier expansion \eqref{eq:Fourier}, we have \begin{equation} \label{eq:FabsolutevaluewithFourier} |F(z,s)| = \Big|\frac{2 y^{1/2}}{\pi^{-s} \Gamma(s) \zeta(2s)} \sum_{n \neq 0} \tau_{s-\ensuremath{ \frac{1}{2}}}(n) e(nx) K_{s-\frac12} (2 \pi |n| y) \Big|. \end{equation} We will momentarily show that if $t \gg 1$, then \begin{equation} \label{eq:KBesselBound} \frac{K_{\sigma + it}(y)}{\Gamma(\frac12 + \sigma + it)} \ll_{\sigma, \lambda} y^{-\sigma} (t/y)^{\lambda}, \end{equation} where $\lambda > 0$ may be chosen at will. If $t \ll 1$, then \eqref{eq:KBesselFixedOrderLargex} gives an even stronger bound than \eqref{eq:KBesselBound} (one should note that the implied constant in \eqref{eq:KBesselFixedOrderLargex} depends continuously on $\alpha$). Now we insert this bound into \eqref{eq:FabsolutevaluewithFourier}, giving \begin{equation} |F(z,\sigma + it)| \ll_{\sigma,\lambda} \frac{y^{1-\sigma-\lambda} t^{\lambda}}{|\zeta(2\sigma + 2it)|} \sum_{n=1}^{\infty} n^{\sigma-\frac12} \tau_0(n) n^{\frac12-\sigma-\lambda} \ll_{\sigma,\lambda} \frac{y^{1-\sigma} (t/y)^{\lambda}}{|\zeta(2\sigma + 2it)|}, \end{equation} where we henceforth assume $\lambda > 1$ to ensure convergence. If $y \gg t^{1+\varepsilon}$ we may take $\lambda$ very large to give \eqref{eq:FboundBeyondT}. In any case, we may take $\lambda = 1 + \varepsilon$ with $\varepsilon > 0$ small, giving \eqref{eq:FboundCriticalStrip}. Finally, we show \eqref{eq:KBesselBound}. By \cite[17.43.18]{GR}, we have \begin{equation} \frac{4 K_{\nu}(y)}{\Gamma(\frac12 + \nu)} = \frac{1}{2 \pi i} \int_{(\delta)} (y/2)^{-w} \frac{\Gamma(\frac{w+\nu}{2}) \Gamma(\frac{w-\nu}{2})}{\Gamma(\frac12 + \nu)} dw, \end{equation} where $\delta > |\text{Re}(\nu)|$.
By Stirling's formula, with $w = \delta + iv$ and $\nu = \sigma + it$, we have \begin{equation} \frac{\Gamma(\frac{w+\nu}{2}) \Gamma(\frac{w-\nu}{2})}{\Gamma(\frac12 + \nu)} \ll_{\sigma,\delta} t^{-\sigma} (1 + |v+t|)^{\frac{\delta + \sigma-1}{2}} (1 + |v-t|)^{\frac{\delta - \sigma-1}{2}} \exp(-\tfrac{\pi}{2} q(v,t)), \end{equation} where $q(v,t) = 0$ for $|v| \leq |t|$, and $q(v,t) = |v-t|$ for $|v| \geq |t|$. Therefore, \begin{equation} \frac{K_{\sigma+it}(y)}{\Gamma(\frac12 + \sigma + it)} \ll y^{-\delta} t^{-\sigma} \int_{-\infty}^{\infty} (1 + |v+t|)^{\frac{\delta + \sigma-1}{2}} (1 + |v-t|)^{\frac{\delta - \sigma-1}{2}} \exp(-\tfrac{\pi}{2} q(v,t)) dv. \end{equation} By symmetry, suppose $v,t \geq 0$. The part of the integral with $0 \leq v \leq t$ gives a bound \begin{equation} \label{eq:BesselBoundIntegralPart} y^{-\delta} t^{-\sigma + \frac{\delta + \sigma-1}{2}} \int_0^{t} (1 + v)^{\frac{\delta - \sigma-1}{2}} dv \ll y^{-\delta} t^{-\sigma + \frac{\delta + \sigma-1}{2} + \frac{\delta-\sigma+1}{2} } = y^{-\delta} t^{\delta -\sigma}. \end{equation} We shall choose $\delta = \sigma + \lambda$ with $\lambda > 0$ arbitrary, which gives a bound consistent with \eqref{eq:KBesselBound}. The part of the integral with $v \geq t$ is easier to bound, aided by the exponential decay that is present in this range. Firstly, one may note that the range with $t \leq v \leq 2t$ gives precisely the same bound as \eqref{eq:BesselBoundIntegralPart}, even if we only use the very crude bound $q(v,t) \geq 0$. For the range with $v \geq 2t$, we have the bound \begin{equation} y^{-\delta} t^{-\sigma} \int_{v \geq 2t} v^{\delta-1} \exp(-\tfrac{\pi}{2}(v-t)) dv \ll_{\delta} y^{-\delta} \exp(-\pi t/2). \qedhere \end{equation} \end{proof} \section{A pointwise bound via an integral bound} \begin{mylemma} \label{lemma:FpointwiseIntegral} Suppose $y, T \gg 1$ and $y \ll T^{100}$.
Then \begin{equation} \label{eq:FpointwiseIntegral} |F(z,1/2 + iT)|^2 \ll \frac{\log^5 T}{y} + \log^5 T \int_{|r| \leq 4 \log T} |F(z, 1/2 + iT + ir)|^2 dr. \end{equation} \end{mylemma} The proof is analogous to that of Heath-Brown \cite{HeathBrown}, but some modifications are necessary since the Eisenstein series is not a Dirichlet series, and $z$ is an additional parameter to control. The key idea is to work with $F(z,s)$, and to deduce analogous results for $E(z,s)$ only at the end (see Corollary \ref{coro:EpointwiseIntegral} below). The logarithmic powers could be reduced with a little more work, if it were necessary. \begin{proof} Note that $F(z,s)$ satisfies the functional equation $\zeta^*(2s) F(z,s) = \zeta^*(2(1-s)) F(z,1-s)$, just as the Eisenstein series does. Furthermore, $\zeta^*(2s)F(z,s)$ is entire, and so $F(z,s)$ is analytic for $\text{Re}(s) \geq 1/2$. Suppose that $\text{Re}(s) \in [1/4, 3/4]$, and let \begin{equation} I = \frac{1}{2 \pi i} \int_{(2)} F(z, s+w)^2 \frac{\exp(w^2)}{w} dw. \end{equation} By \eqref{eq:Fbound}, we have that $I = O(y^{-1})$, uniformly in the stated range of $s$, and for $y \gg 1$. Suppose that $0 < \delta < 1/8$, and $s = 1/2 + \delta + iT$. Then by shifting the contour of integration to $\text{Re}(w) = - \delta$, we obtain \begin{equation} F(z, 1/2 + \delta + iT)^2 + \frac{1}{2 \pi i} \int_{(-\delta)} F(z, 1/2 + \delta + iT + w)^2 \frac{\exp(w^2)}{w} dw = I = O(y^{-1}). \end{equation} By bounding the integral trivially with absolute values, we have \begin{equation} \label{eq:Fshiftedbound} |F(z, 1/2 + \delta + iT)|^2 \ll y^{-1} + \int_{-\infty}^{\infty} \frac{\exp(-v^2)}{|-\delta+iv|} |F(z, 1/2 + iT + iv)|^2 dv. \end{equation} Alternatively, we have by Cauchy's theorem that \begin{equation} F(z, 1/2 + iT)^2 = \frac{1}{2 \pi i} \ointctrclockwise F(z, 1/2 + iT + u)^2 \frac{\exp(u^2)}{u} du, \end{equation} where the contour of integration is a small loop around $u=0$. 
We choose the contour to be a rectangle with corners $\pm \delta \pm 2i\log T$, where $\delta = \frac{c}{\log T}$ with $c > 0$ small enough to ensure that $\zeta(1 + 2iT + 2u) \gg (\log T)^{-1}$ inside the contour (by the standard zero-free region of $\zeta$). Using \eqref{eq:FboundCriticalStrip}, it follows that the top and bottom sides of this integral are bounded by \begin{equation} \label{eq:topandbottom} y^{-1} \exp(- \log^2 T), \end{equation} which additionally uses the fact that $y^{-1/2 - \text{Re}(u)} \asymp y^{-1/2}$ since $\log y \ll \log T$. For the side of the rectangle with $\text{Re}(u) = -\delta$, we change variables $u \rightarrow - u$ and apply the functional equation of $F$, which gives \begin{equation} F(z, 1/2 -\delta + iT - iv) = \frac{\zeta^*(1+2\delta - 2iT + 2iv)}{\zeta^*(1-2\delta + 2iT - 2iv)} F(z, 1/2 + \delta -iT + iv). \end{equation} Using standard bounds on the zeta function, we derive \begin{equation} \frac{\zeta^*(1+2\delta - 2iT + 2iv)}{\zeta^*(1-2\delta + 2iT - 2iv)} \ll \log^2 T. \end{equation} Therefore, we conclude \begin{equation} \label{eq:Fshiftedbound2} |F(z, 1/2 + iT)|^2 \ll y^{-1} \exp(- \log^2 T) + (\log T)^4 \int_{|v| \leq 2\log T} \frac{\exp(-v^2)}{|\delta + iv|} |F(z, 1/2 + \delta + iT + iv)|^2 dv. \end{equation} Inserting \eqref{eq:Fshiftedbound} into \eqref{eq:Fshiftedbound2}, we derive \begin{multline} |F(z, 1/2 + iT)|^2 \ll y^{-1} \exp(- \log^2 T) \\ + (\log T)^4 \int_{|v| \leq 2\log T} \frac{\exp(-v^2)}{|\delta + iv|} \Big[y^{-1} + \int_{-\infty}^{\infty} \frac{\exp(-r^2)}{|-\delta+ir|} |F(z, 1/2 + iT + iv+ ir)|^2 dr \Big] dv. \end{multline} The inner $r$-integral can be safely truncated at $|r| \leq 2\log T$ without introducing a new error term. 
Changing variables $r \rightarrow r- v$, and extending the $r$-integral to $|r| \leq 4 \log T$ by positivity, we derive \begin{multline} |F(z, 1/2 + iT)|^2 \ll y^{-1} \log^5 T \\ + (\log T)^4 \int_{|r| \leq 4 \log T} |F(z, 1/2 + iT + ir)|^2 \Big(\int_{|v| \leq 2\log T} \frac{\exp(-v^2)}{|\delta + iv|} \frac{\exp(-(r-v)^2)}{|-\delta+i(r-v)|} dv \Big) dr. \end{multline} Using Cauchy-Schwarz, and the obvious estimate \begin{equation} \int_{-\infty}^{\infty} \frac{1}{|\delta + iv|^2} dv = \frac{\pi}{\delta} \ll \log T, \end{equation} we may bound the inner $v$-integral by $O(\log T)$. Putting everything together gives the desired bound \eqref{eq:FpointwiseIntegral}. \end{proof} Using $E(z,s) = y^s + \varphi(s) y^{1-s} + F(z,s)$, and Cauchy-Schwarz, we derive a corresponding result for the Eisenstein series itself. \begin{mycoro} \label{coro:EpointwiseIntegral} Suppose $y, T \gg 1$. Then \begin{equation} \label{eq:EpointwiseIntegral} |E(z,1/2 + iT)|^2 \ll y \log^6 T + \log^5 T \int_{|r| \leq 4 \log T} |E(z, 1/2 + iT + ir)|^2 dr. \end{equation} \end{mycoro} \section{A lower bound for the amplifier} Let $w$ be a fixed, compactly-supported function on the positive reals, with $\int_{-\infty}^{\infty} w(t) dt \neq 0$. Define \begin{equation} A_N(t,r) = \sum_{n=1}^{\infty} w(n/N) \tau_{it}(n) \tau_{ir}(n). \end{equation} \begin{mylemma} \label{lemma:amplifier} Suppose that $\log N \gg (\log T)^{2/3 + \delta}$, and $t,r = T + O( (\log N)^{-1-\delta})$, for some fixed $\delta > 0$. Then \begin{equation} A_N(t,r)= \frac{\widetilde{w}(1) N \log N }{\zeta(2) |\zeta(1+2iT)|^2} (1 + o(1)). \end{equation} \end{mylemma} Fouvry, Kowalski, and Michel \cite[Lemma 2.4]{FKM} prove a result with a similar conclusion, but their method requires $N \gg T^{3}$, while here we eventually will want $N = T^{1/4}$. 
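Before giving the proof, we note that the amplifier can be evaluated numerically straight from its definition. The sketch below is an illustration only; the weight $w$ (a bump supported on $(1,2)$) and all parameters are toy choices, not those of the argument. Since $\tau_{it}(n)$ pairs each factorization $(a,b)$ with $(b,a)$, it is real-valued, which the code exploits by summing cosines; on the diagonal $r = t$ every term is $w(n/N)\tau_{it}(n)^2 \geq 0$, so $A_N(t,t)$ is visibly positive.

```python
import math

def tau_it(t, n):
    # tau_{it}(n) = sum over ab = n of (a/b)^{it}; pairing (a, b) with (b, a)
    # shows the sum is real, so we sum cosines directly.
    return sum(math.cos(t * math.log(a / (n // a)))
               for a in range(1, n + 1) if n % a == 0)

def w(x):
    # A smooth bump supported on (1, 2) -- our toy choice of test function.
    return math.exp(-1.0 / ((x - 1.0) * (2.0 - x))) if 1.0 < x < 2.0 else 0.0

def A(N, t, r):
    # Direct evaluation of A_N(t, r) = sum_n w(n/N) tau_{it}(n) tau_{ir}(n).
    return sum(w(n / N) * tau_it(t, n) * tau_it(r, n) for n in range(1, 2 * N + 1))

# tau_0(n) counts divisors, and tau_{it} is multiplicative on coprime arguments.
print(tau_it(0.0, 12))  # -> 6.0
# On the diagonal every term is w(n/N) * tau_{it}(n)^2 >= 0, so A_N(t, t) > 0.
print(A(50, 3.0, 3.0) > 0)  # -> True
```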
\begin{proof} Taking a Mellin transform, and using a well-known identity of Ramanujan \cite[(15)]{Ramanujan}, we derive \begin{equation} A_N(t,r) = \frac{1}{2 \pi i} \int_{(2)} N^s \widetilde{w}(s) \zeta^{-1}(2s) \zeta(s+it+ir) \zeta(s+it-ir) \zeta(s-it+ir) \zeta(s-it-ir) ds. \end{equation} Next we move the contour to the left, to one along the straight line segments $L_1, L_2, L_3$ defined by $L_1 = \{1 - \frac{c}{(\log T)^{2/3}} + it: |t| \leq 100 T \}$, $L_2 = \{ it : |t| \geq 100 T \}$, and the short horizontal segments $L_3 = \{ \sigma \pm 100 iT: 1-\frac{c}{(\log T)^{2/3}} \leq \sigma \leq 1 \}$. The integrals along the line segments $L_2$ and $L_3$ are trivially bounded by $O(T^{-100})$ by the rapid decay of $\widetilde{w}$. The new line $L_1$ gives an amount that is \begin{equation} \label{eq:amplifierErrorTerm} \ll N \log^{8/3} T \exp\Big(- c \frac{\log N}{(\log T)^{2/3}}\Big) \ll \frac{N}{(\log T)^{100}}, \end{equation} using the Vinogradov-Korobov bound $|\zeta(\sigma +i t)| \ll (\log |t|)^{2/3}$ for $t \gg 1$ and $1-\frac{c}{(\log |t|)^{2/3}} \leq \sigma \leq 1$ (see \cite[Corollary 8.28]{IK}). Here what is important is that $\sigma$ can be taken to be $1 - O((\log |t|)^{-1+\delta})$, for some small $\delta > 0$. We need to analyze the residues of the poles. Temporarily assume that $t \neq \pm r$. The poles at $s = 1 + it + ir$ and $s = 1-it -ir$ have very small residues from the rapid decay of $\widetilde{w}(s)$. The residue at $s= 1 +ir-it$ contributes \begin{equation} R_1 = \frac{N^{1+ir-it} \widetilde{w}(1+ir-it) }{\zeta(2(1+ir-it))} \zeta(1+2ir) \zeta(1+2ir-2it) \zeta(1-2it). \end{equation} By symmetry the residue at $s=1-ir + it$, say $R_2$, is the same as $R_1$ but with $r$ and $t$ interchanged. Let us write $r = t + \eta$ (by assumption, $\eta = O((\log N)^{-1-\delta})$), so \begin{equation} R_1 = \frac{N^{1+i\eta} \widetilde{w}(1+i\eta) }{\zeta(2(1+i\eta))} \zeta(1+2i\eta) \zeta(1+2it + 2i \eta) \zeta(1-2it).
\end{equation} By simple Taylor approximations, we have \begin{equation} \label{eq:R1simplepart} \frac{N^{1+i\eta} \widetilde{w}(1+i\eta) }{\zeta(2(1+i\eta))} \zeta(1+2i\eta) =\frac{\widetilde{w}(1) N \log N }{2 \zeta(2)} \Big( \frac{1}{i \eta \log N} + 1 + O(|\eta| \log N) \Big), \end{equation} and by the Vinogradov-Korobov bound $\frac{\zeta'}{\zeta}(1 + 2it) \ll (\log |t|)^{2/3 + \varepsilon}$ (see \cite[Theorem 8.29]{IK}), we have \begin{equation} \label{eq:R1zetapart} \zeta(1+2it + 2i \eta) \zeta(1-2it) = |\zeta(1+2it)|^2 (1 + O(|\eta| (\log T)^{2/3 + \varepsilon})). \end{equation} Combining \eqref{eq:R1simplepart} and \eqref{eq:R1zetapart}, we derive \begin{equation} \label{eq:R1estimate} R_1 = \frac{N\widetilde{w}(1) |\zeta(1+2it)|^2 \log N}{2\zeta(2)} \Big[ 1+ \frac{1}{i \eta \log N} + O\Big(|\eta| \log N + \frac{(\log T)^{2/3+\varepsilon}}{\log N} + |\eta| (\log T)^{2/3+\varepsilon}\Big) \Big]. \end{equation} The conditions $\eta \ll (\log N)^{-1-\delta}$, $(\log N) \gg (\log T)^{2/3+\delta}$ are enough to imply that the error term in \eqref{eq:R1estimate} is $o(1)$. Similarly, \begin{equation} R_2 = \frac{N\widetilde{w}(1) |\zeta(1+2it)|^2 \log N}{2\zeta(2)} \Big[1 + \frac{1}{-i \eta \log N} + o(1) \Big], \end{equation} and therefore, \begin{equation} R_1 + R_2 = \frac{N\widetilde{w}(1) |\zeta(1+2it)|^2 \log N }{\zeta(2)} (1 + o(1) ). \qedhere \end{equation} {\bf Remark.} If we had moved the contour to the line $\sigma = 1/2$, then instead of \eqref{eq:amplifierErrorTerm} we would have obtained an error term of size $O(N^{1/2} T^{1/3+\varepsilon})$ using Weyl's subconvexity bound. This is $o(N)$ for $N \gg T^{2/3+\varepsilon}$, which is far from our desired choice of $N = T^{1/4}$. \end{proof} \section{Completion of the proof} By Corollary \ref{coro:EpointwiseIntegral}, we have that \begin{equation} |E(z, 1/2 + iT)|^2 \ll y \log^6 T + \log^5 T \int_{|r| \leq 4 \log T} |E(z, 1/2 + iT + ir)|^2 dr. 
\end{equation} On the right hand side above, we dissect the integral into subintervals, each of length $\asymp (\log T)^{-2}$, say. Let $U$ be one of these intervals, and choose a point $t_U \in U$. Then by Lemma \ref{lemma:amplifier}, we have \begin{equation} \int_{r \in U} |E(z, 1/2 + iT + ir)|^2 dr \ll N^{-2} T^{\varepsilon} \int_{r \in U} |A_N(T+r, T+t_U)|^2 |E(z, 1/2 + iT+ir)|^2 dr. \end{equation} By \eqref{eq:IwaniecSarnakBound}, with $\alpha_n = w(n/N) \tau_{i(T+t_U)}(n)$, this is in turn \begin{equation} \ll (NT)^{\varepsilon} \Big(\frac{T}{N} + T^{1/2} (N + N^{1/2} y) \Big). \end{equation} If $1 \ll y \ll T^{1/8}$, we choose $N$ as $T^{1/4}$, which in all gives \begin{equation} |E(z, 1/2 + iT)| \ll T^{3/8 + \varepsilon}. \end{equation} If $T^{1/8} \ll y \ll T^{1/6}$, we set $N = y^{-2/3} T^{1/3}$, giving \begin{equation} \label{eq:mediumT} |E(z, 1/2 + iT)| \ll y^{1/3} T^{1/3 + \varepsilon}. \end{equation} The bound \eqref{eq:FourierExpansionBound2} is superior to \eqref{eq:mediumT} for $y \gg T^{1/6}$.
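The choices of $N$ above come from balancing the terms $T/N$ and $T^{1/2}(N + N^{1/2}y)$ inside the parentheses. As a quick illustrative check (purely numerical, with absolute constants and $T^{\varepsilon}$ factors dropped; not part of the proof), the exponent $1/4$ is near-optimal when $y$ is bounded:

```python
def amplified_bound(N, T, y):
    # The two competing terms of the amplified second-moment estimate,
    # with T^eps factors and absolute constants dropped.
    return T / N + T ** 0.5 * (N + N ** 0.5 * y)

T, y = 1e8, 1.0
best = T ** 0.25  # the paper's choice N = T^{1/4} in the small-y regime
for exponent in (0.15, 0.2, 0.3, 0.35):
    # N = T^{1/4} beats nearby exponents on both sides.
    assert amplified_bound(best, T, y) <= amplified_bound(T ** exponent, T, y)
print("N = T^(1/4) minimizes the bound among the test exponents")
```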
\section{Introduction} We initiated the ultimate intelligence research program in 2014, inspired by Seth Lloyd's similarly titled article on the ultimate physical limits to computation \cite{Lloyd:Ultimate}; the program is intended as a book-length treatment of the theory of general-purpose AI. In a similar spirit to Lloyd's research, we investigate the ultimate physical limits and conditions of intelligence. A main motivation is to extend the theory of intelligence using physical units, emphasizing the physicalism inherent in computer science. This is the second installment in the paper series. The first part \cite{ozkural-agi15} proposed that universal induction theory is physically complete, arguing that the algorithmic entropy of a physical stochastic source is always finite, and that if we choose the laws of physics as the reference machine, the loophole in algorithmic information theory (AIT) of choosing a reference machine is closed. We also introduced several new physically meaningful complexity measures adequate for reasoning about intelligent machinery, using the concepts of minimum volume, energy and action, which are applicable to both classical and quantum computers. Probably the most important of the new measures was the minimum energy required to physically transmit a message. The minimum energy complexity also naturally leads to an energy prior, complementing the speed prior \cite{Schmidhuber2002}, which inspired our work on incorporating physical resource limits into inductive inference theory. In this part, we generalize logical depth and conceptual jump size to stochastic sources and consider the influence of volume, space and energy. We consider the energy efficiency of computing as an important parameter for an intelligent system, forgoing other details of a universal induction approximation. We thus relate the ultimate limits of intelligence to physical limits of computation.
\section{Notation and Background} Let us recall Solomonoff's universal distribution \cite{alp1}. Let $U$ be a universal computer which runs programs with a prefix-free encoding like LISP; $y=U(x)$ denotes that the output of program $x$ on $U$ is $y$, where $x$ and $y$ are bit strings. \footnote{A prefix-free code is a set of codes in which no code is a prefix of another. A computer file uses a prefix-free code, ending with an EOF symbol; thus, most reasonable programming languages are prefix-free. } Any unspecified variable or function is assumed to be represented as a bit string. $|x|$ denotes the length of a bit string $x$. $f(\cdot)$ refers to function $f$ rather than its application. The algorithmic probability that a bit string $x \in \{0,1\}^+$ is generated by a random program $\pi \in \{0,1\}^+$ of $U$ is: \begin{equation} \label{eq:alp} P_U(x) = \sum_{U(\pi) \in x(0+1)^* \wedge \pi \in \{0,1\}^+} 2^{-|\pi|} \end{equation} which conforms to Kolmogorov's axioms \cite{levin-thesis}. $P_U(x)$ considers any continuation of $x$, taking into account non-terminating programs.\footnote{We use the regular-expression notation of language theory.} $P_U$ is also called the universal prior, for it may be used as the prior in Bayesian inference, since any data can be encoded as a bit string. We also give the basic definition of Algorithmic Information Theory (AIT), where the algorithmic entropy, or complexity, of a bit string $x \in \{0,1\}^+$ is \begin{equation} \label{eq:algo-entropy} H_U(x) = \min( \{ |\pi| \ | \ U(\pi)=x \} ) \end{equation} We shall now briefly recall the well-known Solomonoff induction method \cite{alp1,alp2}. The universal sequence induction method of Solomonoff works on bit strings $x$ drawn from a stochastic source $\mu$. \prettyref{eq:alp} is a semi-measure, but that is easily overcome, as we can normalize it.
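The prefix-free requirement is what makes the weights $2^{-|\pi|}$ summable: by Kraft's inequality, any prefix-free code satisfies $\sum_\pi 2^{-|\pi|} \leq 1$, so \prettyref{eq:alp} is at most $1$. A toy check (the code set below is our own example, not tied to any particular $U$):

```python
# Kraft's inequality for a prefix-free code: the weights 2^{-|pi|} sum to at
# most 1, which is why eq. (1) defines a (semi-)measure. Toy code set below.
codes = ["0", "10", "110", "111"]

def is_prefix_free(cs):
    # No code may be a proper prefix of another.
    return not any(a != b and b.startswith(a) for a in cs for b in cs)

assert is_prefix_free(codes)
kraft_sum = sum(2.0 ** -len(c) for c in codes)
print(kraft_sum)  # -> 1.0 (this particular code is complete)
```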
We merely normalize sequence probabilities \begin{alignat}{3} \label{eq:normalization} P'_U(x0)=\frac{P_U(x0).P'_U(x)}{P_U(x0)+P_U(x1)} && \quad P'_U(x1)=\frac{P_U(x1).P'_U(x)}{P_U(x0)+P_U(x1)} \end{alignat} eliminating irrelevant programs and ensuring that the probabilities sum to $1$, from which point on $P'_U(x0|x) = P'_U(x0)/P'_U(x)$ yields an accurate prediction. The error bound for this method is the best known for any such induction method. The total expected squared error between $P'_U(x)$ and $\mu$ is \begin{equation} \label{eq:convergence} E_P \left[ \sum_{m=1}^n{(P'_U({a_{m+1}=1}|a_1a_2...a_m) - \mu({a_{m+1}=1}|a_1a_2...a_m))^2} \right] \leq - \frac{1}{2} \ln{P_U(\mu)} \end{equation} which is less than $-1/2\ln{P'_U}(\mu)$ according to the convergence theorem proven in \cite{solcomplexity}, and it is roughly $H_U(\mu)\ln2$ \cite{solomonoff-threekinds}. Naturally, this method can only work if the algorithmic complexity of the stochastic source $H_U(\mu)$ is finite, i.e., the source has a computable probability distribution. The convergence theorem is quite significant, because it shows that Solomonoff induction has the best generalization performance among all prediction methods. In particular, the total error is expected to be a constant independent of the input, and the error rate will thus rapidly decrease with increasing input size. Operator induction is a general form of supervised machine learning where we learn a stochastic map from question and answer pairs $q_i, a_i$ sampled from a (computable) stochastic source $\mu$. Operator induction can be solved by finding, in the available time, a set of operator models $O^j(\cdot|\cdot)$ such that the following goodness of fit is maximized \begin{equation} \label{eq:opind-gof} \Psi = \sum_j{\psi^j_n} \end{equation} for a stochastic source $\mu$, where each term in the summation is \begin{equation} \label{eq:opind-gof-term} \psi^j_n= 2^{-|O^j(\cdot|\cdot)|}\prod_{i=1}^n{O^j(a_i|q_i)}.
\end{equation} $q_i$ and $a_i$ are question/answer pairs in the input dataset, and $O^j$ is a computable conditional pdf (cpdf) in \prettyref{eq:opind-gof-term}. We can use the found operators to predict unseen data \cite{solomonoff-threekinds} \begin{equation} \label{eq:opind-pred} P_U(a_{n+1}|q_{n+1}) = \sum_{j=1}^n\psi^j_nO^j(a_{n+1}|q_{n+1}) \end{equation} The goodness of fit in this case strikes a balance between high a priori probability and reproduction of data, as in the minimum message length (MML) method, yet uses a universal mixture as in sequence induction. The convergence theorem for operator induction was proven in \cite{solomonoff-progress} using Hutter's extension to an arbitrary alphabet. Operator induction infers a generalized conditional probability density function (cpdf), and Solomonoff argues that it can be used to teach a computer anything. For instance, we can train the question/answer system with physics questions and answers, and the system would then be able to answer a new physics question, depending upon how much has been taught in the examples; a future user could ask the system to describe a physics theory that unifies quantum mechanics and general relativity, given the solutions of every mathematics and physics problem ever solved in the literature. Solomonoff's original training sequence plan proposed to instruct the system first with an English subset and basic algebra, and then venture into more complex subjects. The generality of operator induction is partly due to the fact that it can be used to learn any kind of association, i.e., it models an ideal content-addressable memory; but it also implicitly generalizes any kind of law therein, which is why it can learn an implicit principle (such as that of syntax) from linguistic input, enabling the system to acquire language; it can also model complex translation problems, and all manner of problems that require additional reasoning (computation).
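The scheme of \prettyref{eq:opind-gof-term} and \prettyref{eq:opind-pred} can be illustrated with a toy computation. The models, their description lengths, and the data below are invented for illustration, and we normalize the mixture so the prediction is a probability; real operator induction would instead search a program space:

```python
# Toy operator induction: two hand-made cpdfs over answers {0, 1} with assumed
# description lengths (in bits), scored on invented (question, answer) pairs.
data = [(0, 1), (1, 0), (0, 1)]  # (q_i, a_i) pairs drawn from a "NOT"-like source

models = [
    (5, lambda a, q: 0.9 if a == 1 - q else 0.1),  # noisy NOT model, 5 bits
    (2, lambda a, q: 0.5),                          # uniform model, 2 bits
]

def psi(length, O):
    # psi^j_n = 2^{-|O^j|} * prod_i O^j(a_i | q_i)
    like = 1.0
    for q, a in data:
        like *= O(a, q)
    return 2.0 ** -length * like

def predict(a, q):
    # Mixture prediction, normalized here so that it sums to 1 over answers.
    total = sum(psi(L, O) for L, O in models)
    return sum(psi(L, O) * O(a, q) for L, O in models) / total

# Despite its longer description, the NOT model fits the data well enough to
# dominate the mixture, so the predicted answer to q = 0 leans toward a = 1.
print(predict(1, 0) > 0.5)  # -> True
```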
In other words, it is a universal problem solver model. It is also the most general of the three kinds of induction -- sequence, set, and operator induction -- and the closest to the machine learning literature. The popular applications of speech and image recognition are covered by the operator induction model, as is the wealth of pattern recognition applications, such as describing a scene in English. We therefore think that operator induction is an AI-complete problem -- as hard as solving the human-level AI problem in general. It is with this in mind that we analyze the asymptotic behavior of an optimal solution to the operator induction problem. \section{Physical Limits to Universal Induction} In this section, we elucidate the physical resource limits in the context of a hypothetical optimal solution to operator induction. We first extend Bennett's logical depth and conceptual jump size to the case of operator induction, and show a new relation between the expected simulation time of the universal mixture and conceptual jump size. We then introduce a new graphical model of computational complexity, which we use to derive the relations among physical resource bounds. We introduce a new definition of physical computation, termed self-contained computation, which is a physical counterpart of a self-delimiting program. The discovery of these basic bounds and relations, exact and asymptotic, gives meaning to the complexity definitions of Part I. Please note that Schmidhuber disagrees with the model of the stochastic source as a computable pdf \cite{Schmidhuber2002}, but Part I contained a strong argument that this is indeed the case. A stochastic source cannot have a pdf that is computable only in the limit; if that were the case, it could have a random pdf, which would have infinite algorithmic information content, and that is clearly contradicted by the main conclusion of Part I.
A stochastic source cannot be semi-computable, because it would eventually run out of energy and hence the ability to generate further quantum entropy; this applies especially to the self-contained computations of this section, which is, in any case, why we introduced the notion of self-contained computation. Note also that Schmidhuber agrees that quantum entropy does not accumulate to make the world incompressible in general; we therefore consider his proposal that we should view a cpdf as computable in the limit as too weak an assumption. As in Part I, the analysis of this section is extensible to quantum computers, which is beyond the scope of the present article. \subsection{Logical depth and conceptual jump size} Conceptual Jump Size (CJS) is the time required by an incremental inductive inference system to learn a new concept, and it increases exponentially with the algorithmic information content of the concept to be learned relative to the concepts already known \cite{solomonoff-incremental}. The physical limits to OOPS based on Conceptual Jump Size were examined in \cite{oops}. Here, we give a more detailed treatment. Let $\pi^*$ be the computable cpdf that exactly simulates $\mu$ with respect to $U$, for operator induction. \begin{equation} \label{eq:minimal} \pi^* = \argmin_{\pi_j}(\{ |\pi_j| \ | \ \forall x,y \in \{0,1\}^*: U(\pi_j,x,y)=\mu(x | y) \}) \end{equation} The conceptual jump size of inductive inference ($\mathrm{CJS}$) can be defined with respect to the optimal solution program: \begin{equation} \label{eq:cjs} \mathrm{CJS}(\mu) = \frac{t(\pi^*)}{P_U(\pi^*)}, \end{equation} where $t(\cdot)$ is the running time of a program on $U$; Levin search \cite{sol-perfect} solves the induction problem in at most $2.\mathrm{CJS}(\mu)$ time.
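The CJS quantity reflects how Levin search allocates time: in phase $k$, a candidate program $\pi$ receives on the order of $2^k 2^{-|\pi|}$ steps, so the search halts once $2^k$ exceeds roughly $t(\pi^*)/P(\pi^*)$, and the total work stays within a constant factor of $\mathrm{CJS}(\mu)$. A minimal sketch over a toy program space (the ``machine'' below is invented for illustration):

```python
def levin_search(programs, run, target, max_phase=30):
    # In phase k, each program pi of length |pi| bits gets about 2^k * 2^{-|pi|}
    # steps; programs: list of (name, length_bits); run(name, steps) -> output or None.
    for k in range(max_phase):
        for name, length in programs:
            budget = int(2 ** k * 2 ** -length)
            if budget >= 1 and run(name, budget) == target:
                return name, k
    return None

# Toy "machine": program n halts after n steps and outputs n; its description
# length is taken to be the bit length of n (an arbitrary toy convention).
progs = [(n, n.bit_length()) for n in range(1, 20)]
run = lambda n, steps: n if steps >= n else None

# Program 7 (3 bits) needs 7 steps, so it is found in the phase where
# 2^k * 2^{-3} first reaches 7, i.e. k = 6.
print(levin_search(progs, run, target=7))  # -> (7, 6)
```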
Noting that \begin{align} \label{eq:time} H_U(\pi^*) &= -\log_2{P_U(\pi^*)} = -\log_2{P_U(\mu)}\\ \label{eq:time2} t(\mu) &\leq t(\pi^*) 2^{H_U(\mu)+1} \end{align} where $t(\mu)$ is the time for solving an induction problem from source $\mu$ with sufficient input complexity ($\gg H_U(\mu)$), we observe that the asymptotic complexity is \begin{align} \label{eq:time3} t(\mu) = O(2^{H_U(\mu)}) \end{align} for fixed $t(\pi^*)$. Note that $t(\pi^*)$ corresponds to the \emph{stochastic} extension of Bennett's logical depth \cite{bennett88logical}, which was defined as ``the running time of the minimal program that computes $x$''. Let us recall that the minimal program is essentially unique, a polytope in program space \cite{chaitin-ait}. \begin{definition} Stochastic logical depth is the running time of the minimal program that accurately simulates a stochastic source $\mu$: \begin{equation} \label{eq:logicaldepth} L_U(\mu) = t(\pi^*) \end{equation} \end{definition} which, with \prettyref{eq:time2}, entails our first bound. \begin{lemma} \begin{align} \label{eq:timebound} t(\mu) &\leq L_U(\mu). 2^{H_U(\mu)+1} \end{align} \end{lemma} \begin{lemma} $\mathrm{CJS}$ is related to the \emph{expectation} of the simulation time of the universal mixture: \begin{equation} \label{eq:cjsexpectation} \mathrm{CJS}(\mu) \leq \sum_{U(\pi) \in x(0+1)^*} t(\pi).2^{-|\pi|} = E_{P_U}[\{ t(\pi)\ | \ U(\pi) \in x(0+1)^* \}] \end{equation} where $x$ is the input data to sequence induction, without loss of generality. \end{lemma} \begin{proof} Rewrite as $ t(\pi^*) 2^{-|\pi^*|} \leq \sum_{U(\pi) \in x(0+1)^*} t(\pi).2^{-|\pi|} $. Observe that the left-hand side of the inequality is merely a term in the summation on the right. \end{proof} \subsection{A Graphical Analysis of Intelligent Computation} Let us introduce a graphical model of computational complexity that will help us visualize the physical complexity relations to be investigated.
We do not model the computation itself; we just enumerate the physical resources required. The present treatment covers only classical computation over sequential circuits. \begin{definition} \label{def:lattice} Let the computation be represented by a directed bi-partite graph $G=(V,E)$ where vertices are partitioned into $V_O$ and $V_M$, which correspond to primitive operations and memory cells respectively, $V=V_O \cup V_M, V_O \cap V_M = \emptyset$. Function $t: V \cup E \rightarrow \mathbb{Z}$ assigns time to vertices and edges. \footnote{Time as discrete timestamps, as opposed to duration.} Edges correspond to causal dependencies. $I \subset V$ and $O \subset V$ correspond to input and output vertices interacting with the rest of the world. We denote access to vertex subsets with functions over $G$, e.g., $I(G)$. \end{definition} \prettyref{def:lattice} is a low-level computational complexity model where, for the sake of simplicity, the physical resources consumed by any operation, memory cell, and edge are the same. Let $v_u$ be the unit space-time volume, $e_u$ be the unit energy, and $s_u$ be the unit space. \begin{definition} \label{def:volume} Let the volume of computation be defined as $V_U(\pi)$, which measures the space-time volume of computation of $\pi$ on $U$ in physical units, i.e., $m^3.sec$. \end{definition} For \prettyref{def:lattice}, it is $(|V(G)|+|E(G)|).v_u$. The volume of computation measures the extent of the space-time region occupied by the dynamical evolution of the computation of $\pi$ on $U$. We do not consider the theory of relativity. For instance, the space of a Turing Machine is its Instantaneous Description (ID), and its time corresponds to $\mathbb{Z}^+$.
A Turing Machine derivation that has an ID of length $i$ at time $i$ and takes $t$ steps to complete would have a volume of $t.(t+1)/2$.\footnote{If the derivation is $A \rightarrow AA \rightarrow AAA$, it has $1+2+3 = 6$ volume.} \begin{definition} \label{def:energy} Let the energy of computation be defined as $E_U(\pi)$, which measures the total energy required by the computation of $\pi$ on $U$ in physical units, e.g., $J$. \end{definition} For \prettyref{def:lattice}, it is $E_U(\pi) = (|V(G)|+|E(G)|).e_u$. \begin{definition} \label{eq:space} Let the space of computation be defined as $S_U(\pi)$, which measures the maximum volume of a synchronous slice of the space-time of the computation of $\pi$ on $U$ in physical units, e.g., $m^3$. \end{definition} For \prettyref{def:lattice}, it is \begin{equation} \max_{i \in \mathbb{Z}}\{| \{x \in V(G) \cup E(G) \ | \ t(x)=i \} |\}.s_u \end{equation} \begin{definition} \label{eq:self-contained} In a self-contained physical computation, all the physical resources required by the computation should be contained within the volume of computation. \end{definition} Therefore, we do not allow a self-contained physical computation to send queries over the internet, or use a power cord, for instance. Using these new, more general concepts, we measure the conceptual jump size in space-time volume rather than time (space-time extent might be a more accurate term). Algorithmic complexity remains the same, as the length of a program readily generalizes to the space-time volume of the program at the input boundary of computation, which would be $V_0(G) \triangleq |I(G) \cap V_M(G)|.v_u$ for \prettyref{def:lattice}. If $y=U(x)$, bitstrings $x$ and $y$ correspond to $I(G)$ and $O(G)$, respectively. A program $\pi$ usually corresponds to a vertex set $V_\pi \subseteq I(G)$, and its size is denoted $V_0(\pi)$. We use bitstrings for data and programs below, but measure their sizes in physical units using this notation.
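The Turing Machine volume accounting above can be made concrete. The following sketch (our own illustration) sums the ID lengths over all time steps, reproducing the $t(t+1)/2$ example:

```python
def derivation_volume(ids, v_u=1):
    """Space-time volume of a Turing Machine derivation, where `ids` is
    the list of Instantaneous Descriptions over time: at each time step
    the machine occupies space equal to its ID length, so the volume is
    the sum of ID lengths, in units of the unit volume v_u."""
    return sum(len(s) for s in ids) * v_u

# the footnote's example: A -> AA -> AAA occupies 1 + 2 + 3 = 6 units
assert derivation_volume(["A", "AA", "AAA"]) == 6

# an ID of length i at time i for t steps gives the t(t+1)/2 of the text
t = 10
assert derivation_volume(["A" * i for i in range(1, t + 1)]) == t * (t + 1) // 2
```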
It is possible to eliminate bit strings altogether using a volume prior; we mix notations only for ease of understanding. Let us generalize logical depth to the logical volume of a bit string $x$: \begin{equation} \label{eq:logicalvol1} L^V_U(x) \triangleq V_U( \argmin_{\pi} \{ V_0(\pi) \ | \ U(\pi) \in x(0+1)^* \} ) \end{equation} Let us also generalize stochastic logical depth to stochastic logical volume: \begin{equation} \label{eq:logicalvol2} L^V_U(\mu) \triangleq V_U(\pi^*) \end{equation} which entails that the Conceptual Jump Volume (CJV) and the logical volume $V_U$ of a stochastic source may be defined analogously to CJS: \begin{equation} \label{eq:cjv} \mathrm{CJV}(\mu) \triangleq L^V_U(\mu). 2^{H_U(\mu)} \leq V_U(\mu) \leq 2.\mathrm{CJV}(\mu) \end{equation} where the left-hand side corresponds to the space-time extent variant of $\mathrm{CJS}$. Likewise, we define logical energy for a bit string, and stochastic logical energy: \begin{align} \label{eq:logicaleng1} L^E_U(x) &\triangleq E_U( \argmin_{\pi} \{ V_0(\pi) \ | \ U(\pi) \in x(0+1)^* \} ) & L^E_U(\mu) &\triangleq E_U(\pi^*) \end{align} This brings us to an energy-based statement of conceptual jump size, which we term conceptual jump energy, or conceptual gap energy: \newcommand{\CJTE}{\mathrm{CJTE}} \begin{lemma} $\mathrm{CJE}(\mu) \triangleq E_U(\pi^*).2^{H_U(\mu)} \leq E_U(\mu) \leq 2.\mathrm{CJE}(\mu)$. \end{lemma} The inequality holds since we can use $E_U(\cdot)$ bounds in universal search instead of time. We now show an interesting relation which holds for self-contained computations. \begin{lemma} If all basic operations and basic communications spend constant energy for a fixed space-time extent (volume), then: \begin{align*} E_U(\pi^*) &= O(V_U(\pi^*)) & E_U(\mu) &= O(L^V_U(\mu)) . \end{align*} \end{lemma} One must spend energy to conserve a memory state, or to perform a basic operation (in a classical computer).
We may assume the constant complexity of primitive operations, which holds in \prettyref{def:lattice}. Let us also assume that the space complexity of a program is proportional to how much mass is required. Then, the energy from the rest mass of an optimal computation may be taken into account, which we call total energy complexity (in metric units): \begin{lemma} \label{lem:toteng} \begin{align*} E_t(\pi^*) &= d_eV_U(\pi^*) + S_U(\pi^*)d_mc^2 \\ E_t(\mu) &= d_eL^V_U(\mu) + S_U(\mu)d_mc^2 = O(L^V_U(\mu) + S_U(\mu)) \end{align*} \end{lemma} where $c$ is the speed of light, the energy density is $d_e = e_u/v_u$, and the mass density is $d_m = m_u/s_u$, with $m_u$ the unit mass, for the graphical model of complexity. \begin{lemma} The conceptual jump total energy (CJTE) of a stochastic source satisfies: \begin{equation} \mathrm{CJTE}(\mu) \triangleq E_t(\pi^*).2^{H_U(\mu)} \leq E_t(\mu) \leq 2.\mathrm{CJTE}(\mu) . \end{equation} \end{lemma} As a straightforward consequence of the above lemmas, we obtain a bound on the energy required for optimal induction that depends linearly on the volume and space, and exponentially on the algorithmic complexity, of a stochastic source. \begin{theorem} $\mathrm{CJTE}(\mu) = \left( d_eL^V_U(\mu) + S_U(\mu)d_mc^2 \right) 2^{H_U(\mu)} \leq E_t(\mu) \leq 2.\mathrm{CJTE}(\mu)$ \end{theorem} \begin{proof} We assume that the energy density is constant; we can use $E_t(\cdot)$ for resource bounds in Levin search. The inequality is obtained by substituting \prettyref{lem:toteng} into the definitional inequality. \end{proof} The last inequality bounds the total energy cost of inferring a source $\mu$ in relation to the space-time extent (volume of computation), the space complexity, and an exponent of the algorithmic complexity of $\mu$. This inspires us to define priors using $\mathrm{CJV}$, $\mathrm{CJE}$, and $\mathrm{CJTE}$, which would extend Levin's ideas about resource-bounded Kolmogorov complexity, such as $K_t$ complexity.
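The total energy complexity of \prettyref{lem:toteng} is straightforward to evaluate once the unit densities are fixed. The sketch below is our own illustration with arbitrary toy densities, not values derived in the paper:

```python
C = 299_792_458.0  # speed of light in m/s

def total_energy(volume, space, d_e, d_m):
    """E_t = d_e * V + S * d_m * c**2: the dynamical energy of the
    computation plus the rest-mass energy of the hardware occupying
    space S, with energy density d_e and mass density d_m."""
    return d_e * volume + space * d_m * C ** 2

def cjte(volume, space, d_e, d_m, h_bits):
    """Conceptual jump total energy: E_t(pi*) * 2**H_U(mu)."""
    return total_energy(volume, space, d_e, d_m) * 2.0 ** h_bits

# illustrative numbers only: 1 m^3.s of volume, 1e-3 m^3 of hardware
e_t = total_energy(1.0, 1e-3, d_e=1e-9, d_m=1e-27)
assert e_t == 1e-9 + 1e-3 * 1e-27 * C ** 2
# each extra bit of source complexity doubles the energy bound
assert cjte(1.0, 1e-3, 1e-9, 1e-27, h_bits=11) == 2 * cjte(1.0, 1e-3, 1e-9, 1e-27, h_bits=10)
```

The exponential factor $2^{H_U(\mu)}$ dominates quickly, which is the point of the theorem: volume and space enter only linearly.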
In the first installment of the ultimate intelligence series, we introduced complexity measures and priors based on energy and action. We now define the one that corresponds to CJE and leave the rest as future work due to lack of space. \begin{definition} The energy-bounded algorithmic entropy of a bit string is defined as: \begin{equation} H_e(x) \triangleq \min\{|\pi| + \log_2 E_U (\pi) \ | \ U(\pi) = x\} \end{equation} \end{definition} \subsection{Physical limits, incremental learning, and digital physics} Landauer's limit is a thermodynamic lower bound of $kT\ln{2}$~J for erasing $1$ bit, where $k$ is the Boltzmann constant and $T$ is the temperature \cite{Landauer:1961}. A quantum computer with average energy $E$ can perform at most $2E/h$ bit-wise operations per second, and thus the physical limit to the energy efficiency of computation is about $3.32 \times 10^{33} $ operations/J \cite{levitin-limit}. Note that the Margolus-Levitin limit may be considered a quantum analogue of our relation between the volume of computation and total energy, which is called the $E.t$ ``action volume'' in their paper, as it depends on the quantum of action $h$, which has $E.t$ units. Bremermann discusses the minimum energy requirements of computation and communication in \cite{Bremermann1982}. Lloyd \cite{Lloyd:Ultimate} assumes that all the mass may be converted to energy and calculates the maximum computation capacity of a 1 kilogram ``black-hole computer'', performing $10^{51}$ operations over $10^{31}$ bits. According to an earlier paper of his, the whole universe may not have performed more than $10^{120}$ operations over $10^{90}$ bits \cite{Lloyd:Universal}. \begin{corollary} $H(\mu) \leq 397.6 $ for any $\mu$ whose logical volume is $1$. \end{corollary} \begin{proof} $V_U(\mu) \leq L^V_U(\mu). 2^{H_U(\mu)+1} \leq 10^{120}$. Assume that $L^V_U(\mu)=1$.
\footnote{Although the assumption that it takes only 1 unit of space-time volume to simulate the minimal program that reproduces the pdf $\mu$ is not realistic, we make it for the sake of simplicity, and because 1 $m^3$ is close to the volume of a personal computer, or a brain; for many pdfs it could be much larger in practice.} Then $\log_2( 2^{H_U(\mu)+1} ) \leq 3.321 \times 120$, and thus $H(\mu)+1 \leq 398.6$. \end{proof} Therefore, if $\mu$ had an algorithmic complexity greater than about $400$ bits, there would be no guarantee that it could have been discovered without any a priori information. However, digital physics theories suggest that physical law could be much simpler than that, as there are very simple universal computers in the literature \cite{Miller:2005}, a survey of which may be found in \cite{neary-smalluniversal}; interestingly, this means that the universe may have had enough time to discover its own basic law. This limit shows the remarkable importance of incremental learning, as both Solomonoff \cite{solomonoff-agi10} and Schmidhuber \cite{oops} have emphasized, which is part of ongoing research. We proposed previously that incremental learning is an AI axiom \cite{ozkural-diverse}. Optimizing the energy efficiency of computation would also be an obviously useful goal for a self-improving AI. Such a measure was first formalized by Solomonoff in \cite{solomonoff-progress}, who imagined optimizing performance in units of bits/sec.J as applied to inductive inference; we agree, and will eventually implement it in our Alpha Phase 2 machine. Alpha Phase 1 has already been partially implemented in our parallel incremental inductive inference system \cite{teramachine-agi11}. \section*{Acknowledgements} Thanks to the anonymous reviewers whose comments substantially improved the presentation. Thanks to Gregory Chaitin and Juergen Schmidhuber for inspiring the mathematical philosophy / digital physics angle in the paper.
I am forever indebted to IDSIA for the high-quality research that revitalized interest in human-level AI research. \bibliographystyle{splncs03}
\section{Ensemble-based memory implementation} Memory experiments were driven at a repetition rate of 66 Hz, each cycle including a stage dedicated to MOT preparation and a period for memory operations (Fig. \ref{setup}c). The MOT preparation started with 11.5 ms of MOT loading, followed by further cooling by optical molasses during 650 $\mu$s while the MOT magnetic field gradient was switched off. The optical depth then decays with a typical time constant of 2 ms, so the memory efficiency stays almost constant over the memory period. Memory sequences were repeated 50 or 100 times during the memory-operation part of the cycle (depending on the storage time), for a total of $150\,000$ acquisitions for each projection. Temporal shaping of the pulses to be stored was obtained by applying an exponentially rising radio-frequency voltage to an acousto-optic modulator, and the photons were finally detected by a single-photon counting avalanche photodiode (SPCM-AQR-14FC). In order to avoid inhomogeneous broadening, three pairs of coils were used to compensate any residual magnetic fields, down to 5 mG. The retrieval efficiency decays with storage time due to the decoherence of the collective atomic spin, with motional dephasing being the principal source of decoherence here. Due to the $3 ^{\circ}$ angle between the signal and the control beams, the expected coherence time is around 7 $\mu$s, which is consistent with the experimental measurement. \section{Assessing the quantum character of the memory} In order to assess the quantum character of the demonstrated memory, the measured fidelities have to be compared with the maximum fidelities achievable with a classical memory protocol, known for instance as the intercept-resend attack in quantum cryptography scenarios. In the case of an $N$-photon state, the maximal classical fidelity is given by $\alpha=(N+1)/(N+2)$, which leads to the well-known 2/3 limit for a single-photon state.
In the case of a coherent beam, as used here, the $N$-photon value $\alpha$ has to be averaged over the Poissonian photon-number distribution, and the achievable fidelity can then be written as: \begin{equation} \sum_{N\geq 1}\frac{(N+1)}{(N+2)}\frac{P(\overline{n},N)}{1-P(\overline{n},0)} \end{equation} where $P(\overline{n},N)=e^{-\overline{n}} \overline{n}^{N}/N! $. The non-unity retrieval efficiency also has to be taken into account. A classical memory in an intercept-resend strategy could indeed simulate non-unity efficiencies to increase the achievable fidelity, by giving an output only when the entering photon number is above a certain threshold and inducing losses otherwise. Explicit expressions for a given mean photon number and efficiency of the process are detailed in Refs. \cite{Specht11,Gundogan12}. The maximal classical fidelities are reported in Figs. \ref{tom}b and \ref{fid}b, where the blue and orange solid lines, which take into account the Poissonian statistics and the non-unity efficiency respectively, are obtained by calculating the best classical fidelity with the measured values of the mean photon number and retrieval efficiency for every data set. \section*{Acknowledgements} \noindent The authors thank A. Nicolas, D. Maxein, E. Giacobino, L. Giner and L. Veissier for their contributions in the early stage of the experiment. The authors also acknowledge interesting discussions within the CAPES-COFECUB project Ph 740-12. This work was supported by the ERA-Net CHIST-ERA (QScale) and by the European Research Council (ERC Starting Grant HybridNet, grant agreement no. 307450 and ERC Starting Grant 3D-QUEST, grant agreement no. 307783, http://www.3dquest.eu.). J.Laurat is a member of the Institut Universitaire de France.\\
\section{Introduction}\label{sec:intro} Since the discovery of a bow shock in the periphery of the bullet cluster 1E 0657-558 \citep{markevitch2002}, it has been well established that shock waves exist in and around galaxy clusters. Using cosmological hydrodynamic simulations for the large-scale structure (LSS) formation of the Universe, the origin and nature of shock waves in the intracluster medium (ICM) as well as in the intergalactic medium (IGM) have been extensively studied \citep{miniati2000, ryu2003, pfrommer2006, kang2007, skillman2008, hoeft2008, vazza2009}. These studies demonstrated that abundant shocks are produced by supersonic flow motions during the process of hierarchical clustering of nonlinear structures, and that they can be classified into two categories according to their locations relative to the host structures. \emph{External shocks} are formed at the outermost surfaces surrounding clusters, filaments, and sheets of galaxies by the accretion of cool ($T \sim 10^3-10^4$ K), tenuous gas in voids onto those nonlinear structures. Since the accretion velocity around clusters can be as high as $v_{\rm acc} \sim$ a few $\times 10^3 \kms$ and the sound speed of the accreting gas is $c_{s}\sim 5-15 \kms$, external shocks are strong, with Mach numbers as large as $M_s \sim 10^3$. On the other hand, \emph{internal shocks} are produced inside the nonlinear structures, where the gas has already been heated to high temperature by previous episodes of shock passage, so their Mach number is typically low, $M_s \la 10$. While most internal shocks have $M_s \la 3$, those with $2 \la M_s \la 4$ play the most important role in dissipating the shock kinetic energy into heat in the ICM \citep[e.g.,][]{ryu2003,kang2007}. Internal shocks can be further classified by their origins into a number of types, which in fact may not be mutually exclusive.
\emph{Turbulent shocks} are induced by turbulent flow motions in the ICM and are expected to have very small Mach numbers, $M_s \la 2$ \citep{pjr15}. Turbulent motions in the ICM can be generated by several different processes: major or minor mergers, AGN jets, galactic winds, galaxy wakes, etc. \citep[e.g.,][]{subramanian2006,ryu2008,ryu2012,vazza2012b,brunetti2014}. Bow shocks found in the outskirts of merging clusters are commonly referred to as \emph{merger shocks} \citep{roettiger1999,markevitch2007,skillman2013}. In most cases merger shocks are weak, with $M_s \la 3$ \citep[e.g.,][]{gabici2003}. \emph{Infall shocks} form by the infall of the warm-hot intergalactic medium (WHIM; $T \sim 10^5-10^7$ K) into the hot ICM along adjacent filaments \citep{hong2014}. They can have relatively high Mach numbers, reaching up to $M_s \sim 10$, and thus the ensuing cosmic-ray (CR) acceleration can be more efficient than in other types of shocks. Including the shock in the bullet cluster, a number of shocks have been found in X-ray observations \citep[e.g.,][]{russel2010, akamatsu2012, ogrean2013a}. {In these observations, shocks are detected as sharp discontinuities in the temperature and surface brightness distributions,} and their physical properties, including the sonic Mach number $M_s$, are estimated from the `deprojected' temperature and density jumps \citep{markevitch2007}. Most of the shocks detected in X-ray observations are weak, with $M_s \sim 1.5-3$. In addition, shocks in the ICM have been detected in radio observations, especially as the so-called radio relics \citep[see, e.g.,][for reviews]{feretti2012,bruggen2012}. Radio relics are usually found in the cluster outskirts, around the virial radius $r_{\rm vir}$.
The radio emission is understood as synchrotron radiation from CR electrons with Lorentz factors of $\gamma_e \sim 10^3-10^5$ that are believed to be accelerated at the shocks associated with the relics \citep[e.g.][]{ensslin1998, bagchi06, vanweeren2010}. High-energy nonthermal particles can be produced via diffusive shock acceleration (DSA) at astrophysical shocks, such as interplanetary and supernova remnant shocks as well as cluster shocks in collisionless tenuous plasma \citep{bell1978, blandford1978, drury1983}. Moreover, it has been shown that turbulent flow motions in the ICM can produce magnetic fields of up to $\sim \microGauss$ level \citep[e.g.,][]{ryu2008}. {Since the radio-emitting electrons with Lorentz factors of $\gamma_e \sim 10^3-10^5$} would neither advect nor diffuse in the ICM more than $\sim 100$ kpc away from the shock surface before they lose their energy to radiative cooling via synchrotron emission and inverse Compton (IC) scattering \citep[e.g.,][]{kang2011}, it is commonly thought that their acceleration sites are likely to be close to where the synchrotron emission is seen. So the physical properties of shocks in radio relics are inferred from observed quantities such as the injection spectral index at the shock edge, $\alpha_{\rm inj} = ( M_s^2+3)/[2(M_s^2-1)]$, and the spatial profile of the surface brightness \citep{drury1983, ensslin1998,kang2012}. Note that the spectral index for the synchrotron spectrum {\it integrated} over the downstream region behind a steady planar shock is expected to be $\alpha \approx \alpha_{\rm inj}+0.5$ \citep[e.g.][]{kang15b,kangryu2015}. In most cases, the Mach number of shocks in radio relics was found to be in the range $M_s \sim 1.5-4.5$ \citep[e.g.,][]{clarke2006, bonafede2009, vanweeren2010, vanweeren2012,stroe14b}. The pre-acceleration of thermal electrons to suprathermal energies and their subsequent injection into the first-order Fermi process at shocks has been one of the outstanding problems in DSA theory.
It is thought that particle injection and DSA might be very inefficient at weak shocks ($M_s\la 3$) because of the small density compression across them \citep[e.g.,][]{maldru01}. In particular, in the so-called ``thermal leakage'' injection model, proton injection is expected to be strongly suppressed at weak shocks; the relative difference between the postshock proton thermal speed and the flow speed is greater, so it is less likely for postshock protons to recross the shock front at weaker shocks \citep[e.g.,][]{kang2002}. Although postshock electrons move faster than protons, they are tied to magnetic field fluctuations more tightly because of their smaller rigidities (i.e., $p_{\rm th,e}=(m_e/m_p)^{1/2} p_{\rm th,p}$). So it was speculated that the DSA of electrons at weak shocks would not be efficient either, for instance, not efficient enough to explain the observed radio flux of spectacular giant radio relics such as the Sausage relic. A pre-existing population of electrons with Lorentz factors of $\gamma_e \sim 10-100$ was suggested as a possible solution to the low electron injection problem at weak cluster shocks \citep{kang2012,pinzke2013}. Moreover, \citet{kang2014} suggested that a $\kappa$-like distribution of suprathermal electrons may exist in high-beta ($\beta=P_g/P_B \sim 100$) ICM plasmas, just as in the solar wind, and facilitate electron injection at weak shocks. Recently, using Particle-in-Cell (PIC) simulations of weak shocks in high-beta plasmas, \citet{guo14} and \citet{park15} have shown that some of the incoming electrons gain energy via shock drift acceleration (SDA) and are reflected specularly toward the upstream region. Those reflected particles can be scattered back to the shock surface by plasma waves excited in the upstream region, and then undergo multiple cycles of SDA, resulting in a power-law-type suprathermal population up to $\gamma_e \sim 100$.
These studies suggest the possibility that ``self pre-acceleration'' of thermal electrons to suprathermal energies via kinetic plasma processes at the shock itself might provide enough seed electrons to explain the observed flux level of bright radio relics. The morphology of radio relics is, in some cases, observed to be elongated and arc-like with a sharp edge on one side, and some radio relics are found as pairs on opposite sides of clusters. So they are often interpreted as products of binary mergers \citep[e.g.,][]{ensslin1998, roettiger1999, vanweeren2010, gasperin14}. Several numerical studies have suggested that merger shocks with a sufficient amount of shock kinetic energy flux could produce radio relics \citep{nuza2012, vazza2012a, skillman2013}. In these studies, synthetic radio maps were constructed by identifying shocks in simulated clusters and modeling the CR electron injection/acceleration and the magnetic field strength. However, there remain a few issues to be resolved before the simple picture of merger shocks being the origin of radio relics is accepted. {The first issue concerns the frequency of observed radio relics. Although structure formation simulations have demonstrated that shocks are produced frequently during mergers and should last for the cluster dynamical time of $t_{\rm dyn} \sim 1$~Gyr, only about $10\%$ of X-ray luminous clusters host some radio relics, putative merger shocks, and the fraction of merging clusters with giant radio relics is much lower \citep{feretti2012}. Recently, it has been suggested that ICM shocks may light up as radio relics only when they encounter fossil relativistic electrons that are left over from either a previous episode of shock/turbulence acceleration or a radio jet from an AGN \citep{shimwell15,kangryu2015}. In such a scenario, only a small fraction of ICM shocks become radio-emitting structures, for a fraction of the dynamical time ($\la 0.1 t_{\rm dyn}\sim 100$~Myr).
So the rareness of radio relics could be explained. Of course, this model can be justified only if the injection and acceleration of electrons at weak shocks in the ICM is very inefficient, so that the re-acceleration of pre-existing CR electrons is required for the birth of radio relics.} {The second issue involves the discrepancies in the shock properties inferred from radio and X-ray observations of a few radio relics. Among several dozen observed radio relics, only a fraction of the shocks associated with them have also been detected in X-ray observations \citep[][and references therein]{nuza2012, bonafede2012}. In the case of the Toothbrush relic in 1RXS J0603.3, for example, the radio index was measured to be $\alpha_{\rm inj}\approx 0.6-0.7$, indicating a `radio Mach number', $M_{\rm radio}\approx 3.3-4.6$ \citep{vanweeren2012}. But the temperature and density discontinuities in X-ray observations suggest an `X-ray Mach number', $M_{\rm X} \la 2$, and the position of the shock identified in X-ray observations is shifted from that in radio observations by $\ga 200$ kpc \citep{akamatsu2013,ogrean2013b}. In addition, \citet{trasatti15} suggested that for the radio relic in A2256, if the observed index $\alpha_{63}^{1360}\approx 0.85$ measured between 63 and 1360~MHz is interpreted as the injection index, the radio Mach number can be estimated to be $M_{\rm radio}\approx 2.6$, while the temperature jump measured in X-ray observations implies the X-ray Mach number, $M_{\rm X}\sim 1.7$.} {In the case of the Sausage relic in CIZA J2242.8, \citet{vanweeren2010} used the observed radio spectral index near the edge (shock surface), $\alpha_{\rm inj}\approx 0.6$, to obtain a radio-inferred shock Mach number, $M_{\rm radio}\approx 4.6$. But the X-ray-inferred Mach number reported later from Suzaku and Chandra observations turned out to be lower, $M_{\rm X-ray}\approx 2.54-3.15$ \citep{akamatsu2013, ogrean14}.
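These radio-inferred Mach numbers follow from inverting the injection-index relation $\alpha_{\rm inj}(M_s)$ quoted earlier, while the X-ray estimates rely on the standard Rankine-Hugoniot temperature jump for a $\gamma=5/3$ gas. Both inversions can be checked in a few lines of Python (our own illustration, not the analysis pipeline of any of the cited papers):

```python
import math

def mach_from_alpha(a):
    """Radio Mach number from the injection spectral index, inverting
    alpha_inj = (M**2 + 3) / (2 (M**2 - 1)); requires a > 1/2."""
    return math.sqrt((2 * a + 3) / (2 * a - 1))

def temp_jump(m):
    """Rankine-Hugoniot temperature jump T2/T1 for gamma = 5/3."""
    return (5 * m ** 2 - 1) * (m ** 2 + 3) / (16 * m ** 2)

# Toothbrush relic: alpha_inj ~ 0.6-0.7 -> M_radio ~ 3.3-4.6
assert abs(mach_from_alpha(0.7) - 3.3) < 0.05
assert abs(mach_from_alpha(0.6) - 4.6) < 0.05
# A2256: alpha ~ 0.85 -> M_radio ~ 2.6
assert abs(mach_from_alpha(0.85) - 2.6) < 0.05
# Sausage relic, revised index: alpha ~ 0.77 -> M_radio ~ 2.9
assert abs(mach_from_alpha(0.77) - 2.9) < 0.05
# an X-ray Mach number M_X ~ 1.7 corresponds to a modest T jump of ~1.7
assert abs(temp_jump(1.7) - 1.71) < 0.01
```

Since $\alpha_{\rm inj}$ flattens rapidly toward $0.5$ at high $M_s$, small uncertainties in the measured index translate into large uncertainties in the radio Mach number, which is part of why radio- and X-ray-inferred values can disagree.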
Recently, however, \citet{stroe14a} estimated a steeper value, $\alpha_{\rm inj}\approx 0.77$, by performing a spatially-resolved spectral fitting, implying a weaker shock with $M_{\rm radio}\approx 2.9$, in good agreement with X-ray observations.} In \citet[][hereafter, Paper I]{hong2014}, we studied the properties of shock waves in the outskirts of simulated galaxy clusters using sets of LSS formation simulations. {We found that, in addition to merger shocks, infall shocks are produced in ICMs by the continuous infall of density clumps (i.e., minor mergers) along filaments of galaxies into hot ICMs. Unlike weak bow shocks ($M_s \la 2$) driven by major mergers, infall shocks do not show pairing structures and have higher Mach numbers ($M_s \sim 3 - 10$). In a few cases (e.g., Coma 1253+275 and NGC 1265), observed radio relics are thought to be associated with infall shocks \citep{brown2011, pfrommer2011, ogrean2013a}.} {In this paper, we consider a scenario in which suprathermal electrons are injected directly at the shocks, without the help of fossil CR electrons, and accelerated to radio-emitting energies. Then the frequency of radio relics depends on the shock statistics, while the radio luminosity of radio relics is determined by the shock kinetic energy flux and the assumed DSA efficiency model \citep[e.g.][]{kang2013}. In this scenario the rareness of radio relics, compared to the frequency of ICM shocks, can be controlled by adjusting the models for the DSA acceleration efficiency and the magnetic field amplification. For instance, \citet{vazza2012a} showed, using structure formation simulations and generating mock radio maps of simulated cluster samples, that radio emission tends to increase toward the cluster periphery and to peak around $0.2-0.5r_{\rm vir}$, mainly because the kinetic energy dissipated at shocks peaks around $0.2 r_{\rm vir}$ and the Mach number of shocks tends to increase toward the cluster outskirts.
Such findings can explain why radio relics are rarely found in the cluster central regions.} {We study the properties of radio and X-ray shocks in the simulated cluster sample of Paper I, based on a DSA model in which suprathermal electrons are injected and accelerated preferentially at shocks with higher Mach numbers.} We first calculate the synchrotron emission at shocks by modeling the primary CR electron population based on recent DSA simulations \citep{kang2013} and the magnetic field distribution based on a turbulent dynamo model \citep{ryu2008}. We produce mock radio and X-ray maps by projecting the synchrotron and bremsstrahlung emission, respectively. We then identify ``synthetic radio relics'' in the radio map and extract their properties, such as the radio and X-ray shock Mach numbers, $M_{\rm radio}$ and $M_{\rm X}$, and their locations in the map. {We examine the radio and X-ray properties of relic shocks, and attempt to understand the discrepancies between the shock properties derived from radio and from X-ray observations of some radio relics.} In \sref{numerics}, we present numerical details such as the calculation of the synchrotron emission at shocks, the construction of mock radio and X-ray maps, and the extraction of radio and X-ray shock Mach numbers. In \sref{projection}, we discuss how the two-dimensional (2D) projection of three-dimensional (3D) shock distributions affects the radio and X-ray observations of ICM shocks. In \sref{stats}, we describe the properties of shocks in 2D projection as well as the properties of synthetic radio relics. A summary follows in \sref{summary}. \section{Numerical Details}\label{sec:numerics} \subsection{Clusters and Shocks}\label{sec:numerics:cluster} The construction of the galaxy cluster sample and the identification of shocks are described in detail in Paper I, so here they are summarized only briefly.
The LSS formation simulations adopted a standard $\Lambda$CDM model with the following parameters: baryon density fraction $\Omega_\mathrm{BM} = 0.044$, dark matter density fraction $\Omega_\mathrm{DM} = 0.236$, cosmological constant fraction $\Omega_\Lambda = 0.72$, Hubble parameter $h \equiv H_0 / (100 \mathrm{km~s^{-1}Mpc^{-1}}) = 0.7$, rms density fluctuation $\sigma_8 = 0.82$, and primordial spectral index $n = 0.96$. {An updated version of the particle-mesh/Eulerian hydrodynamic code described in \cite{ryu1993} was used for the simulations.} Three sets of simulations were performed: (1) 16 different realizations of LSS formation in a comoving cubic box of $L=100 \Mpch$ with $1024^3$ uniform grid zones, (2) 16 different realizations in a comoving cubic box of $L=200 \Mpch$ with $1024^3$ zones, and (3) one realization in a comoving cubic box of $L=100 \Mpch$ with $2048^3$ zones. {Sets (1) and (2) were performed in an adiabatic (non-radiative) way, while set (3) includes mild feedback from star formation and cooling/heating processes. In Paper I, we found that, in all three types of simulations, the statistics of the physical properties of clusters and shocks are similar.} In the simulation data, clusters are identified as regions around local peaks in the spatial distribution of X-ray emissivity. Around each peak, the X-ray emission-weighted mean temperature, $T_{\rm X,cl}$, is obtained for the spherical volume within $r\le \rtwo$. Here, $\rtwo\approx 1.3 r_{\rm vir}$ is the radius within which the gas overdensity is 200 times the mean gas density of the universe. Those with $k_B T_{\rm X,cl} \geq 2$ keV for the $100 \Mpch$ box simulations and $k_B T_{\rm X,cl} \geq 4$ keV for the $200 \Mpch$ box simulations are selected as synthetic clusters, resulting in a total of 228 clusters from the three sets of simulations.
Here, $k_B$ is the Boltzmann constant. Shocks (actually, shock zones) are identified by a set of criteria given in Paper I, and their sonic Mach numbers, $M_s$, are estimated from the temperature jump condition, ${T_2}/{T_1} = {(5M_s^2 -1)(M_s^2 + 3)}/({16 M_s^2})$, where $T$ is the gas temperature. Hereafter, the subscripts ``1'' and ``2'' indicate the preshock and postshock quantities, respectively. Only shocks with $M_s \ge 1.5$ are considered. Note that a shock surface normally consists of many of these shock zones. The shock speed and the shock kinetic energy flux at a shock zone are then calculated as $v_1 = M_s (\gamma P_\mathrm{th, 1}/\rho_1)^{1/2}$ and $f_\mathrm{kin} = (1/2) \rho_1 v_1^3$, respectively, where $P_\mathrm{th,1}$ is the preshock thermal pressure and $\gamma = 5/3$ is the gas adiabatic index. The energy flux of CR protons can be estimated as \begin{equation} f_{{\rm CR},p} = \eta(M_s) \cdot f_\mathrm{kin}= \eta(M_s) \cdot (1/2) \rho_1 v_1^3, \label{fcrp} \end{equation} where $\eta(M_s)$ is the CR acceleration efficiency of the DSA process. For the efficiency, the values presented in \citet{kang2013} (also shown in Figure 2 of Paper I) are adopted. In Paper I, we found that the morphology of shock surfaces is quite complex due to the dynamic history of clusters, and in general a connected shock surface can consist of a number of grid zones of different types of shocks, including merger shocks (see Figure 7 in Paper I). \subsection{Modeling of Magnetic Field Strength}\label{sec:numerics:magnetic} {To calculate the radio synchrotron emission at cluster shocks, we need to model the strength of magnetic fields as well as the energy spectrum of CR electrons. The ICM is observed to be permeated with magnetic fields of a few $\mu$G in the cluster core region and a few $\times\ 0.1\ \mu$G in the cluster periphery \citep[e.g.][]{bonafede11,feretti2012}.
Suggested ideas for the generation and amplification of magnetic fields in the ICM include processes during primordial phase transitions, the Biermann battery mechanism, plasma processes at collisionless shocks, different types of turbulence dynamo, and the ejection of galactic winds and AGN jets \citep[e.g.][]{dolag08,ryu2008,ryu2012,widro12}.} {Here we adopt the turbulent dynamo model of \citet{ryu2008}, which assumes that turbulent flow motions are induced via the stretching and compression of the vorticity generated behind curved surfaces of shocks in clusters, and that the magnetic fields are amplified by the turbulent flows. Then the strength of the resulting magnetic fields can be modeled in terms of the number of local eddy turn-overs by the following fitting formula: \begin{equation} {B^2 \over 8 \pi \epsilon_{\rm turb}} \equiv \phi(t/t_{\rm eddy}) \approx \left\{ \begin{array}{ll} 0.04 \cdot \exp[(t/t_{\rm eddy}-4)/0.36] & \textrm{ for } t/t_{\rm eddy} < 4 \\ (0.36/41) \cdot (t/t_{\rm eddy}-4) + 0.04 & \textrm{ for } t/t_{\rm eddy} > 4 \end{array} \right. , \label{Bdynamo} \end{equation} where $\epsilon_{\rm turb}$ is the turbulent energy density and $t_{\rm eddy}\equiv 1/|\vec{\nabla} \times \vec{v} | $ is the reciprocal of the vorticity calculated from the local flow speed. The fitting function $\phi$ represents the fraction of the turbulent energy transferred to the magnetic energy via the turbulent dynamo, and it is derived from a magneto-hydrodynamic simulation \citep{ryu2008}. This model predicts that the magnetic field strength reaches a few $\microGauss$ in the cluster center and decreases to $\sim 0.1\ \mu$G toward the cluster outskirts.
This is in good agreement with the observed magnetic field strengths in actual clusters \citep[e.g.,][] {carilli2002, govoni2004,bonafede11}.} \subsection{Modeling of CR Electron Spectrum}\label{sec:numerics:spectrum} For the energy spectrum of CR electrons, we first assume that the CR acceleration at shocks is described by the test-particle model \citep{drury1983}, since most of the shocks found in clusters are weak (see Introduction). Then, the momentum distribution of CR protons at the shock position is described by the power-law form for $p\ge p_{\rm min}$, \begin{equation} f_p(\pnorm) = f_{p0} \times \pnorm^{-q}~, \end{equation} where $\pnorm \equiv p/(\mproton c)$ is the dimensionless proton momentum, $\mproton$ is the proton mass, and $q = 3\sigma / (\sigma-1)$ is the spectral index. The shock compression ratio is determined by the sonic Mach number as $\sigma = {(\gamma +1 )M_s^2}/{[(\gamma -1)M_s^2 + 2]}$. The normalization factor $f_{p0}$ at each shock zone is determined by setting the CR energy flux through the shock zone as \begin{equation} f_{{\rm CR},p} = v_2 \cdot \mproton c^2 \int_{\pnorm_\mathrm{inj}} ^{\infty} \left(\sqrt{\pnorm^2+1}-1\right) f_p(\pnorm) d\pnorm^3, \label{fcrp2} \end{equation} where $v_2 = {v_1}/{\sigma}$ is the postshock flow speed and $f_{{\rm CR},p}$ is given in Equation (\ref{fcrp}). We do {\it not} consider the re-acceleration of pre-existing CRs in this work. Here, $p_{\rm min}=p_{\rm inj}$ is the injection momentum above which particles can participate in the DSA process. According to recent PIC and hybrid simulations, both protons and electrons can initially gain energy via SDA while confined between the shock front and the preshock region by scatterings due to self-generated upstream waves \citep[][see Introduction]{guo14,park15,caprioli15}.
Particles should have a rigidity ($R = pc/e$) large enough to cross the shock front, i.e., $p_{\rm inj}\sim 3 p_{\rm th,p} \sim 130 p_{\rm th,e}$, in order to take part in the full first-order Fermi process. Here, $p_{\rm th,p}=\sqrt{2m_p k_B T_2}$ is {the most probable momentum of thermal protons} in the postshock gas with temperature $T_2$, while {the most probable momentum of thermal electrons} is $p_{\rm th,e}=(m_e/m_p)^{1/2} p_{\rm th,p}$. In fact, $p_{\rm inj}$ is expected to depend on the shock Mach number, obliquity angle, and shock speed. Here, for the sake of simplicity, we set $\pnorm_\mathrm{inj} = 0.01$ in all shock zones, regardless of the shock speed, the Mach number, and the ICM temperature. Since the DSA process operates on CR protons and electrons of the same rigidity in the same manner, the momentum distribution of primary CR electrons at shocks should follow that of CR protons, except for the less efficient injection and the radiative cooling \citep[e.g.,][]{kang2011, park15}. The injection rate of CR electrons is expected to be much lower than that of CR protons, because postshock thermal electrons need to be pre-accelerated from the thermal momentum to $p_{\rm inj}$ in order to be injected into the DSA process. Since the electron pre-acceleration is not yet fully constrained by plasma physics (despite recent PIC simulations, see Introduction), it is often parameterized by the CR electron-to-proton ratio, $K_{e/p}$. Different types of observations have indicated a wide range of $K_{e/p}\sim 10^{-4} - 10^{-2}$ \citep[e.g.,][]{schlickeiser2002, morlino2009}. So we adopt the CR electron momentum distribution $f_e(\pnorm) = K_{e/p}\cdot f_p(\pnorm)$ for $p\ge p_{\rm min}$ with $K_{e/p} \sim 0.01$.
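To make the normalization step concrete, the following is a minimal numerical sketch of the test-particle slope and of solving Equation (\ref{fcrp2}) for $f_{p0}$. The function names and the truncation momentum are ours and purely illustrative; the \citet{kang2013} efficiency table entering Equation (\ref{fcrp}) is not reproduced here.

```python
import numpy as np

GAMMA = 5.0 / 3.0          # gas adiabatic index
MP = 1.6726e-24            # proton mass [g]
C = 2.9979e10              # speed of light [cm/s]

def dsa_slope(ms):
    """Test-particle DSA: compression ratio sigma = (g+1)M^2/((g-1)M^2+2)
    and momentum slope q = 3*sigma/(sigma - 1)."""
    sigma = (GAMMA + 1.0) * ms**2 / ((GAMMA - 1.0) * ms**2 + 2.0)
    return sigma, 3.0 * sigma / (sigma - 1.0)

def proton_norm(f_crp, v2, q, p_inj=0.01, p_max=1e7, n=200_000):
    """Solve Eq. (fcrp2) for f_p0:
    f_CR,p = v2 * m_p c^2 * Int_{p_inj} (sqrt(p^2+1) - 1) f_p0 p^-q 4 pi p^2 dp,
    with p in units of m_p*c. The integral converges for q > 4; p_max
    truncates the (negligible) high-momentum tail."""
    p = np.logspace(np.log10(p_inj), np.log10(p_max), n)
    integrand = (np.sqrt(p**2 + 1.0) - 1.0) * p**(-q) * 4.0 * np.pi * p**2
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))
    return f_crp / (v2 * MP * C**2 * integral)
```

For an $M_s = 3$ shock, `dsa_slope` returns $\sigma = 3$ and $q = 4.5$; the primary electron spectrum then follows as $f_e(\pnorm) = K_{e/p} f_{p0}\, \pnorm^{-q}$.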
{Adopting a higher value of $K_{e/p}$, as suggested by some recent PIC simulations \citep{guo14, caprioli15}, would increase the amplitude of $f_e(\pnorm)$, and therefore the synchrotron radiation flux, uniformly for all shocks.} While gaining energy via the DSA process at shocks, CR electrons lose energy via synchrotron emission and IC scattering. As a result, an equilibrium momentum exists at which the momentum gain is balanced by the radiative losses \citep{kang2011}: \begin{equation} \pnorm_\mathrm{eq} = {{m_e^2 c v_1}\over {m_p \sqrt{4 e^3 q/27}}} \left[{B_1\over{B_\mathrm{eff,1}^2+B_\mathrm{eff,2}^2}}\right]^{1/2}, \end{equation} where all the quantities except $\pnorm_\mathrm{eq}$ are expressed in cgs units and the electron equilibrium momentum is normalized as $\pnorm_\mathrm{eq} =p_\mathrm{eq}/m_pc$. Here, $B_\mathrm{eff} \equiv (B^2 + B_\mathrm{CBR}^2)^{1/2}$ accounts for the loss due to IC scattering off cosmic background photons ($B_\mathrm{CBR} = 3.24 (1+z)^2 \microGauss$) as well as the synchrotron loss. Then, the CR electron spectrum {\it at the shock location} has the form $f_e(\pnorm) = K_{e/p}\cdot f_p(\pnorm)\cdot \exp \left[ -(\pnorm/\pnorm_\mathrm{eq})^2 \right]$. As the primary CR electrons are advected downstream in the postshock region, they continue to lose energy, with the cooling time $t_\mathrm{rad} = (0.54 \Gyr) (B_\mathrm{eff,2}/5\microGauss)^{-2} \pnorm^{-1} $. So the cutoff momentum due to radiative losses decreases with the postshock distance ($d$) from the shock front as $\pnorm_\mathrm{cut}(d) \propto \pnorm_\mathrm{eq}/d$.
As a result, the downstream, volume-integrated electron spectrum steepens by one power of $p$, $f_e(\pnorm) \propto \pnorm^{-(q+1)}$ for $p\ge \pnorm_\mathrm{br}$, where the `break' momentum, \begin{equation} \pnorm_\mathrm{br} \approx 0.54 \left( {t_\mathrm{adv} \over 1\Gyr} \right)^{-1} \left( {B_\mathrm{eff,2} \over 5\microGauss} \right)^{-2}, \end{equation} is the momentum above which the postshock CR electrons have cooled. In our simulations, a shock is defined as a discontinuity within a grid zone of thickness $\Delta l = 48.8 - 195.3 \kpch$. The advection time for CR electrons to cross this thickness is $t_\mathrm{adv} = \Delta l / v_2 \sim 0.1-0.4~\Gyr$ for $v_2\sim 500 \kms$, which is of the order of the dynamical time of typical clusters or the merger time. For the CR electrons emitting at GHz frequencies in $\mu$G-level magnetic fields ($\pnorm \sim 7$), the cooling time is shorter than the advection time ($t_\mathrm{rad}(\pnorm) < t_\mathrm{adv}$), so they have cooled down before exiting the shock zone. Hence, we assume that the {\it volume-averaged} spectrum of CR electrons within each shock zone has the following form: \begin{equation} f_{e}(\pnorm) = \left\{ \begin{array}{ll} K_{e/p}\cdot f_{p0} \pnorm^{-q} & \textrm{ for } \pnorm < \pnorm_\mathrm{br} \\ K_{e/p} \cdot f_{p0} \pnorm_\mathrm{br} \pnorm^{-(q+1)} \cdot \exp \left[ -(\pnorm/\pnorm_\mathrm{eq})^2 \right] & \textrm{ for } \pnorm > \pnorm_\mathrm{br} \\ \end{array} \right. \,. \label{fep} \end{equation} {Note that we have made several assumptions and simplifications, such as the DSA efficiency in Equation (\ref{fcrp}), the magnetic field strength, $B$, in Equation (\ref{Bdynamo}), the electron-to-proton ratio, $K_{e/p}$, and the volume-averaged electron spectrum in Equation (\ref{fep}). Modifications to the models for these would lead to a rescaling of the synchrotron radiation flux.
So these may quantitatively affect some of the shock properties in the flux-limited, mock sample of radio relics, because a radio relic will consist of multiple shocks in the projected maps (see Section 2.4). But their impacts on the estimates of derived radio Mach numbers would be marginal, so the main results of this work should remain mostly unaffected (see Section 2.5).} \subsection{Mock Radio/X-ray Maps} We calculate the radio synchrotron emission at shock zones, using the magnetic field strength given in Equation (\ref{Bdynamo}) and the CR electron spectrum given in Equation (\ref{fep}). For a single electron with $\pnorm$, the synchrotron power at frequency $\nu$ is given by $P_{\nu,e} (\pnorm,\theta) = {(\sqrt{3}e^3 B \sin \theta)}/{(m_\mathrm{e} c^2)} \times F\left( {\nu}/{\nu_\mathrm{c}} \right)$, where $\theta$ is the angle between the electron momentum and magnetic field directions, $\nu_\mathrm{c} \equiv 3[ (\pnorm (m_p/m_e))^2 + 1] (eB \sin \theta) / (4\pi m_e c)$ is the characteristic frequency, and $F(x) \equiv x \int_x ^\infty d\xi K_{5/3}(\xi)$ ($K_{5/3}(x)$ is the modified Bessel function). Then, the synchrotron emissivity (in cgs units of erg cm$^{-3}$ s$^{-1}$ Hz$^{-1}$ str$^{-1}$) in a shock zone can be estimated as the sum of $P_{\nu, e}$ over all CR electrons of the momentum spectrum $f_e(\pnorm)$ in the zone, assuming a random angular distribution (i.e., $\langle \sin \theta \rangle = \pi / 4$), \begin{equation}\label{eq:synchrotron_emissivity} j_\nu = \frac{1}{4\pi} \int_{\pnorm_\mathrm{min}}^\infty \frac{\sqrt{3}\pi e^3 B_2}{4 m_e c^2} F\left( \frac{\nu}{\nu_\mathrm{c}(\pnorm)} \right) f_e(\pnorm) d^3 \pnorm \, . \end{equation} Note that $j_{\nu}$ is in fact the volume-averaged emissivity, because $f_e(\pnorm)$ in Equation (\ref{fep}) is averaged over the volume of the grid zone, accounting for the radiative cooling in the postshock region.
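As a concrete illustration, the volume-averaged electron spectrum of Equation (\ref{fep}) and the break momentum above can be sketched as follows. The function names and the sample parameters are ours, chosen only for illustration.

```python
import numpy as np

def break_momentum(t_adv_gyr, b_eff2_microgauss):
    """Break momentum (in units of m_p*c) from the advection time [Gyr]
    and the postshock effective field [microgauss], per the text:
    p_br ~ 0.54 (t_adv/1 Gyr)^-1 (B_eff,2 / 5 muG)^-2."""
    return 0.54 * t_adv_gyr**(-1.0) * (b_eff2_microgauss / 5.0)**(-2.0)

def electron_spectrum(p, f_p0, q, p_br, p_eq, K_ep=0.01):
    """Volume-averaged CR electron spectrum within a shock zone (Eq. fep):
    K_ep*f_p0*p^-q below the break; steepened by one power of p above it,
    with an exponential cutoff at the equilibrium momentum p_eq."""
    p = np.asarray(p, dtype=float)
    low = K_ep * f_p0 * p**(-q)
    high = K_ep * f_p0 * p_br * p**(-(q + 1.0)) * np.exp(-(p / p_eq)**2)
    return np.where(p < p_br, low, high)
```

Note that the two branches join continuously at $\pnorm_\mathrm{br}$ as long as $\pnorm_\mathrm{br} \ll \pnorm_\mathrm{eq}$, since the extra factor $\pnorm_\mathrm{br}\,\pnorm^{-1}$ equals unity there.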
The calculation of the bolometric X-ray emissivity due to thermal brem\-sstrahlung in the ICM is straightforward and can be done with \begin{equation} j_{\rm X} \approx (1.2\times 10^{-28}{\rm erg~cm^{-3}~s^{-1}~str^{-1}}) ~T^{1/2} ~ \left({\rho\over m_p}\right)^2, \end{equation} where a He abundance of 7.9 \% by number (i.e., mass fractions of $X=0.76$ and $Y=0.24$) is assumed and the Gaunt factor $g_{\rm ff} \approx 1.2$ for $T>10^7$K is used. Note that $j_X$ is calculated at all grid zones, while $j_{\nu}$ is calculated only at shock zones. Mock radio and X-ray maps of each synthetic cluster are constructed by projecting the synchrotron emissivity, $j_\nu$, and the bolometric X-ray emissivity, $j_{\rm X}$, along a depth of $D=4 \rtwo$. The choice of the depth should not affect our results, since both the synchrotron and X-ray emissions beyond $\rtwo$ from the cluster center are in general negligible. All lines of sight (LoSs) are assumed to be parallel to each other, since the angular sizes of observed clusters and radio relics are usually much smaller than one radian. Assuming that the system is optically thin, the synchrotron/X-ray intensities, or surface brightnesses, are calculated by integrating the synchrotron/X-ray emissivities along each LoS, i.e., $I_\nu = \int_\mathrm{LoS}~j_\nu~dl$ and $I_{\rm X} = \int_\mathrm{LoS}~j_{\rm X}~dl$. Since the spatial distribution of shock zones in clusters is quite complicated, it can be translated into radio structures of different morphologies, depending on the projection direction \citep[e.g.,][]{vazza2012a, skillman2013}. So we choose 24 equally spaced viewing angles for projection, resulting in 24 realizations of radio/X-ray maps for each of the 228 synthetic clusters. As a result, a total of 5472 radio/X-ray maps are generated and used for the statistics below. Radio telescopes have finite resolutions, so in practice they measure the surface brightness convolved with telescope beams.
If a beam has an effective angular area $\theta^2$, then the measured synchrotron flux within the beam is approximately $S_{\nu_{obs}} \approx I_{\nu} \theta^2 (1+z)^{-3}$, where $\nu_\mathrm{obs} = \nu/(1+z)$ is the redshifted observation frequency. In this study, we adopt $z=0$ and a beam of $\theta^2 = 15\arcsec \times 15\arcsec$ for synthetic radio observation. The beam size is chosen to be comparable to those of future radio surveys, such as those of the Square Kilometre Array (SKA). Hereafter, we set $\nu_\mathrm{obs}=1.4$ GHz as the representative radio frequency, and present $j_{1.4}$ and $S_{1.4}$ at this frequency as radio quantities for synthetic observation. For the calculation of the radio spectral index, the two frequencies of 142 MHz and 1.4 GHz are used (see the next subsection). \subsection{``Derived'' Mach Numbers} With shocks of different Mach numbers and kinetic energy fluxes existing in clusters, as mentioned in the Introduction, the radio/X-ray structures projected onto the sky may consist of shock surfaces of different characteristics. So it may not be straightforward to define the properties of observed structures such as radio relics. In our 3D simulation data cube, each `shock zone' is specified by the sonic Mach number, CR spectrum, and radio emissivity, while each zone is also assigned the gas density, velocity, temperature, magnetic field strength, and X-ray emissivity, as described in the previous subsections. In observations (real or synthetic), on the other hand, the physical properties of shocks must be extracted from 2D projections of radio/X-ray emissivities. Here, we describe the quantification of the sonic Mach number $M_s$ of shocks associated with 2D projected radio/X-ray structures. We specify the \emph{derived} Mach numbers in two different ways.
First, the {\it weighted} Mach numbers, $M_{1.4}^\mathrm{w}$ and $M_{X}^\mathrm{w}$, are defined as the averages along the LoS, weighted by the synchrotron and X-ray emissivities, i.e., $M_{1.4}^\mathrm{w} \equiv {\int_\mathrm{LoS} j_{1.4} M_s dl}/{\int_\mathrm{LoS} j_{1.4} dl}$ and $M_{X}^\mathrm{w} \equiv {\int_\mathrm{LoS} j_{X} M_s dl}/{\int_\mathrm{LoS} j_{X} dl}$, respectively. We consider these weighted Mach numbers to represent the true properties of shocks associated with 2D projected structures. Second, the {\it observed} Mach numbers, $M_{1.4}^\mathrm{obs}$ and $M_{X}^\mathrm{obs}$, on the other hand, are designed to mimic the Mach numbers inferred from radio/X-ray observations. In a 2D projected map, a `radio pixel' is a pixel that has at least one shock zone along its LoS and so has $S_{\nu} > 0$. For each radio pixel, $\Msynctwo$ is calculated from the integrated spectral index of the synchrotron flux, $\alpha \equiv -{d \ln S_\nu}/{d \ln \nu} = (\Msynctwo ^2 + 1)/ ( \Msynctwo ^2 -1)$, obtained between $\nu_1=142$ MHz and $\nu_2=1.4$ GHz, following the way radio observers usually interpret their observed radio spectra (see Table 1). Note that the value of $\Msynctwo$ is not sensitive to the choice of the low frequency with our model of the CR electron spectrum. However, this practice is justified only when the break momentum of the volume-integrated electron spectrum is $\pnorm_\mathrm{br}\ll 1-10$, that is, when the break frequency in the integrated radio spectrum is $\nu_{\rm br}<100$~MHz. For the calculation of $\Mxtwo$, the X-ray emission-weighted temperature is calculated along each LoS as $T_X \equiv {\int_\mathrm{LoS} j_{\rm X} T dl}/{\int_\mathrm{LoS} j_{\rm X} dl}$ in the mock maps of clusters. Pixels are tagged as `X-ray shocks' if $| \Delta \log T_X | > 0.11$, as in the shock identification scheme for the 3D volume (see Paper I). For X-ray shocks, $\Mxtwo$ is estimated from the temperature jump, i.e.
$T_{X,2}/T_{X,1} = (5 \Mxtwo ^2 -1)(\Mxtwo ^2 + 3)/(16 \Mxtwo ^ 2)$. The larger of the values of $\Mxtwo$ estimated along the two primary ($x$ and $y$) directions on projected maps is assigned as the observed Mach number, i.e., $\Mxtwo = \max (M_{X, x}^\mathrm{obs}, M_{X, y}^\mathrm{obs})$. Again, only X-ray shocks with $\Mxtwo \ge 1.5$ are considered. Note that a pixel identified as an X-ray shock may not be a `radio pixel', and vice versa. In fact, only a small fraction ($\sim 7 \%$) of radio pixels are identified as X-ray shocks. Although the fraction may differ in real observations, this suggests that X-ray observations could miss a substantial fraction of radio shocks. Since this is caused by the smoothing in projection, the problem may be difficult to overcome, even if the angular resolution of X-ray observations is improved. \section{Projection Effects on Radio/X-ray Map}\label{sec:projection} In this section, we examine how the projection affects radio/X-ray observations of shocks, such as the morphologies and derived shock parameters. As a representative example, we consider a cluster from the $100 \Mpch$ box simulation with $2048^3$ grid zones. This cluster is identical to the one that appears in the left panels of Figure 1 and also in Figure 7 of Paper I. It has the X-ray emission-weighted temperature $k_B T_{X, {\rm cl}} \approx 2.7$ keV and $\rtwo \approx 1.81 \Mpch$. The shock that produces the largest amount of CR protons in this cluster is an infall shock with $M_s \approx 5$ and $f_\mathrm{CR} \approx 1.5 \times 10^{47}~\mathrm{erg~s^{-1}} (\Mpch)^{-2}$ (see Paper I). \fref{map} shows projected maps of the cluster, viewed from four different angles, zoomed around the area of $r \leq \rtwo$ (dashed circle), where $r$ is the distance from the cluster center in the projected plane. Here, the synchrotron flux, $S_{1.4}$, is shown as contours of black solid lines, superposed on the bolometric X-ray surface brightness, $\Ixbol$, in gray-scale.
The lowest contour level for the synchrotron flux is $S_\mathrm{1.4, min} = 10^{-2}\mJy$, which could be detected by future radio observatories such as the SKA. The projection direction of each map is given in terms of the polar and azimuthal angles, $\theta$ and $\phi$, respectively. The four maps in \fref{map} exhibit morphologies of radio structures that are distinct from each other. For instance, the radio map in the upper-left panel seems to show paired structures on opposite sides of the cluster, which could be interpreted as radio relics due to a pair of merger shocks. Here, we tag the radio pixels associated with the left structure as ``L-relic'' (red color), and those associated with the right structure as ``R-relic'' (green color). In the other three panels, if the shock zone with the largest synchrotron contribution along a given LoS belongs to either the L-relic or the R-relic, we color the corresponding radio pixel red or green, respectively. In the upper-right and lower-left panels, elongated radio structures with lengths $ \ga 1-2 \Mpch$ are composed of both red (L-relic) and green (R-relic) pixels. This illustrates that even a single connected radio structure in the sky may consist of a number of disconnected shock surfaces in the real 3D volume. So it could be misleading to try to extract the nature of the underlying shocks only from the morphology or shock parameters inferred from radio observations. The radio map in the lower-right panel, on the other hand, has a number of small structures, and the structure located near the center could be interpreted as a so-called ``radio mini-halo'' \citep[see, e.g.,][for reviews]{feretti2012, bruggen2012}. Here, we do not intend to address the nature of the observed paired structures or halo-like structures in detail; we just point out that the morphology of observed radio structures critically depends on the projection of the underlying 3D structures.
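The two derived Mach numbers of Section 2.5 amount to simple inversions of the relations quoted there, namely $\alpha = (M^2+1)/(M^2-1)$ and the Rankine-Hugoniot temperature jump for $\gamma = 5/3$. A minimal sketch (the function names are ours):

```python
import numpy as np

def mach_from_alpha(alpha):
    """Invert the integrated spectral index relation
    alpha = (M^2 + 1)/(M^2 - 1), valid in the test-particle DSA regime
    when the break frequency lies well below the observing band."""
    return np.sqrt((alpha + 1.0) / (alpha - 1.0))

def mach_from_tjump(t2_over_t1):
    """Invert the temperature jump (gamma = 5/3):
    T2/T1 = (5 M^2 - 1)(M^2 + 3)/(16 M^2), i.e. the positive root of
    5 M^4 + (14 - 16 R) M^2 - 3 = 0 with R = T2/T1."""
    r = np.asarray(t2_over_t1, dtype=float)
    b = 16.0 * r - 14.0
    m2 = (b + np.sqrt(b**2 + 60.0)) / 10.0
    return np.sqrt(m2)
```

For example, $\alpha = 5/3$ gives $M = 2$, and a temperature jump of $T_2/T_1 = 11/3$ recovers $M_s = 3$, consistent with the forward relation in Section 2.1.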
In \fref{slice}, we compare the spatial distributions of $T$, $\rho$, and $M_s$ in a slice (left panels) with those of $T_X$, $\Msynctwo$, and $\Mxtwo$ in 2D projection (right panels). The slice passes $0.37 \Mpch \sim 0.2 \rtwo$ away from the cluster center, and contains the most radio-luminous shock zones of the L and R-relics, which are projected at $(-0.4 \Mpch,$ $1.0 \Mpch)$ and $(0.4 \Mpch,$ $0.1 \Mpch)$ in the 2D maps, respectively. They are identified as infall shocks with $M_s \approx 5.7$ for the L-relic and $M_s \approx 3.9$ for the R-relic. The right panels are from the same map as that for the upper-left panel of \fref{map}, and the contours are drawn at the level of $S_\mathrm{1.4, min}$. For $\Msynctwo$, only pixels with $S_{1.4}\ge S_\mathrm{1.4, min}$ are plotted, while for $T_X$ and $\Mxtwo$, only pixels with $I_{\rm X} \ge I_{\rm X, min}=10^{-10} {\rm erg~s^{-1}~cm^{-2}~str^{-1}}$ are plotted. Note that the bolometric X-ray surface brightness in the outskirts near $r\sim \rtwo$ typically ranges over $(1-10)\times 10^{-9} {\rm erg~s^{-1}~cm^{-2}~str^{-1}}$ in observed clusters \citep[e.g.,][]{ettori09}. Comparison of $M_s$ in the slice map (left-bottom) and $\Msynctwo$ in the 2D map (right-middle) indicates that the Mach numbers and locations of shocks with $M_s \ga 3$ agree well in the two distributions, especially for the shocks associated with the L-relic. This is because the radio spectrum in projected maps is governed by one or a few radio-luminous shock zones with high Mach numbers along a given LoS. So $\Msynctwo$ derived from radio observations could be a good proxy for the Mach number of radio-luminous shocks, implying that the properties of such shocks could be extracted reasonably well from radio observations. On the other hand, there are noticeable differences between the distributions of $T$/$M_s$ in the slice maps (left-top/bottom) and those of $T_X$/$\Mxtwo$ in the 2D maps (right-top/bottom).
As seen in the top panels, the distribution of $T_X$ is smoother than that of $T$, partly because the contribution from the X-ray bright ICM dominates over the emissions from the WHIM in surrounding filaments, and also because the projection (integration along LoSs) inevitably irons out any sharp features. The temperature jump across a shock in the $T_X$ map thus looks reduced. We find that $\Mxtwo$ derived from X-ray observations tends to underestimate the actual shock Mach number (see the next section). We further examine in detail the region near the L-relic, where the slice maps indicate a shock of $M_s \approx 5 - 6$ formed by the infall of the WHIM along a filament. There is a positional shift of $\sim 100 \kpch$ between the locations of shocks in the $\Msynctwo$ (right-middle) and $\Mxtwo$ (right-bottom) maps. This is because $\Msynctwo$ picks up the infall shock, while, due to the dominant contribution to the X-ray from the hot and dense ICM along the LoS, $\Mxtwo$ picks up a foreground shock with a smaller Mach number formed in the ICM (see below). This may explain the spatial offsets between the shock surfaces inferred from radio and X-ray observations in some radio relics \citep[e.g.,][]{akamatsu2013,ogrean2013b}. \fref{3d} shows the distributions of the radio and X-ray emissivities, $j_{1.4}$ and $j_X$, as functions of $M_s$ for shocks within $r \leq \rtwo$ in our simulated clusters. Both distributions have rather large spreads, but the following points are clear. Firstly, $j_X$ decreases with increasing $M_s$, while $j_{1.4}$ increases with $M_s$, peaks at $M_s \sim 5$, and decreases slightly for higher $M_s$. These behaviors can be understood as follows. Weaker shocks, which tend to form in hot and dense regions near the cluster core, produce more $j_X$, while shocks formed in cluster outskirts with $M_s \sim$ several accelerate CRs and produce $j_{1.4}$ most efficiently.
Secondly, on average, $j_X$ varies over two orders of magnitude, while $j_{1.4}$ varies over 10 orders of magnitude. Although both the radio and X-ray surface brightnesses on projected maps are dominated by contributions from a small number of bright shock zones along each LoS, the tendency is much stronger in the case of radio. \section{Shock Properties Derived from Radio/X-ray Maps}\label{sec:stats} \subsection{Shock Pixels in Projected Maps}\label{sec:stats:point} In this section, we first examine the surface brightnesses and derived Mach numbers of pixels with shocks, i.e., $\Ssync$, $M_{1.4}$, $I_X$, and $M_X$, obtained from the projected radio/X-ray maps. As noted in Section 2.5, radio pixels which contain shocks in radio maps may not be identified as shocks in X-ray maps, and vice versa. So when we study a correlation between any two quantities, for instance, $\Msynctwo$ versus $\Mxtwo$, we use the subset of pixels in which both quantities are specified. \fref{xlum_sync_mach_all} displays the relations between $\Ssync$ and $I_X$, $\Msynctwo$ and $\Ssync$, and $\Mxtwo$ and $I_X$ for pixels within $r \leq \rtwo$ from the cluster center. The bottom panel shows that bright X-ray shocks with large $I_X$ tend to be weak with small $\Mxtwo$; the brightest X-ray shocks have $\Mxtwo \la 2$. The middle panel, on the other hand, shows that the synchrotron flux $\Ssync$ of radio pixels increases with $\Msynctwo$ up to $\sim 4$ and then decreases for larger $\Msynctwo$. These behaviors can be anticipated from those of $j_{1.4}$ and $j_X$ shown in \fref{3d}. As a result, the correlation between $\Ssync$ and $\Ixbol$ turns out to be rather poor, with wide variations, in the top panel. What is clear is that the shocks which are brightest in radio are not necessarily brightest in X-ray. As a matter of fact, the brightest X-ray shocks have modest radio brightnesses, and vice versa.
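Throughout this section, correlations between derived Mach numbers are quantified by Pearson's linear coefficient computed on the logarithms of the two quantities; for reference, a minimal sketch (the function name and sample arrays are ours, not our actual pixel samples):

```python
import numpy as np

def log_pearson_r(m_a, m_b):
    """Pearson linear correlation coefficient of log10 Mach numbers,
    as in r(log M_1.4, log M_X) quoted in the text."""
    la, lb = np.log10(np.asarray(m_a)), np.log10(np.asarray(m_b))
    return np.corrcoef(la, lb)[0, 1]
```

By construction, any two quantities related by a pure power law give $r = 1$ in log space, so $r$ measures the tightness of a power-law relation rather than of a linear one.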
\fref{machs_all} displays the relations among the mock-observed Mach numbers, $\Msynctwo$ and $\Mxtwo$, and the weighted Mach numbers, $\Msyncthree$ and $\Mxthree$, for pixels within $r \leq \rtwo$. As noted in the previous section, the X-ray Mach numbers tend to be smaller than the radio Mach numbers (upper panels). Again, this is mainly because X-ray observations tend to pick up weaker shocks than radio observations along given LoSs. The correlation between $M_{1.4}$ and $M_X$ is rather poor; the Pearson's linear correlation coefficient of $\Msynctwo$ and $\Mxtwo$ is $r(\log \Msynctwo, \log \Mxtwo)=0.11$ (upper-left panel), and that of $\Msyncthree$ and $\Mxthree$ is $r(\log \Msyncthree, \log \Mxthree) = 0.22$ (upper-right panel). So it could be misleading if $\Mxtwo$ were inferred from $\Msynctwo$, and vice versa. It is expected that $M_{\rm obs}$ and $M_{\rm w}$ for the same band are better correlated. We find that $\Msynctwo$ is often smaller than $\Msyncthree$, as shown in the lower-left panel, but the correlation between them is quite good, with $r(\log \Msynctwo, \log \Msyncthree) = 0.77$. The correlation between $\Mxtwo$ and $\Mxthree$ in the lower-right panel, on the other hand, is still poor, with $r(\log \Mxtwo, \log \Mxthree) = 0.24$. This demonstrates that the smoothing in projection makes the estimation of shock properties harder in X-ray observations than in radio observations. In \fref{rad_all}, we examine the radial distributions of the observable properties of shock pixels. Both $\Msynctwo$ and $\Mxtwo$ tend to increase with radius, and $\Msynctwo$ shows a larger variance than $\Mxtwo$. Shocks with X-ray Mach numbers $\Mxtwo \ga$ a few are rare within $r \leq \rtwo$, while shocks with radio Mach numbers $\Msynctwo$ up to several can be found. Radio-bright shocks, for instance those with $\Ssync\ge S_\mathrm{1.4, min} = 10^{-2}\mJy$, are found mostly at $r \ga 0.2 \rtwo$.
This suggests that future radio observations could detect many more shock structures in cluster outskirts. On the other hand, X-ray bright shocks are preferentially located, and so found, close to the center. The bolometric X-ray surface brightness of X-ray shocks follows $\Ixbol \propto (r/\rtwo)^{-\beta}$ with $\beta\approx 3-5$ for $r/\rtwo > 0.3$, which is consistent with the observed X-ray profiles in cluster outskirts \citep[e.g.,][]{ettori09}. \subsection{Radio Relics in Projected Maps} We build a catalog of ``synthetic radio relics'' by finding connected structures in our radio maps which meet the following conditions: (1) all pixels within a structure have $\Ssync \geq S_\mathrm{1.4, min}$, and (2) a structure has at least five pixels. Note that the area of a single pixel is $\Delta A=(L/N_g)^2$ ($N_g=1024$ or $2048$). So five pixels correspond to, for instance, $A_\mathrm{min} \approx 0.05 (\Mpch)^2$ in simulations of the $L = 100 \Mpch$ box with $1024^3$ grid zones. Using these conditions, we obtain 9583 radio relic samples: 4626 from the simulations of $L = 100 \Mpch$ with $1024^3$ grid zones, 4850 from the simulations of $L = 200 \Mpch$ with $1024^3$ grid zones, and 107 from the simulation of $L = 100 \Mpch$ with $2048^3$ grid zones. We then assign the following quantities to the synthetic radio relics. The average distance from the cluster center, $\rrelic$, is defined as the synchrotron-weighted average of the distances of the pixels that belong to a radio relic, $\rrelic \equiv {\sum_{\rm pixel} r \Ssync}/{\sum_{\rm pixel} \Ssync }$, where $r$ is the distance of a pixel from the cluster center. The derived Mach numbers of radio relics are defined as the synchrotron/X-ray weighted averages of the corresponding derived Mach numbers of the pixels, $\Msynctwo$, $\Msyncthree$, $\Mxtwo$, and $\Mxthree$.
For example, $M_{1.4,\rm relic}^{\rm obs} \equiv {\sum_{\rm pixel} \Msynctwo \Ssync}/{\sum_{\rm pixel} \Ssync }$ and $M_{X,\rm relic}^{\rm obs} \equiv {\sum_{\rm pixel} \Mxtwo I_X}/{\sum_{\rm pixel} I_X }$. The number of pixels with $\Mxtwo$ can be smaller than that with $\Msynctwo$, since $\Mxtwo$ is specified only at the pixels identified as X-ray shocks. In fact, about 40\% of the sample radio relics contain no pixel with $\Mxtwo$, so the corresponding X-ray Mach number is not assigned to them. The synchrotron power at 1.4 GHz and the bolometric X-ray luminosity of a radio relic are calculated as $\Psync = 4\pi \Delta A \sum_{\rm pixel} \Ssync$ and $\Lxbol = 4\pi \Delta A \sum_{\rm pixel} I_X $, respectively. \fref{machs_relic} displays the relations among the derived Mach numbers for the sample radio relics. The relations follow those for pixels in projected maps shown in \fref{machs_all} and so look similar, but the correlations are tighter, with $r(\log \Msynctworelic,$ $\log \Mxtworelic)$ $= 0.65$, $r(\log \Msyncthreerelic,$ $\log \Mxthreerelic)$ $= 0.64$, $r(\log \Msynctworelic,$ $\log \Msyncthreerelic)$ $= 0.99$, and $r(\log \Mxtworelic,$ $\log \Mxthreerelic$)$ = 0.38$. In particular, the correlation between $\Msynctworelic$ and $\Msyncthreerelic$ is very good, indicating that the Mach number estimated from the radio spectral index would be a fair representation of the suitably averaged Mach number of the shocks associated with radio relics. Most of our sample radio relics have $\Msynctworelic \ga 2.5$. This is partly because, with the model CR acceleration efficiency we employ \citep{kang2013}, the amount of CR electrons emitting synchrotron radiation is very small for weak shocks with $M_s \la 2.5$ (see Figure 3 of Paper I). If the CR electron acceleration at weak shocks were more efficient than in our model, radio relics with smaller $\Msynctworelic$ could be more common. As noted in the Introduction, merger shocks are expected to have mostly $M_s \la 3$.
In fact, we find that our synthetic radio relics normally involve projections of multiple shocks along LoSs, resulting in different morphologies for different viewing angles (see \fref{map}). In $\sim 40\%$ of the sample radio relics, the brightest pixels with the largest $\Ssync$ include infall shocks along their LoSs, suggesting that infall shocks may account for some radio relics with flat spectra. This seems reasonable in the sense that infall shocks could be the major sources of CRs in clusters, as discussed in Paper I. \fref{rad_relic} displays the radial distributions of $\Msynctworelic$, $\Mxtworelic$, $\Psync$, and $\Lxbol$. The upper panels show that most of our synthetic radio relics are found at $\rrelic \ga 0.2 \rtwo$. Although there are wide variations, $\Msynctworelic$ and $\Mxtworelic$ tend to increase toward the outskirts, and on average the radio Mach number is larger than the X-ray Mach number. The lower-left panel shows that the radio power at a given radial distance can vary widely, yet the most powerful radio relics are found at $0.3\la \rrelic/\rtwo \la 0.7$. This results mainly from the combined effects of the radial distributions of the shock kinetic energy flux and $M_s$, the strong dependence of the CR acceleration efficiency on $M_s$, and the geometrical increase of the relic surface area toward the outskirts \citep[see also][]{vazza2012a}. The lower-right panel shows a strong radial decline of the bolometric X-ray luminosity of the sample radio relics (not including the contributions from the background ICM) toward the outskirts. This mostly reflects the radial decreases of the gas density and temperature; the effects of the increases of the Mach number and relic surface area are not large here. \subsection{Comparison with Observational Data} We compare our synthetic radio relics with observed radio relics.
Table 1 lists the properties of the observed radio relics we use; here, $\zcl$ is the redshift of associated clusters, $\lrelic$ is the largest linear scale on the sky, and $\alpha$ is the integrated radio spectral index. They are chosen from \citet{vanweeren2009} and \citet{bonafede2012} with the following criteria: (1) $\zcl \la 0.2$ (our radio relic sample is constructed at $z=0$), (2) $\lrelic \ge A_\mathrm{min}^{1/2}$ with $A_\mathrm{min} = 0.05 (\Mpch)^2$, and (3) both $\Psync$ and $\alpha$ available. Note that $\alpha$ in Table 1 was calculated between $74$ MHz and $1.4$ GHz. Although some observations suggested that the radio spectra of radio relics could have curvatures and $\alpha$ may increase at high frequencies $\ga 10$ GHz \citep[e.g.,][]{vanweeren2009,stroe13, trasatti15}, we ignore this effect here. \fref{relic} shows $\Psync$ vs $\rrelic$ and $\alpha$ for synthetic (colors) and observed (filled squares) radio relics. The distribution of $\Psync$ of observed radio relics seems to fall within the range predicted by our synthetic observation. On the other hand, there are noticeable differences in the distributions of $\rrelic$ and $\alpha$ between synthetic and observed radio relics. First, four of the twelve observed radio relics are found with $\rrelic \leq 0.43$ Mpc, while the fraction of synthetic radio relics with $\rrelic \leq 0.43$ Mpc is very small, less than $\sim 1 \%$. Second, while the integrated spectral index of synthetic radio relics lies in the range $1.0 < \alpha \la 1.38$ (corresponding to $ M_s \ga 2.5$), some of the observed radio relics have steeper spectral indices. For example, the radio relics found at Abell 548b B and Abell S753 have $\alpha \ga 2$ ($M_s \la 1.7$), and the radio relic at Abell 2345W has $\alpha\approx 1.5$ ($M_s \approx 2.1$).
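The quoted conversions between the integrated spectral index $\alpha$ and $M_s$ can be reproduced with the standard test-particle DSA relation, $\alpha = (M_s^2+3)/[2(M_s^2-1)] + 1/2$ (the injection slope plus the $+1/2$ steepening of the integrated spectrum); this specific closed form is our assumption for illustration, not a formula quoted from the paper:

```python
import math

def alpha_int(m_s):
    """Integrated spectral index for a shock of sonic Mach number m_s,
    assuming test-particle DSA: alpha_inj = (M^2+3)/(2(M^2-1)), plus 1/2."""
    return (m_s**2 + 3.0) / (2.0 * (m_s**2 - 1.0)) + 0.5

def mach_from_alpha(alpha):
    """Invert alpha_int(m_s); valid for alpha > 1 (i.e., M_s > 1)."""
    a_inj = alpha - 0.5
    return math.sqrt((2.0 * a_inj + 3.0) / (2.0 * a_inj - 1.0))
```

With this form, $M_s = 2.5$ gives $\alpha \approx 1.38$ and $\alpha = 2$ gives $M_s \approx 1.7$, consistent with the limits quoted above.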
The discrepancies could be partly because weak shocks close to the cluster core might be under-represented in our structure formation simulations due to the limited spatial resolution and the omission of some non-gravitational processes. But it is also possible that weak shocks accelerate CR electrons more efficiently than we assume here, as hinted by recent PIC simulations \citep[see, e.g.][]{guo14,park15}. {On the other hand, the re-acceleration of ``pre-existing'' CR electrons, which is not considered in this study, could play an important role in the production of CR electrons at weak shocks with $M_s \la 2$ \citep[e.g.][]{kang2012, pinzke2013}. Especially, if fossil relativistic electrons are present in the form of isolated clouds, instead of being spread throughout the general ICM space, the relative rareness of radio relics can be understood, as discussed in the Introduction \citep{shimwell15,kangryu2015}. In such a scenario, the `lighting-up' of radio relics is governed mainly by the occasional encounters of ICM shocks with fossil electron clouds, instead of the Mach-number-dependent DSA efficiency and the shock kinetic energy flux. As a result, the frequency and radio luminosity of radio relics may depend less sensitively on the shock Mach number, and much weaker shocks could be turned on as radio relics with steeper spectral indices. These features are more consistent with the observational data in Figure 9. Moreover, only a small fraction of ICM shocks become radio sources, for a period much shorter than the cluster dynamical time scale \citep{kangryu2015}. So the model naturally explains why radio relics are rare, while structure formation simulations predict that shocks should be ubiquitous in the ICM \citep[e.g.][]{ryu2003,vazza2012a}.
The exploration of this scenario requires extensions of the models we employ here, and so is beyond the scope of this paper.} \section{Summary}\label{sec:summary} The existence of shocks in clusters of galaxies has been established through observations of radio relics due to synchrotron emission from shock-accelerated CR electrons \citep[e.g.,][]{bonafede2012} as well as through X-ray observations of shock discontinuities \citep[e.g.,][]{markevitch2007}. However, it is not clear whether the properties of shocks inferred from radio and X-ray observations represent their true nature, because we can only measure the quantities projected onto the sky, integrated along LoSs. {In fact, for a few relics such as the Toothbrush relic and the radio relic in A2256, radio and X-ray observations have reported inconsistencies in quantities such as the shock Mach number and position \citep{akamatsu2013, ogrean2013b,trasatti15}.} {In this paper, we explored a scenario in which electrons freshly injected and accelerated to high energies via the DSA process at structure formation shocks produce radio relics. We constructed synthetic maps of galaxy clusters in radio and X-ray by employing the cosmological hydrodynamic simulation data reported earlier in Paper I.} First, the volume-averaged synchrotron emissivity at shock zones was calculated by adopting the CR electron acceleration efficiency based on a DSA model \citep{kang2013} and magnetic fields based on a turbulent dynamo model \citep{ryu2008}. The bolometric X-ray emissivity at grid zones was also calculated using the gas density and temperature from simulations. Then, mock maps of the synchrotron and X-ray surface brightnesses in 2D projection were produced by integrating volume emissivities along LoSs for simulated clusters. In the synthetic maps, the properties of shocks and radio relics, such as the shock Mach number and location, were examined in detail. The main findings can be summarized as follows.
{1) In most cases, radio and X-ray shocks in 2D maps are the outcomes of the projection of multiple shock surfaces along LoSs, because shocks are abundant in the ICM, with a mean separation of shock surfaces of $\sim 1\Mpch$. As a result, the morphology of shock distributions in 2D maps depends on the projection direction; for the same cluster, very different morphologies may result for different viewing angles.} {2) Synchrotron emissivity depends sensitively on the shock Mach number, especially with our DSA model for the CR electron acceleration, while bremsstrahlung emissivity depends on the gas density and temperature. Hence, radio observations tend to pick up shocks of $M_s \sim$ a few to several along a LoS, while X-ray observations preferentially select weaker shocks ($M_s \la 2$) with high density and temperature. Consequently, the properties of shocks in 2D projected maps could be different in radio and X-ray observations. The shock Mach number estimated with X-ray observations tends to be smaller than that from radio observations, if a radio relic consists of multiple shocks along LoSs. In addition, the location of shocks from X-ray observations could be shifted with respect to that from radio observations.} {3) For radio relics, the Mach number estimated from the radio spectral index seems to be a fair representation of the suitably averaged Mach number of shock surfaces associated with them.
On the other hand, the discontinuities in X-ray temperature tend to be smeared due to projection effects, including possible multiple shocks and multiple ICM components, resulting in an X-ray Mach number lower than the real Mach number of the associated shock.} {4) When the properties of our synthetic radio relics are compared with those of observed radio relics, there are clear differences in the statistics of radial location and spectral index; more radio relics have been observed closer to the cluster core and with steeper spectral indices than our synthetic observation predicts. This discrepancy may imply that weak shocks in high beta plasmas in fact accelerate CR electrons more efficiently than we model here \citep[see, e.g.][]{guo14,park15}. Alternatively, as shown in the previous studies of \citet{kang2012} and \citet{pinzke2013}, the re-acceleration of pre-existing electrons in the ICM may enhance the production of CR electrons at weak shocks with $M_s \la 2$. Finally, we could also conjecture that radio relics might be activated only when shocks encounter clouds, which contain fossil electrons either accelerated earlier by shocks/turbulence or left over from old radio jets \citep[see, e.g.][]{shimwell15, kangryu2015}. This model may explain how very weak shocks can turn on the synchrotron emission and why radio relics are rare relative to putative shocks in galaxy clusters.} \begin{acknowledgements} {The authors thank the anonymous referee for his/her thorough review and constructive suggestions that led to an improvement of the paper.} SEH was supported by the National Research Foundation of Korea through grant 2007-0093860. HK was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2014R1A1A2057940). The work of DR was supported by the 2014 Research Fund of UNIST (1.140035.01). \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Understanding hydrodynamic phenomena at the nanoscale has become increasingly important with the advent of numerous new technologies enabled by nanofabrication methods \cite{gates2005,bocquet2010}. Predicting hydrodynamic forces and volumetric rates in nanoscale flows is particularly relevant to the design of nano-electromechanical systems (NEMS), nanowire nanosensors, and nanofluidic devices for applications that range from bioengineering to materials science and renewable energy \cite{hong2003,mijatovic2005,patolsky2005,song2008,ekinci2010,JFM2010}. At length scales on the order of one nanometer, which corresponds roughly to the size of three water molecules, fundamental assumptions upon which classical hydrodynamic equations are derived need to be thoroughly evaluated. For example, experimental studies indicate that solid-like transitions and viscoelastic behavior can arise for simple liquids confined in nanoscale lubrication gaps \cite{israelachvili1989,gee1990,klein1995}. Transitions to viscoelastic behavior of simple fluids are found to occur when hydrodynamic time scales are comparable to the relaxation time of the fluid, which in micro/\-nanoscale confinement can exceed its bulk value by several orders of magnitude \cite{hu1991,POF2009}. Another fundamental issue for continuum-based descriptions of micro/\-nanoscale flows is the difficulty of determining proper boundary conditions (e.g., no slip, Navier/\-Maxwell slip) that are determined by physicochemical properties of the fluid and the nanoscale structure of the confining solid surfaces \cite{neto2005,lauga2007,colosqui2013slip}. In this context, molecular dynamics (MD) simulations have become a valuable tool to study hydrodynamic phenomena in nanoscale confinement and to help in developing and validating continuum-based descriptions for nanoscale flows.
Previous works have demonstrated the use of different MD techniques (e.g., equilibrium/\-non-equilibrium) to determine hydrodynamic forces, boundary conditions, and transport coefficients resulting from complex interfacial phenomena in nanoscale confinement \cite{koplik1995,thompson1992,koplik1988,travis2000,cieplak2001,leng2005,leng2006}. Along similar lines, the present work resorts to fully atomistic non-equilibrium MD simulations in order to develop (continuum-based) hydrodynamic descriptions that yield quantitative predictions for nanoscale lubrication flows past micro/\-nanoscale bodies that are perfectly static or subject to thermal motion. Hydrodynamic lubrication theory is suitable to study flows within lubrication films having nonuniform thickness and arbitrary shape. Hydrodynamic models for nanoscale films, however, can encounter significant limitations due to complex interfacial phenomena that arise near fluid-solid interfaces. For example, nanoscale roughness of the confining solid surfaces can produce lubrication forces resulting from both liquid-solid and solid-solid friction. Moreover, layering of liquid atoms at the surface of a crystalline solid and the dynamic rearrangement of the induced solid-like structures can lead to strong structural forces and so-called ``stick-slip'' motion \cite{thompson1990,reiter1994,bhushan1995,urbakh2004,lei2011}. Numerous experimental studies using a surface force apparatus or atomic force microscope have probed and characterized lubrication forces in molecularly-thin films at different shear rates \cite{luengo1996,carpick1997,kavehpour2004,ruths2008,bonaccurso2008,israelachvili2010}.
Notably, in the case of liquids with simple molecular structure (e.g., molecular chains such as n-alkanes) confined by molecularly smooth surfaces, tribological studies report that the shear viscosity within the lubrication film does not vary significantly with respect to the liquid bulk value for shear rates as high as $10^5$~s$^{-1}$ and lubrication gaps as small as ten molecular diameters \cite{israelachvili1989,ruths2011,al2013effects}. Furthermore, these studies indicate that the slip plane lies within one molecular diameter from the solid surface independently of the presence of electrostatic double-layers or structural forces. For the particular case of water confined by smooth silica surfaces the same conclusions hold for lubrication gaps as small as 2 nm (i.e., about the size of six water molecules) \cite{israelachvili1989,horn1989,kuhl1998,raviv2004}. Hence, experimental evidence indicates that hydrodynamic lubrication theory is able to yield reliable predictions for atomically smooth surfaces and molecularly-thin lubrication gaps (i.e., as thin as five to ten molecular layers) under a wide range of flow conditions (e.g., Couette-type flows with moderate-to-high shear rates). In this work we study creeping flow of a simple molecular liquid past a colloidal solid cylinder confined in a slit channel and lying at arbitrary distances from the channel centerline. In Sec.~II we first study the case where the cylinder is perfectly static; employing hydrodynamic lubrication theory, we obtain analytical expressions for the drag forces and flow rates in finite channels. The studied problem involves two lubrication gaps of variable height that can become molecularly thin as the cylinder approaches contact with a channel wall. In Sec.~III we describe the MD technique employed and the simulations performed to provide a microscopic description of nanoscale flows, without relying on conventional continuum assumptions.
In Sec.~IV we assess the validity of the hydrodynamic lubrication approach for the case of static cylinders of micro/nanoscale dimensions. Drag forces and volumetric flow rates predicted for symmetric and asymmetric confinement, in channels with different lengths, are compared against numerical solution of the Navier-Stokes (N-S) equations and fully atomistic MD simulations. In Sec.~V, employing the predictions obtained for static conditions, we study the case of a colloidal cylinder that performs random displacements induced by thermal motion. The approach in this section indicates ways in which hydrodynamic lubrication models can be applied to predict mean drag forces and flow rates for confined nanoscale bodies (e.g., nanoparticles, macromolecules, nanowires, nanobeams) that are subject to thermal motion. \section{Poiseuille flow past a static cylinder arbitrarily confined} \label{sec:static} The geometry of the studied flow problem is illustrated in Fig.~\ref{fig:1}: a circular cylinder of radius $R$ is fully confined within a slit channel of height $H$, width $W \gg H$, and length $L\gg H$. Under the studied conditions the flow is assumed to be steady, two-dimensional, incompressible, isothermal, and Newtonian; the fluid density $\rho$ and shear viscosity $\mu$ are thus assumed constant. The cylinder center is located at ($x=0,y=y_c$) and thus lies at a vertical distance $\delta \times H=H/2-y_c$ from the channel centerline (cf. Fig~\ref{fig:1}). To characterize the studied flow we will employ the confinement ratio \begin{equation} \label{eq:confinement_ratio} k=\frac{2R}{H} \end{equation} and the dimensionless off-center displacement, or asymmetry parameter, \begin{equation} \label{eq:asymmetry_parameter} \delta=\frac{1}{2}-\frac{y_c}{H}. \end{equation} In addition, the dimensionless channel length $l=L/H$ will be employed to characterize finite length effects on the volumetric flow rate and drag force for long but finite channels ($l\gg1$).
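For bookkeeping, the three dimensionless groups can be collected in a small helper (a sketch; the function name is ours, and the admissibility check $|\delta|\le(1-k)/2$ follows directly from the geometry):

```python
def confinement_params(R, H, y_c, L):
    """Dimensionless groups: confinement ratio k = 2R/H, asymmetry
    parameter delta = 1/2 - y_c/H, and channel length l = L/H."""
    k = 2.0 * R / H
    delta = 0.5 - y_c / H
    l = L / H
    # the cylinder fits inside the channel only if |delta| <= (1 - k)/2
    if abs(delta) > (1.0 - k) / 2.0:
        raise ValueError("cylinder overlaps a channel wall")
    return k, delta, l
```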
\begin{figure}[h] \vspace{-0pt} \begin{center} \includegraphics[width=0.65\textwidth]{fig1.pdf} \end{center} \vspace{-0pt} \caption{Plane Poiseuille flow past a confined cylinder with arbitrary off-center displacement ($|\delta|\leq(1-k)/2$) and high confinement ratio ($k=2R/H>0.5$). Flow at a constant volumetric rate is driven by a pressure differential ($\Delta p=p_{in}-p_{out}$) and/or a constant body force ($\rho g$) in the $x$-direction. The channel is considered to be sufficiently long to develop a parabolic velocity profile $U(y)=U_{max} (1-\delta^2)$ at the inlet and outlet ($x=\pm L/2$). \label{fig:1}} \vspace{-0pt} \end{figure} Flow in the $x$-direction at constant volumetric flow rate (per unit width) $Q$ [m$^2$/s] is sustained by a driving force $G/W=(p_{in}-p_{out}) H + \rho g (H L -\pi R^2)$; here, $p_{in}$ and $p_{out}$ are the static pressures at the channel inlet and outlet, respectively, and $\rho g$ is a constant body force active on the fluid phase. The channel is assumed to be sufficiently long so that a parabolic velocity profile $u(\pm L/2,y)=U_{max} (1-\delta^2)$ with $U_{max}=3Q/2H$ is established at the channel inlet and outlet. The studied conditions correspond to creeping flows with very small Reynolds numbers $Re=\rho U_{max} H/\mu \ll 1$. The drag coefficient is thus defined as \begin{equation} \label{eq:drag_coefficient} \lambda(k,\delta)=\frac{D}{\mu U}, \end{equation} where $D$ is the drag force per unit width and $U=u(-L/2,y_c)$ is the ``unperturbed'' flow velocity upstream of the cylinder center.
For the case of symmetric confinement ($\delta=0$) and moderate confinement ratio $0.2\lesssim k\lesssim 0.5$, expressions for the drag coefficient are available, obtained from an approximate solution of the Stokes equations with no-slip boundary conditions: \begin{equation} \lambda(k,0)=\frac{4\pi}{A_0-(1+0.5k^2+A_4k^4+A_6 k^6+A_8k^8)\ln\,k+B_2k^2+B_4k^4+B_6k^6+B_8k^8} \label{eq:faxen} \end{equation} with $A_0=0.9156892732$, $A_4= 0.05464866$, $A_6=-0.26462967$, $A_8= 0.792986$, $B_2= 1.26653975$, $B_4=-0.9180433$, $B_6= 1.877101$, and $B_8=-4.66549$ as derived by Fax\'{e}n \cite{faxen1946}. A similar expression for symmetric confinement and $k\lesssim0.5$ has been derived via analytical solution of the Oseen equations by Takaisi \cite{takaisi1956}. For the case of asymmetric confinement ($\delta\ne0$), which has received considerably less attention, a perturbative solution of the biharmonic equation performed by Harper \cite{harper1967} has provided an approximate analytical expression for $\lambda(k,\delta)$ that is only valid for $k\ll1$ (i.e., for $R\ll H$). This work is primarily concerned with flow configurations with arbitrary off-center displacements of the cylinder, $0\leq|\delta|\leq(1-k)/2$, and high confinement ratio, $k \to 1$, where a lubrication flow approximation is valid. \subsection{Hydrodynamic Lubrication Theory \label{sec:lubrication}} For the prediction of drag forces and flow rates via hydrodynamic lubrication theory we will assume Newtonian flow regimes and no-slip boundary conditions. Results from numerical solution of the full Navier-Stokes (NS) equations and MD simulations for different configurations ($k\ge0.5$, $\delta \ge 0$, $l>5$) will be compared against the derived analytical expressions. Results from fully atomistic MD simulations will assess the validity of the adopted assumptions in the case of nanoscale flows of simple molecular liquids confined by wettable surfaces that are atomically smooth.
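Fax\'en's series expression is easy to mistype, so a direct transcription may be useful for checking values in the moderate-confinement range (a Python sketch of Eq.~\ref{eq:faxen} with the coefficients above):

```python
import math

# Coefficients of Faxen's (1946) series solution, as quoted in the text
A0, A4, A6, A8 = 0.9156892732, 0.05464866, -0.26462967, 0.792986
B2, B4, B6, B8 = 1.26653975, -0.9180433, 1.877101, -4.66549

def faxen_lambda(k):
    """Drag coefficient lambda(k, 0) for a symmetrically confined cylinder,
    valid for moderate confinement 0.2 <~ k <~ 0.5."""
    log_term = (1.0 + 0.5*k**2 + A4*k**4 + A6*k**6 + A8*k**8) * math.log(k)
    poly = B2*k**2 + B4*k**4 + B6*k**6 + B8*k**8
    return 4.0 * math.pi / (A0 - log_term + poly)
```

As expected for an increasingly confined cylinder, the drag coefficient grows monotonically with $k$ over this range.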
The local height along the channel is $h_{\pm}(x)=H$ for $|x|\ge R$ and \begin{equation} h_{\pm}(x)=\left(\frac{1}{2}\pm\delta\right)H-\sqrt{R^2-x^2}~~\mathrm{for}~~|x|\le R, \label{eq:h} \end{equation} where the $(+)$ and $(-)$ signs correspond to the lubrication gaps above and below the cylinder, respectively. While for $\delta=0$ both lubrication gaps are equal, either gap fully closes for $|\delta| = (1-k)/2$. In clearing the cylinder the flow rate $Q$ splits into $Q_-=\alpha(k,\delta) Q$, flowing below the cylinder, and $Q_+=(1-\alpha)Q$ flowing above the cylinder. The split factor $\alpha$ can be determined by equating the pressure drops $\Delta p_-=\Delta p_+=p(R)-p(-R)$ across the bottom and top lubrication gaps, which yields the following equation: \begin{equation} \alpha(k,\delta)\int_{-R}^R \frac{dx}{h^{3}_{-}}=[1- \alpha(k,\delta)] \int_{-R}^R \frac{dx}{h^{3}_{+}}. \label{eq:alfa-dp} \end{equation} While the split factor in Eq.~\ref{eq:alfa-dp} takes the expected value $\alpha(k,0)=1/2$ in symmetric confinement, $\alpha(k,\delta)\to 1$ for $\delta \to (k-1)/2$ (i.e., when closing the top gap) and $\alpha(k,\delta)\to 0$ for $\delta \to (1-k)/2$ (i.e., when closing the bottom gap). After establishing the flow rates $Q_\pm$ via Eq.~\ref{eq:alfa-dp}, it is straightforward to determine the drag coefficient $\lambda(k,\delta)$ using conventional lubrication analysis. The full derivation of the drag coefficient is presented in the Appendix, while the main analytical results are summarized in this section. The drag coefficient predicted via lubrication theory is \begin{equation} \lambda(k,\delta)=\frac{8}{(1-\delta^2)}\{\alpha [f_p(k,\delta)-f_s(k,\delta)]-(1-\alpha)f_s(k,-\delta)\}, \label{eq:lambda} \end{equation} with the flow split factor given by the explicit expression \begin{equation} \alpha(k,\delta)=\frac{1}{1+f_p(k,\delta)/f_p(k,-\delta)}. 
\label{eq:alpha} \end{equation} The shape functions in Eqs.~\ref{eq:lambda}--\ref{eq:alpha} are \begin{equation} \label{eq:fp} f_p(k,\delta)=\frac{3}{4}\frac{k^2(1/2-\delta)}{b^{5/2}}\left[\frac{\pi}{2}+\mathrm{atan}\left(\frac{k}{2b^{1/2}}\right)\right]+\frac{3k^3}{8b^2}\frac{1}{(1/2-\delta)}+\frac{1}{(1/2-\delta)}\frac{k}{b}, \end{equation} and \begin{equation} \label{eq:fs} f_s(k,\delta)=\frac{k^2}{4b^{3/2}}\left[\frac{\pi}{2}+\mathrm{atan}\left(\frac{k}{2b^{1/2}}\right)\right]+\frac{k}{2b}. \end{equation} Here, the confinement parameter $b=(y_c^2-R^2)/H^2\equiv (1/2-\delta)^2-k^2/4$ is introduced for a more compact definition of the shape functions $f_p$ and $f_s$, accounting for pressure and shear drag contributions, respectively. In the limit $k\to 1$ the dominant contribution in Eq.~\ref{eq:lambda} is due to pressure forces that are proportional to the lubrication parameter $\epsilon^{-\frac{5}{2}}$; here $\epsilon=(H-2R)/2R=(1-k)/k$ is the nondimensional effective gap height. Hence, Eq.~\ref{eq:lambda} predicts two limit cases: \begin{equation} \lambda(k\to 1,0)=\frac{12\pi}{\sqrt{2}}\epsilon^{-\frac{5}{2}} \label{eq:highk0} \end{equation} for cases of symmetric confinement $\delta=0$; and \begin{equation} \lambda(k\to 1,\delta_{max}) =3\pi\epsilon^{-\frac{5}{2}} \label{eq:highkd} \end{equation} for the maximum cylinder displacement $|\delta_{max}|=(1-k)/2$ where one of the lubrication gaps is fully closed. Eq.~\ref{eq:lambda} recovers the asymptotic behavior $\lambda\propto\epsilon^{-\frac{5}{2}}$ in lubrication flows as $\epsilon\to 0$ \cite{jeffrey1981,stone2005,ben2004}. Notably, $\lambda(k\to 1,\delta_{max})=\lambda(k\to1,0)/(2\sqrt{2})$, and for high confinement ratios there is a significant reduction in the drag coefficient as the cylinder approaches contact with the top or bottom channel wall. Depending on the particular application, either the flow rate $Q$ or the driving force $G$ (i.e., pressure differential and body forces) is prescribed.
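Eqs.~\ref{eq:lambda}--\ref{eq:fs} and the $k\to1$ limits can be cross-checked numerically; the sketch below (function names are ours) reproduces $\alpha(k,0)=1/2$, the closing-gap limit $\alpha\to0$, and the symmetric asymptote of Eq.~\ref{eq:highk0}:

```python
import math

def _b(k, delta):
    # compact confinement parameter b = (1/2 - delta)^2 - k^2/4
    return (0.5 - delta)**2 - 0.25 * k**2

def f_p(k, delta):
    """Pressure shape function, Eq. (fp)."""
    b = _b(k, delta)
    brk = 0.5 * math.pi + math.atan(0.5 * k / math.sqrt(b))
    return (0.75 * k**2 * (0.5 - delta) / b**2.5 * brk
            + 3.0 * k**3 / (8.0 * b**2) / (0.5 - delta)
            + k / b / (0.5 - delta))

def f_s(k, delta):
    """Shear shape function, Eq. (fs)."""
    b = _b(k, delta)
    brk = 0.5 * math.pi + math.atan(0.5 * k / math.sqrt(b))
    return 0.25 * k**2 / b**1.5 * brk + 0.5 * k / b

def split_alpha(k, delta):
    """Fraction of Q flowing through the bottom gap, Eq. (alpha)."""
    return 1.0 / (1.0 + f_p(k, delta) / f_p(k, -delta))

def drag_lambda(k, delta):
    """Drag coefficient lambda(k, delta), Eq. (lambda)."""
    a = split_alpha(k, delta)
    return 8.0 / (1.0 - delta**2) * (
        a * (f_p(k, delta) - f_s(k, delta)) - (1.0 - a) * f_s(k, -delta))
```

At, e.g., $k=0.999$ and $\delta=0$ this exact expression already agrees with the asymptote $12\pi\epsilon^{-5/2}/\sqrt{2}$ to within a few percent, and off-center displacement reduces the drag coefficient, as stated above.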
When the flow rate is prescribed, knowing the drag coefficient suffices to predict the drag force $D=\mu U \lambda$, where $U=(3Q/2H)(1-\delta^2)$. When the driving force is prescribed, however, it is necessary to predict the volumetric flow rate (per unit width) $Q(k,\delta,l)$ in order to predict the drag force and the hydraulic resistance for different confinement configurations and channel aspect ratios. The lubrication flow approximation yields \begin{equation} Q(k,\delta,l)=\frac{(p_{in}-p_{out})/L + \rho g}{12\mu/H^3} \phi(k,\delta,l) \label{eq:Q} \end{equation} determined by the flow correction factor \begin{equation} \phi(k,\delta,l)=\frac{l}{l +\alpha(k,\delta) f_p(k,\delta) -k}, \label{eq:phi} \end{equation} where $l=L/H$ is the dimensionless channel length, and $\alpha$ and $f_p$ are given by Eqs.~\ref{eq:alpha}--\ref{eq:fp}. According to Eqs.~\ref{eq:Q}--\ref{eq:phi}, the volumetric rate $Q_\infty$ for Poiseuille flow is recovered for $l \to \infty$, and the flow rate vanishes ($Q\to0$) for $k\to1$. It is worth noticing that for long but finite channel lengths ($l\gg 1$) the flow rate increases as the cylinder is displaced from the channel centerline ($\delta >0$) according to the ratio \begin{equation} \frac{Q(k,\delta,l)}{Q(k,0,l)}= \frac{l +f_p(k,0)/2-k}{l +\alpha(k,\delta) f_p(k,\delta) -k}. \label{eq:Qratio} \end{equation} A few comments are in order about the expressions derived for the drag coefficient and drag force. For the particular case of symmetrically confined cylinders, predictions from Eqs.~\ref{eq:lambda}--\ref{eq:fs} for the drag coefficient $\lambda(k,0)$ are in close quantitative agreement with asymptotic formulas for $k\to 1$ proposed in previous work \cite{ben2004}. For asymmetrically confined cylinders, Eq.~\ref{eq:lambda} predicts a significant decrease in the drag coefficient $\lambda(k,\delta)$ as $|\delta| \to (1-k)/2$.
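Similarly, the finite-length correction $\phi$ of Eq.~\ref{eq:phi} and the enhancement ratio of Eq.~\ref{eq:Qratio} can be evaluated directly (a self-contained sketch; the shape-function helpers are transcriptions of the expressions above, and function names are ours):

```python
import math

def f_p(k, delta):
    # pressure shape function of Eq. (fp); b = (1/2 - delta)^2 - k^2/4
    b = (0.5 - delta)**2 - 0.25 * k**2
    brk = 0.5 * math.pi + math.atan(0.5 * k / math.sqrt(b))
    return (0.75 * k**2 * (0.5 - delta) / b**2.5 * brk
            + 3.0 * k**3 / (8.0 * b**2) / (0.5 - delta)
            + k / b / (0.5 - delta))

def split_alpha(k, delta):
    # flow split factor of Eq. (alpha)
    return 1.0 / (1.0 + f_p(k, delta) / f_p(k, -delta))

def flow_factor(k, delta, l):
    """Flow correction factor phi(k, delta, l) of Eq. (phi)."""
    return l / (l + split_alpha(k, delta) * f_p(k, delta) - k)

def flow_enhancement(k, delta, l):
    """Ratio Q(k, delta, l) / Q(k, 0, l) of Eq. (Qratio)."""
    return flow_factor(k, delta, l) / flow_factor(k, 0.0, l)
```

For a prescribed driving force, the flow rate then follows from Eq.~\ref{eq:Q} as $Q=[(p_{in}-p_{out})/L+\rho g]\,H^3\phi/(12\mu)$; the enhancement ratio exceeds unity for any off-center displacement, consistent with the discussion above.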
Moreover, the derived formulas predict a maximum drag force $D$ for symmetric confinement and significant drag reduction in asymmetric confinement, with reduction ratios that depend on the dimensionless channel length $l=L/H$. To the best of our knowledge, analytical expressions analogous to Eq.~\ref{eq:lambda} and Eq.~\ref{eq:Q}, valid for cylinders in Poiseuille-type flows for high confinement ratio ($k\gtrsim 0.5$) and arbitrary off-center displacement ($0\le|\delta|\le (1-k)/2$), are not available in the previous literature. \section{Molecular Dynamics Simulation \label{sec:md}} Following standard techniques for non-equilibrium MD simulations \cite{allen1990,frenkel2002}, the interaction between any two atoms of species $s$ and $s'$ is governed by a generalized Lennard-Jones (LJ) potential \begin{equation} \label{eq:LJ} U_{LJ}^{s,s'}(r_{ij})=4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-A_{ss'}\left(\frac{\sigma}{r_{ij}}\right)^6\right]. \end{equation} Here, $r_{ij}=|{\bf r_i}-{\bf r_j}|$ is the separation between any two atoms ($i,j=1,\ldots,N$), $\sigma$ is the repulsive core diameter, and $\epsilon$ is the depth of the potential energy minimum, which lies at $r_{ij}=(2/A_{ss'})^{1/6}\sigma$. \begin{figure}[h!] \vspace{-0pt} \begin{center} \includegraphics[width=0.6\textwidth]{fig2.pdf} \end{center} \vspace{-0pt} \caption{Geometric setup and flow features in MD simulations. (a) Side and perspective views of the simulation domain and range of dimensions employed; the side view shows the initial fcc atomic lattice with constant spacing $\Delta x=0.8^{-1/3}\sigma$. (b) Dimensionless mass density $\rho/\overline{\rho}$, where $\overline{\rho}=0.8 m/\sigma^3$ is the mean bulk density; solid-like structure and layering of fluid atoms is observed near the solid surfaces. (c) Dimensionless momentum density magnitude $\rho|\bf{u}|/\overline{\rho}|\bf{u}_{0}|$, where $\bf{u}_{0}$ is the maximum velocity in the symmetric confinement case ($\delta=0$).
As the bottom gap closes ($\delta \to (1-k)/2=0.15$), the flow through the upper gap becomes twice the value observed in symmetric confinement. Reported quantities in (b--c) are obtained via time average and spatial average in the $z$-direction, for $k=0.7$ and $l=5$ in cases of symmetric confinement $\delta=0$ (left panels) and asymmetric confinement at $\delta=0.11$ (right panels). \label{fig:MDsetup}} \end{figure} The simulated system is composed of three atomic species that correspond to the fluid $(s=1)$, cylindrical particle $(s=2)$, and channel walls $(s=3)$ (cf. Fig.~\ref{fig:MDsetup}). The symmetric attraction coefficients $A_{ss'}=A_{s's}$ control the degree of wettability of the modeled solid surfaces and the shear-dependent hydrodynamic slip length; the parametrization employed ($A_{12}=A_{13}=0.8$) produces highly wettable solids exhibiting very small hydrodynamic slip on flat or curved surfaces over a wide range of flow conditions \cite{drazer2002,drazer2005,colosqui2013,razavi2014}. For the simulations in this work, fluid atoms form dimer molecules bound by Finitely Extensible Nonlinear Elastic (FENE) potentials \begin{equation} \label{eq:FENE} U_{FENE}(r_{ij})=-\frac{1}{2} k_F r_{max}^2 \log\left[ 1- \left(\frac{r_{ij}}{r_{max}}\right)^2 \right], \end{equation} where $k_F$ is the stiffness of the modeled molecular bond and $r_{max}$ adjusts its maximum extension. The use of FENE potentials in addition to LJ interactions allows further control of the rheological properties and volatility of the modeled fluid. A Nos\'{e}-Hoover thermostat maintains the fluid and solid atoms at a constant temperature $T={\epsilon}/{k_B}$ ($k_B$ is the Boltzmann constant). At initialization the atoms of all species are arranged in a face-centered cubic (fcc) lattice with constant spacing $\Delta x=n^{-1/3}$ (see Fig.~\ref{fig:MDsetup}a), where $n=0.8/\sigma^3$ is the number density employed in all MD simulations in this work.
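Both interaction potentials are straightforward to transcribe (a sketch; the parameter values used below are illustrative, with $A_{12}=A_{13}=0.8$ as in the text):

```python
import math

def u_lj(r, eps, sigma, A):
    """Generalized Lennard-Jones pair potential of Eq. (LJ); the attraction
    coefficient A = A_ss' tunes wettability. The minimum lies at
    r = (2/A)**(1/6) * sigma and the potential crosses zero at
    r = A**(-1/6) * sigma."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - A * sr6)

def u_fene(r, k_F, r_max):
    """FENE bond potential of Eq. (FENE); finite extensibility makes the
    potential diverge as r -> r_max."""
    if r >= r_max:
        raise ValueError("bond stretched beyond maximum extension r_max")
    return -0.5 * k_F * r_max**2 * math.log(1.0 - (r / r_max)**2)
```

In an MD code the pair force follows from $-\partial U/\partial r$; here the potentials alone suffice to check the locations of the LJ minimum and zero crossing quoted in the text.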
The mean mass density of the fluid $\overline{\rho}=0.8 m/\sigma^3$ is constant (here $m$ is the atomic mass); for the modeled conditions the shear viscosity is $\mu=2.9\sqrt{m \epsilon}/\sigma^2$. Non-equilibrium MD simulations are performed to study drag forces and volumetric rates for Poiseuille-type flow past a nanoscale cylindrical particle confined at arbitrary off-center displacements $\delta$. The net force on the cylindrical particle $(s=2)$ is ${\bf F}=- \sum \partial U^{1,2}_{LJ}/\partial {\bf x}$, obtained as the sum of all atomic interactions with the fluid $(s=1)$ only; i.e., direct atomic interactions between the cylindrical particle and the channel walls are neglected in our MD simulations. Different nanochannels with heights $H=26$--$90\Delta x$ and lengths $L=300$--$1000\Delta x$ are employed in MD simulations in order to characterize the drag and flow rates in a range of confinement ratios ($k=$ 0.6--0.84) and channel aspect ratios ($l \simeq$5--38); a constant width $W=10\Delta x$ is employed in all cases. We consider the solid-liquid interface to be located at the zero isopotential contour $U_{LJ}=0$ for the solid species (i.e., particle and walls); the confinement ratio $k$ and dimensionless channel length $l$ in MD simulations were calculated following this criterion. A constant body force $\textbf{f}= m g \textbf{i}$ is applied to each fluid atom in order to drive the flow in the $x$-direction, and periodic boundary conditions are applied in the $x$ and $z$ directions (cf. Fig.~\ref{fig:MDsetup}). In all cases the applied driving force produces flows with low Reynolds numbers $Re=\rho U_{max} H/\mu \le 0.3$, where $U_{max}=\rho g H^2/8 \mu$. Distinctive features of the mean mass and momentum density fields simulated via MD are reported in Figs.~\ref{fig:MDsetup}(b--c). We observe layering of fluid atoms near the solid surfaces and a small but finite amount of hydrodynamic slip that varies locally.
As the cylinder approaches contact with a channel wall (cf. Fig.~\ref{fig:MDsetup}c), the flow through the narrower lubrication gap rapidly decreases, as quantitatively predicted by Eq.~\ref{eq:alpha}. \section{Static Cylinders} Predictions from lubrication theory are compared against numerical simulations via finite element solution of the steady-state N-S equations \footnote{The commercial package COMSOL was employed for numerical solution of the Navier-Stokes equations in this work.} and the MD techniques described in Sec.~\ref{sec:md}. In our numerical simulations periodic boundary conditions are applied at the channel inlet and outlet; a constant body force in the $x$-direction results in a total force magnitude $G=\rho g (HL-\pi R^2) W$ driving the flow. \begin{figure}[h] \vspace{-0pt} \begin{center} \includegraphics[width=0.8\textwidth]{fig3.pdf} \end{center} \vspace{-0pt} \caption{Drag forces and drag coefficients; theoretical predictions (solid/dashed lines), N-S simulations (open markers), and MD simulations (filled markers). (a) Normalized drag force $\bar{D}(k,\delta,l)=D/\mu (3Q_\infty/2H)=\phi \lambda$ for symmetrically confined cylinders ($\delta=0$) versus confinement ratio $k=2R/H$ for $l=L/H=$~5--32. The flow correction factor $\phi(k,\delta,l)$ is given by Eq.~\ref{eq:phi}. Plotted for comparison (in both panels) are predictions for infinitely long channels ($l\to\infty$ and $\phi=1$) based on Fax\'{e}n's formula (Eq.~\ref{eq:faxen}) and lubrication theory (Eq.~\ref{eq:lambda}) [see legend]. For a finite driving force $G$ in the limit $k\to1$ where $Q\to0$, the drag force becomes equal to the driving force $D=G$ and thus $\bar{D}\to 8 l$ (dashed lines).
(b) Drag coefficient $\lambda(k,\delta)=D/\mu U$ where $U=(3Q/2H)(1-\delta^2)$ as a function of $\epsilon=(H-2R)/2R=(1-k)/k$, for $\delta=0$ (i.e., symmetric confinement) and $|\delta|=(1-k)/2$ (i.e., limit case of asymmetric confinement where the cylinder contacts either channel wall). For $\epsilon\to 0$ the drag coefficient exhibits the asymptotic behavior predicted in Eqs.~\ref{eq:highk0}--\ref{eq:highkd} with significant drag reduction in asymmetric confinement. \label{fig:dragforce}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.8\textwidth]{fig4.pdf} \end{center} \caption{Drag reduction and flow enhancement (i.e., hydraulic resistance reduction) in asymmetric confinement; theoretical predictions (solid lines), N-S simulation (open markers), and MD simulations (filled markers). (a) Drag coefficient ratio $\lambda(k,\delta)/\lambda(k,0)$ as a function of the dimensionless off-center displacement $\delta$ for three different confinement ratios $k$. Significant drag reduction is observed in asymmetric confinement $|\delta|>0$ as the confinement ratio increases ($k\to1$); as expected, agreement between simulations and lubrication theory predictions from Eq.~\ref{eq:lambda} increases for $k\to 1$. (b) Flow rate enhancement $Q(k,\delta,l)/Q(k,0,l)$ as a function of the dimensionless off-center displacement $\delta$ for three different finite channels [see legend]. The hydraulic resistance of the channels decreases when the confined cylinder moves away from the center. \label{fig:lambda}} \end{figure} While physical conditions modeled in (continuum-based) N-S simulations correspond to macroscopic channels where $H/\sigma \gg 1$, the conditions modeled in MD simulations correspond to channels with nanoscale dimensions $H/\sigma=$~20--90 (i.e., $H \simeq$~5--30~nm). 
The reported drag forces $D$ are obtained by subtracting the cylinder weight and buoyancy force from the total force computed in numerical simulations and the unperturbed velocity $U=(3Q/2H)(1-\delta^2)$ is determined from the numerically computed flow rate (per unit width) $Q$. In the case of MD simulations, reported quantities correspond to averages over sufficiently long times $T_a>0.2 H L/Q$ for which convergence of the reported mean values (within a 10\% deviation) is observed after reaching a steady flow rate. We first analyze the results for the hydrodynamic drag force $D$ on a cylinder in symmetric confinement conditions ($\delta=0$). The predicted drag force is $D(k,0,l)= \mu (3Q_\infty/2H) \phi(k,0,l) \lambda(k,0)$ where $\lambda$ is given by Eq.~\ref{eq:lambda} and $\phi$ is given by Eq.~\ref{eq:phi}; here, $Q_\infty=G H^2/12 \mu L W$ is the flow rate expected for an infinitely long channel ($l \to \infty$) for the finite driving force $G$ applied in numerical simulations. As shown in Fig.~\ref{fig:dragforce}a, theoretical predictions for the drag force as a function of the confinement ratio $k$ in symmetric confinement conditions are in close agreement with both numerical solution of the N-S equations and MD simulations for long channels with dimensionless length $l=L/H=$~5--32. For $k\to 1$, as both (bottom/top) lubrication gaps close and the flow vanishes ($Q\to 0$) the force on the cylinder balances the applied driving force $D(k\to 1, \delta, l)=G$ (cf. Fig.~\ref{fig:dragforce}a). It is worth noticing (cf. Fig.~\ref{fig:dragforce}a) that for moderate confinement ratios ($k\simeq$~0.5--0.7) as the dimensionless channel length increases ($l>10$) the hydrodynamic drag force on the cylinder becomes less than half the force applied to drive the flow ($D/G<0.5$). 
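The quoted blocked-flow limit $\bar{D}\to 8 l$ follows directly from the definitions above; the short sketch below (assuming, as my reading of the text suggests, that both the drag and the flow rate are taken per unit width $W$) verifies the algebra with exact rational arithmetic:

```python
# Sketch: verify that D = G together with Q_inf = G*H^2/(12*mu*L*W) gives
# bar(D) = D / (mu*3*Q_inf/(2*H)) = 8*L/H = 8*l, using exact rationals.
# Assumes drag and flow rate are taken per unit width W (my reading).
from fractions import Fraction as F

def normalized_drag_blocked(G, H, L, W, mu):
    Q_inf = G * H**2 / (12 * mu * L * W)   # infinite-channel flow rate per unit width
    D = G / W                              # blocked flow (k -> 1): drag balances drive
    return D / (mu * 3 * Q_inf / (2 * H))  # normalized drag bar(D)

for (G, H, L, W, mu) in [(F(3), F(2), F(10), F(1), F(7, 2)),
                         (F(1, 2), F(5), F(40), F(3), F(2))]:
    assert normalized_drag_blocked(G, H, L, W, mu) == 8 * L / H
print("bar(D) -> 8*l confirmed in the blocked-flow limit")
```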
The drag coefficient $\lambda(k,\delta)=D/\mu U$ predicted by lubrication theory (Eq.~\ref{eq:lambda}) is compared against numerical simulations in Fig.~\ref{fig:dragforce}b for the case of symmetric confinement, where $\delta=0$, and the limit case when the cylinder contacts either one of the channel walls, where $|\delta|=(1-k)/2$. Results from N-S and MD simulation confirm the expected asymptotic behavior $\lambda\propto\epsilon^{-5/2}$, where $\epsilon= (H-2R)/2R$, for $k\to1$ that is predicted by Eqs.~\ref{eq:highk0}--\ref{eq:highkd}. In the limit case $|\delta|=(1-k)/2$ where the cylinder contacts one of the channel walls there is a reduction of about 65\% with respect to the drag coefficient in symmetric confinement (cf. Fig.~\ref{fig:lambda}a). Numerical solution of the N-S equations and MD simulations confirm a gradual reduction in the drag coefficient as the dimensionless off-center displacement $\delta$ increases (cf. Fig.~\ref{fig:lambda}a). The drag coefficient ratio $\lambda(k,\delta)/\lambda(k,0)$ quantifying the reduction of drag in asymmetric confinement as a function of $\delta$ is reported in Fig.~\ref{fig:lambda}a for three different confinement ratios $k=0.6, 0.69, 0.84$. As expected the agreement between lubrication theory and numerical simulations improves for large confinement ratios ($k\gtrsim0.7$). Notably, drag coefficients computed from MD simulations of flows having molecularly thin lubrication gaps (i.e., three to ten atomic layers thin) are in good agreement with numerical solutions of the full Navier-Stokes equations and analytical predictions from lubrication theory adopting no-slip boundary conditions (cf. Fig.~\ref{fig:lambda}a). The lubrication analysis in Sec.~\ref{sec:lubrication} also predicts a gradual reduction in the hydraulic resistance of the channel as the off-center displacement of the confined cylinder increases. 
This effect is observed in simulations as an enhancement in the flow rate $Q$ for a prescribed driving force $G$ as the dimensionless off-center displacement $\delta$ increases. As shown in Fig.~\ref{fig:lambda}b, theoretical predictions from Eq.~\ref{eq:Qratio} for the flow enhancement ratio $Q(k,\delta,l)/Q(k,0,l)$ are in close agreement with numerical simulations for different confinement ratios ($k=$~0.6, 0.69, 0.84) and finite channels with different lengths ($l=$~5.6, 33, 38). Simulations confirm the predictions of Eq.~\ref{eq:Qratio}: the flow enhancement for asymmetrically confined cylinders increases for large confinement ratios ($k\to1$) and decreases for long channels ($l\to\infty$). \section{Colloidal cylinders} The lubrication analysis in Sec.~\ref{sec:lubrication} produced analytical predictions for the position-dependent drag force assuming flow past a perfectly static cylinder. This section discusses the application of hydrodynamic lubrication to confined colloidal cylinders that undergo Brownian motion and can be strongly influenced by colloidal interactions (e.g., van der Waals attraction, steric repulsion, oscillatory structural forces). The employed approach is generally applicable to colloidal particles, whether these are freely convected or bound to an equilibrium position by diverse restoring forces. The formulas presented in this section, which invoke the analytical predictions from Sec.~\ref{sec:lubrication}, aim to predict the mean (noise-averaged) drag force experienced by nanobeams, nanowires, or colloidal probes that constitute a key component of NEMS and nanowire-based sensors and actuators. For overdamped Brownian motion (i.e., neglecting inertial and memory effects), the mean drag force $\langle D \rangle$ on a confined colloidal particle can be estimated by ensemble averaging the (position-dependent) drag for static conditions over the sequence of random displacements induced by thermal motion. 
Since the drag predicted under static conditions varies only in the vertical direction ($y$-direction), uncorrelated thermal motion in the flow direction ($x$-direction) is not expected to affect the mean drag force. Vertical random displacements can be statistically described by a probability density $\varrho_o(\delta,t)\equiv\varrho(\delta,t| \delta_0,t_0)$; here $\delta_0$ is the initial displacement at time $t_0$ where $\varrho_o(\delta,t_0)=\delta(\delta-\delta_0)$. Hence, the mean drag force expected for overdamped Brownian motion is \begin{equation} \langle D(k,l,t) \rangle= \frac{3\mu}{2H} \int_{-\delta_{max}}^{+\delta_{max}} Q(k,\delta,l) \lambda(k,\delta) \varrho_o(\delta,t) d\delta, \label{eq:Dmean} \end{equation} where $\lambda(k,\delta)$ is the drag coefficient (Eq.~\ref{eq:lambda}) derived for static conditions and $\delta_{max}=(1-k)/2$ as before. Similarly, it is useful to define a noise-averaged drag coefficient \begin{equation} \langle \lambda(k,t) \rangle = \int_{-\delta_{max}}^{+\delta_{max}} \lambda(k,\delta) \varrho_o(\delta,t) d\delta, \label{eq:Lmean} \end{equation} in order to characterize the mean drag force $\langle D(k,l,t) \rangle$ for the case where the flow rate $Q$ is prescribed; this case also corresponds to prescribing the driving force in sufficiently long channels ($l\to\infty$) for which the flow correction factor (Eq.~\ref{eq:phi}) becomes unity $\phi(k,\delta,l\to\infty) = 1$. Via MD simulations we analyze the case of a confined colloidal cylinder immersed in a fluid with constant thermal energy $k_B T$. A linear restorative force $F_s=K_s \sqrt{(x-L/2)^2+(y-H/2)^2}$ is applied to bring the colloidal cylinder to equilibrium at the center of the channel where $\delta=0$; the ``spring'' constant is varied in the range $K_s$~=~0--1~$k_B T/\sigma^2$ in order to modulate the root-mean-square (rms) amplitude of the dimensionless off-center displacement $\delta_{rms}(t)=\sqrt{\langle(\delta(t)-\delta_0)^2\rangle}$. 
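To illustrate how the averaging in Eq.~\ref{eq:Lmean} is evaluated in practice, the sketch below performs the quadrature numerically; since Eq.~\ref{eq:lambda} appears earlier in the paper, a hypothetical stand-in for $\lambda(k,\delta)$ (maximal at $\delta=0$, about 65\% lower at the wall) is used purely to illustrate the procedure:

```python
# Sketch of the averaging in Eq. (Lmean) by trapezoidal quadrature.
# The actual lambda(k, delta) of Eq. (lambda) is defined earlier in the
# paper; the stand-in below is hypothetical and only illustrates the step.
import math

k = 0.78
d_max = (1.0 - k) / 2.0   # delta_max = (1-k)/2

def lam(delta, lam0=100.0, drop=0.65):
    return lam0 * (1.0 - drop * (delta / d_max)**2)   # stand-in drag coefficient

def rho_stationary(delta, d_rms):
    # long-time Gaussian density (initial condition forgotten)
    z = delta / (math.sqrt(2.0) * d_rms)
    return math.exp(-z * z) / (math.sqrt(2.0 * math.pi) * d_rms)

def mean_lambda(d_rms, n=20001):
    h = 2.0 * d_max / (n - 1)
    total = 0.0
    for i in range(n):
        d = -d_max + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * lam(d) * rho_stationary(d, d_rms)
    return total * h

# the noise-averaged drag coefficient decreases as the rms displacement grows
print(mean_lambda(0.1 * d_max), mean_lambda(0.3 * d_max))
```

For $\delta_{rms}\ll\delta_{max}$ the quadrature reproduces the analytic Gaussian average of the stand-in, $\lambda_0(1-0.65\,\delta_{rms}^2/\delta_{max}^2)$.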
The mean drag and drag coefficient defined in Eqs.~\ref{eq:Dmean}--\ref{eq:Lmean} are expected to depend on the dimensionless rms displacement $\delta_{rms}$ observed in MD simulations. In order to produce analytical predictions we will assume that the colloidal cylinder follows an Ornstein-Uhlenbeck process and thus \begin{equation} \varrho_o(\delta,t)=\frac{1}{\sqrt{2\pi}\delta_{rms}(t)} \exp\left\{-\left[\frac{\delta-\delta_0 \exp(-t/\tau)}{\sqrt{2}\delta_{rms}(t)}\right]^2\right\}, \label{eq:gaussian} \end{equation} where $t_0=0$ and $\tau=k_B T/(K_s \tilde{D})$ is the relaxation time set by the ``spring'' constant and an unknown effective diffusion coefficient $\tilde{D}$. Since boundary effects are neglected, the probability in Eq.~\ref{eq:gaussian} is only valid for small rms displacements $\delta_{rms}(t)\ll\delta_{max}=(1-k)/2$. For finite values of $K_s$ there is a long-time limit $t\gg\tau$ where the initial condition is forgotten and the probability in Eq.~\ref{eq:gaussian} becomes $\varrho(\delta,t)=\exp[-(\delta/\sqrt{2}\delta_{rms})^2]/\sqrt{2\pi}\delta_{rms}$. \begin{figure}[h] \begin{center} \includegraphics[width=0.85\textwidth]{fig5.pdf} \end{center} \caption{Drag reduction induced by thermal motion and structural forces for a colloidal cylinder symmetrically confined in a slit channel. (a--c) Mean drag coefficient reduction $\langle\lambda(k,\delta_{rms})\rangle/\lambda(k,0)$ versus rms off-center displacement. Plotted lines indicate predictions from Eq.~\ref{eq:Lmean} for $k=0.64,0.78, 0.84$. Filled markers correspond to results from MD simulations: (a) ($\blacktriangle$) $k=0.64$, $l=3.56$ ($x_{rms}$=0); (b) ($\bullet$) $k=0.74$, $l=3.5$, ($\blacktriangleright$) $k=0.74$, $l=3.5$ ($x_{rms}$=0), ($\blacksquare$) $k=0.78$, $l=5$; (c) ($\blacklozenge$) $k=0.82$, $l=5$, ($\blacktriangledown$) $k=0.84,$ $l=5$ ($x_{rms}$=0), ($\bigstar$) $k=0.86$, $l=3$. 
(d) Time evolution of rms displacement versus dimensionless time $t/T_D$ ($T_D=\mu k H^3/2k_B T$) for MD simulations in panel (a); solid line is an exponential fit, open markers correspond to values computed from MD simulation. (e) Vertical off-center displacement in lattice units $\Delta x$ versus dimensionless time $t/T_D$ for MD simulations in panel (a); six different realizations showing metastable states. (f) Stationary probability distribution $\varrho(\delta)$ computed from absolute value of displacement-time trace in panel (e) via ensemble average over six realizations. (g) Free energy $U(\delta)$ computed from probability distribution in panel (f). \label{fig:thermal} } \end{figure} Predictions for the mean drag coefficient (Eq.~\ref{eq:Lmean}) via adopting the probability density in Eq.~\ref{eq:gaussian} for $t\gg\tau$ and $\delta_{rms}\ll\delta_{max}$ are compared in Fig.~\ref{fig:thermal} against results from MD simulations for colloidal cylinders symmetrically confined in nanoscale channels of various heights $H=$~30--90$\sigma$. For the MD simulations reported in Fig.~\ref{fig:thermal}a the cylinder is allowed to ``freely'' drift in the vertical direction while the motion is prescribed in the $x$-direction; the expected rms vertical displacement grows diffusively as $\delta_{rms}\propto\sqrt{2\tilde{D}t}$ and the (top/bottom) lubrication gaps in these MD simulations become as small as two atomic diameters. For the MD simulations reported in Figs.~\ref{fig:thermal}b--c a restorative force with different strengths ($K_s$~=~0.1--1~$k_B T/\sigma^2$) is applied; in this case the maximum rms displacement is bounded, $\delta_{rms}=\sqrt{k_B T/K_s}$, and lubrication gaps in this case are always larger than five atomic diameters. In all cases, MD simulations (cf. Fig.~\ref{fig:thermal}) report a decay in the mean drag force as the rms displacement $\delta_{rms}$ increases, in close agreement with predictions from Eq.~\ref{eq:Lmean}. 
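The assumed Ornstein--Uhlenbeck statistics can be reproduced with a minimal sketch (illustrative parameters, not the MD values) that integrates the overdamped dynamics with the Euler--Maruyama scheme and checks that the stationary rms displacement approaches the equipartition value $\sqrt{k_B T/K_s}$:

```python
# Minimal sketch (illustrative parameters): overdamped Ornstein-Uhlenbeck
# dynamics of a tethered particle, Euler-Maruyama integration.
import math, random

random.seed(0)
kBT, Ks, xi = 1.0, 0.5, 10.0    # thermal energy, spring constant, friction
tau = xi / Ks                    # relaxation time, tau = kBT/(Ks*D_eff)
D_eff = kBT / xi                 # effective diffusion coefficient
dt = 0.01 * tau

def evolve(n_steps, y0=0.0):
    y = y0
    for _ in range(n_steps):
        y += -(Ks / xi) * y * dt + math.sqrt(2.0 * D_eff * dt) * random.gauss(0.0, 1.0)
    return y

# after ~20 tau the initial condition is forgotten; sample the stationary state
samples = [evolve(2000) for _ in range(2000)]
rms = math.sqrt(sum(y * y for y in samples) / len(samples))
print(rms, math.sqrt(kBT / Ks))   # the two values should be close
```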
However, the rms displacements reported in Figs.~\ref{fig:thermal}a--c as computed from MD simulations can become significantly smaller than the expected values for diffusion in homogeneous fluid media when the cylinder reaches within three atomic layers from the channel walls. Moreover, for the case where the cylinder is free to drift vertically we observe a slow exponential relaxation determined by a diffusive time $\tau\sim T_D=\mu k H^3/2k_B T$ (see Fig.~\ref{fig:thermal}d). In fact, when a lubrication gap becomes thinner than five atomic layers the displacement-time trace of the cylinder center-of-mass exhibits long-lived metastable states (cf. Fig.~\ref{fig:thermal}e) that indicate local energy minima lying at an average separation $\Delta x=(\overline{\rho}/m)^{-1/3}$ (cf. Fig.~\ref{fig:thermal}g). Assuming Boltzmann statistics, the stationary probability distribution ($t\to\infty$) is $\varrho(\delta)=Z^{-1} \exp[-U(\delta)/k_B T]$, where $U(\delta)$ is the (space-dependent) free energy and $Z$ is the proper normalization constant. A strongly non-Gaussian probability distribution computed from the displacement-time trace reveals an oscillatory free energy $U(\delta)=-k_B T\log(\varrho) + const.$ whose corrugation decays away from the walls. This observation explains the poorer agreement observed in Fig.~\ref{fig:thermal}a between MD simulations and predictions adopting a Gaussian probability in Eq.~\ref{eq:gaussian} valid for free Brownian motion in the long-time limit $t\gg T_D$. Given that our MD simulations do not include atomic interactions between solid atoms, the oscillatory free energy variations are attributed to the structural rearrangement of fluid layers caused by the cylinder motion. Hence, the modeled steric and van der Waals interactions between solid and fluid atoms induced significant energy barriers ($\Delta U\simeq 5 k_B T$) and long-lived metastable states when the cylinder is close to the wall. 
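The Boltzmann-inversion step described above can be sketched as follows; the oscillatory potential, the Metropolis sampler, and all parameter values are hypothetical stand-ins for the MD free-energy landscape:

```python
# Sketch of Boltzmann inversion: sample a hypothetical oscillatory free
# energy with Metropolis Monte Carlo, histogram the positions, and recover
# U(delta) = -kBT*log(rho) + const from the stationary distribution.
import math, random

random.seed(1)
kBT = 1.0

def U(d):
    # hypothetical corrugated free energy (~2 kBT oscillation amplitude)
    return 0.5 * d * d + math.cos(8.0 * math.pi * d)

def metropolis(n, step=0.1):
    d, out = 0.0, []
    for _ in range(n):
        trial = d + random.uniform(-step, step)
        if random.random() < math.exp(-(U(trial) - U(d)) / kBT):
            d = trial
        out.append(d)
    return out

samples = metropolis(200000)
nbins, lo, hi = 40, -1.0, 1.0
counts = [0] * nbins
for d in samples:
    if lo <= d < hi:
        counts[int((d - lo) / (hi - lo) * nbins)] += 1

# free energy from the histogram, up to an additive constant
U_rec = [-kBT * math.log(c) for c in counts if c > 0]
barrier = max(U_rec) - min(U_rec)
print(barrier)
```

The recovered corrugation amplitude is only defined up to the additive constant absorbed in $Z$, which is why the barrier height is reported as a difference.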
\section{Conclusions} A hydrodynamic lubrication approach was presented to predict drag forces and volumetric rates for plane Poiseuille flow past a confined static cylinder as a function of the confinement ratio $k$, the dimensionless off-center displacement of the cylinder $\delta$, and the dimensionless channel length $l$. Analytical expressions for the drag coefficient introduced in this work are valid for moderate to large confinement ratios ($k\gtrsim 0.5$) and arbitrary off-center displacements ($0\le|\delta|\le(1-k)/2$). In the high confinement limit $k\to 1$ the derived expressions recover the asymptotic behavior reported in previous works\cite{ben2004,semin2009}. The set of derived formulas applies to cases when either the volumetric flow rate or the driving force is prescribed. In addition, the derived expressions valid for finite channels are suitable for predicting drag forces and volumetric rates for flow past periodic arrays of cylinders. As the cylinder moves away from the channel centerline and one of the lubrication gaps closes, either above or below the cylinder, the flow through the closing gap vanishes and so does its contribution to the drag force. The derived expressions quantitatively predict that (i) drag forces and drag coefficients have their maximum value in symmetric confinement ($\delta=0$), and (ii) there are significant reductions in both the drag force and drag coefficient in asymmetric confinement, as the cylinder approaches either one of the channel walls ($|\delta| \to 1/2-k/2$). Correspondingly, the hydraulic resistance for a given confinement ratio $k$ is minimized when the cylinder contacts a wall and $|\delta|=1/2-k/2$. In the case of static cylinders, analytical predictions for the drag force and flow rates are in good agreement with numerical solutions of the Navier-Stokes equations and fully-atomistic MD simulations of nanoscale channels. 
Notably, conventional hydrodynamic descriptions adopting no-slip boundary conditions produced reliable predictions despite the presence of significant steric effects and structural forces observed in fully atomistic simulations with wettable solids. The results in this work indicate that hydrodynamic lubrication theory can produce reasonable predictions for molecularly thin lubrication gaps (i.e., down to three atomic layers) in the case of Poiseuille-type flows in plane channels with surfaces that are molecularly smooth and highly wettable by simple molecular liquids. In fact, MD simulations and continuum models (i.e., lubrication theory and numerical solution of the N-S equations) reported comparable values of the drag force when one of the lubrication gaps became vanishingly small. The observed agreement, however, can be attributed to the fact that the flow through the narrowest lubrication gap decreases, as quantitatively predicted by equating the pressure drop through each gap, and the dominant contribution to the drag force comes from the widest lubrication gap, which in all studied cases remained thicker than two atomic layers. The MD simulations reported a small hydrodynamic slip that depended on the local surface curvature and shear rate magnitude, but this effect did not significantly affect the agreement with analytical predictions adopting no-slip boundary conditions. The presented lubrication analysis can be readily extended to systems with partially wettable solids where significant hydrodynamic slip is present, provided that the slip length is a known parameter. The studied case of nanoscale cylinders undergoing thermal motion revealed a few important effects. For symmetrically confined colloidal cylinders the mean (noise-averaged) drag force, determined via ensemble or time average, can be significantly lower than the drag force predicted for a static cylinder. 
The mechanism for the predicted drag reduction is not attributed to hydrodynamic slip but rather to the colloidal cylinder randomly moving to off-center positions where the drag predicted in static conditions is significantly lower. Similar thermally-induced effects produce a noticeable reduction in the mean hydraulic resistance as the rms displacement increases. For creeping flows and after assuming a Gaussian probability density for the thermally-induced displacements of the colloidal cylinder, the reduction in the mean drag and hydraulic resistance can be quantitatively predicted by averaging the position-dependent drag and flow rate derived in static conditions. The observed effects can be enhanced by increasing the rms amplitude of the cylinder displacement via different mechanisms, which can include mechanical or acoustic actuation and/or increasing the fluid temperature. Under the studied conditions, the presence of significant structural forces was found to be the major obstacle to safely extending hydrodynamic lubrication theory to nanoscale flows in plane channels. Analytical or numerical solution of a Fokker-Planck equation can predict the probability density of random thermal displacements, but this will require a priori knowledge of local free energy variations for a confined colloidal cylinder. Although structural forces did not play a significant role when the cylinder position was prescribed, oscillatory structural forces induced strongly non-Gaussian probability densities and long-lived metastable positions of the cylinder at integer numbers of atomic layers from the channel wall. The analysis and results presented in this work are relevant to the design of NEMS and nanowire-based sensors and actuators, nanofluidic devices for transport and separation of nanoparticles or macromolecules, and can potentially guide experimental studies of the nanorheology of confined fluids using colloidal probes. 
\begin{acknowledgments} The authors would like to thank Antonio Checco, Joel Koplik, and Yongsheng Leng for useful discussions. This work was supported by the SEED Grant Program by The Office of Brookhaven National Laboratory (BNL) Affairs at Stony Brook University. Part of the MD simulations in this work employed computational resources from the Center for Functional Nanomaterials at BNL, which is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-SC0012704. \end{acknowledgments}
\section{Introduction% \label{introduction}% } Consider a~polynomial differential equation in $\mathbb{C}^{2}$ (with complex time), \begin{equation} \begin{aligned} {\dot{x}} &= P(x,y),\\ {\dot{y}} &= Q(x,y), \end{aligned} \phantomsection \label{polynomial-vf} \end{equation} where $\max (\deg P, \deg Q) =n$. The splitting of $\mathbb{C}^{2}$ into trajectories of this vector field defines a singular analytic foliation of $\mathbb{C}^{2}$. Denote by $\mathcal{A}_{n}$ the space of foliations of $\mathbb{C}^{2}$ defined by vector fields \eqref{polynomial-vf} of degree at most $n$ with coprime $P$ and $Q$. Two vector fields define the same foliation if they are proportional, hence $\mathcal{A}_{n}$ is a Zariski open subset of the projective space. $\mathcal{A}_{n}$ is equipped with a natural topology induced from this projective space. In the 1970s there appeared several results on the properties of generic foliations from $\mathcal{A}_{n}$. In particular, Yu. Ilyashenko \cite{Il78} proved that a generic foliation (more precisely, each foliation from some full Lebesgue measure subset of $\mathcal{A}_{n}$) has an infinite number of limit cycles. Later his theorem was improved by E. Rosales-González, L. Ortiz-Bobadilla and A. Shcherbakov \cite{SRO98}, who replaced the full-measure set by an open dense subset. \begin{definition} A \emph{limit cycle} on a leaf $L$ of~an~analytic foliation is a free homotopy class $[\gamma ]$ of loops on $L$ such that the holonomy along any of its representatives $\gamma $ is not the identity. \end{definition} \begin{definition} A~set of limit cycles of a~foliation is called \emph{homologically independent} if for any leaf $L$ all the cycles located on this leaf are linearly independent in $H_{1}(L)$. \end{definition} \begin{theorem*}[\cite{Il78}] For $n\geqslant 2$, there exists a full-measure subset of $\mathcal{A}_{n}$, such that each foliation from this subset possesses an infinite number of homologically independent limit cycles. 
\end{theorem*} \begin{theorem*}[\cite{SRO98}] For $n\geqslant 3$, there exists an open dense subset of $\mathcal{A}_{n}$, such that each foliation from this subset possesses an infinite number of homologically independent limit cycles. \end{theorem*} The proof of the first theorem in \cite{Il78} is rather technical; the proof of the second one in \cite{SRO98} contains about 10 pages of cumbersome estimates of integrals along the limit cycles. The constructed sequence of representatives $\gamma _j$ of the required limit cycles $[\gamma _j]$ in both theorems converges to the infinite line. Our results yield another, less technical proof of these theorems, and our limit cycles are detached from the infinite line. Also, our proof works for $n=2$ under both types of genericity assumptions. \begin{maintheorem*}[{}]\phantomsection\label{main-theorem} For $n\geqslant 2$, there exist \begin{itemize} \item a full-measure subset $\mathcal{A}^{LC1}_n\subset \mathcal{A}_n$, \item a complement $\mathcal{A}^{LC2}_n\subset \mathcal{A}_n$ of a real-analytic subset, \end{itemize} such that each $\mathcal{F}\in \mathcal{A}_n^{LC1}\cup \mathcal{A}_n^{LC2}$ possesses an infinite sequence of limit cycles $[\gamma _j]$ such that: \newcounter{listcnt0} \begin{list}{\alph{listcnt0})} { \usecounter{listcnt0} \setlength{\rightmargin}{\leftmargin} } \item the cycles are homologically independent; \item the multipliers of the cycles tend to zero; \item the cycles are uniformly bounded, i.e., there exists a ball in $\mathbb{C}^{2}$ that includes all representatives $\gamma _j$; \item there exists a cross-section such that $\gamma _j$ intersect it in a dense subset. \end{list} The explicit descriptions of the sets $\mathcal{A}^{LC1}_n$ and~$\mathcal{A}^{LC2}_n$ are given below, in Sections “\hyperref[multiplicative-density]{Multiplicative density}” and “\hyperref[unsolvable-monodromy-group]{Unsolvable monodromy group}”, respectively. 
\end{maintheorem*} The key genericity assumption for $\mathcal{A}^{LC1}_n$ is that the characteristic numbers of two singular points at infinity generate a dense semi-group in $\mathbb{C}/\mathbb{Z}$. The key genericity assumption for $\mathcal{A}^{LC2}_n$ is that the monodromy group at infinity is unsolvable. Though the exceptional set in the second part is much thinner, we include the first part for two reasons: $\mathcal{A}^{LC2}_n$ does not include $\mathcal{A}^{LC1}_n$, and the first case is technically easier. Our construction also yields that the infinite number of limit cycles survives in a neighbourhood of $\mathcal{A}^{LC1}_n\cup \mathcal{A}^{LC2}_n$ in the space $\mathcal{B}_{n+1}$ of foliations of $\mathbb{C}P^{2}$ that are given by a~polynomial vector field of degree at~most $n+1$ in any affine chart, see \hyperref[cor-nbhd]{Corollary 6}. \section{Preliminaries% \label{preliminaries}% } \subsection{Extension to infinity% \label{extension-to-infinity}% } Let us extend a polynomial foliation $\mathcal{F}\in \mathcal{A}_n$ given by \eqref{polynomial-vf} to $\mathbb{C}P^{2}$. After changing variables, $u=\frac 1 x, v = \frac y x$, and the time rescaling $dt = -u^{n-1}\,d\tau $, the vector field takes the form \begin{equation} \begin{aligned} {\dot{u}} &= u {\widetilde{P}} (u,v)\\ {\dot{v}} &= v{\widetilde{P}} (u,v) - {\widetilde{Q}}(u,v) \end{aligned} \phantomsection \label{vf-uv} \end{equation} where ${\widetilde{P}} (u,v)= P\left(\frac 1 u, \frac v u\right) u^n$ and ${\widetilde{Q}} (u,v)= Q\left(\frac 1 u,\frac v u\right) u^n$ are two polynomials of degree at most $n$. Since ${\dot{u}}(0, v)\equiv 0$, the infinite line $\set{u=0}$ is invariant under this vector field. Denote by $h(v)$ the polynomial ${\dot{v}}(0, v)=v{\widetilde{P}} (0,v) - {\widetilde{Q}}(0,v)$. 
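As a consistency sketch (independent of the text, with coefficients and sample points of my own choosing), the chart change behind \eqref{vf-uv} can be verified with exact rational arithmetic; the check multiplies the original vector field by the rescaling factor $-u^{n-1}$ and compares with the right-hand sides above:

```python
# Sketch: verify that x = 1/u, y = v/u turns (dx/dt, dy/dt) = (P, Q) into
# (u*Pt, v*Pt - Qt) after multiplying the vector field by -u^(n-1),
# where Pt = P(1/u, v/u)*u^n and Qt = Q(1/u, v/u)*u^n.  Exact rationals,
# random degree-2 coefficients and sample points of my own choosing.
from fractions import Fraction as F
import random

random.seed(2)
n = 2
coeffs = lambda: [F(random.randint(-5, 5)) for _ in range(6)]
aP, aQ = coeffs(), coeffs()

def poly(c, x, y):   # generic degree-2 polynomial
    return c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y

def check(u, v):
    x, y = 1/u, v/u                       # inverse of u = 1/x, v = y/x
    P, Q = poly(aP, x, y), poly(aQ, x, y)
    Pt, Qt = P * u**n, Q * u**n           # \tilde P, \tilde Q
    du_dt = -u*u * P                      # du/dt = -(dx/dt)/x^2
    dv_dt = Q/x - y*P/(x*x)               # dv/dt, since v = y/x
    f = -u**(n - 1)                       # time-rescaling factor
    return f*du_dt == u*Pt and f*dv_dt == v*Pt - Qt

assert all(check(u, v) for (u, v) in
           [(F(1, 3), F(2, 5)), (F(-2, 7), F(3)), (F(5, 2), F(-1, 4))])
print("chart change to (vf-uv) verified at rational sample points")
```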
In a generic (\emph{non-dicritical}) case $h(v)\not\equiv 0$; then \eqref{vf-uv} has isolated singular points $a_j\in \set{u=0}$ at the roots of $h$, and $L_\infty ≔ \set{u=0}\smallsetminus \{a_{1},a_{2},\ldots \}$ is a leaf of the extension of $\mathcal{F}$ to $\mathbb{C}P^{2}$. Denote by $\mathcal{A}_{n}'$ the set of foliations $\mathcal{F}\in \mathcal{A}_{n}$ such that $h$ has $n+1$ distinct roots $a_j$, $j=1,\ldots , n+1$. In particular, all these foliations are non-dicritical. For each $j$, let $\lambda _j$ be the ratio of the eigenvalues of the linearization of \eqref{vf-uv} at $a_j$ (the eigenvalue corresponding to $L_\infty $ is in the denominator). One can show that $\sum \lambda _j=1$, and this is the only relation on $\lambda _j$. For $\mathcal{F}\in \mathcal{A}_n'$, fix a non-singular point $O\in L_\infty $ and a cross-section $S$ at $O$ given by $v=\const$. Let $\Omega L_\infty $ be the loop space of $(L_\infty , O)$, i.~e., the space of all continuous maps $(S^{1}, pt)\rightarrow (L_\infty , O)$. For a loop $\gamma \in \Omega L_\infty $, denote by $\mathbf{M}_\gamma :(S, O)\rightarrow (S, O)$ the monodromy map along $\gamma $. It is easy to see that $\mathbf{M}_\gamma $ depends only on the class $[\gamma ]\in \pi _{1}(L_\infty , O)$, and the map $\gamma \mapsto \mathbf{M}_\gamma $ reverses the order of multiplication, \begin{equation*} \mathbf{M}_{\gamma \gamma '}=\mathbf{M}_{\gamma '}\circ \mathbf{M}_\gamma . \end{equation*} The set of all possible monodromy maps $\mathbf{M}_\gamma $, $\gamma \in \Omega L_\infty $, is called the \emph{monodromy pseudogroup} $G=G(\mathcal{F})$. The word “pseudogroup” means that there is no common domain where all elements of $G$ are defined. However we will follow the tradition and write “monodromy group” instead of “monodromy pseudogroup”. Choose $n+1$ loops $\gamma _j\in \Omega L_\infty $, $j = 1,2,\ldots ,n+1$, passing around points $a_j$, respectively. 
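The relation $\sum \lambda _j=1$ can be illustrated numerically; the example below (a foliation of my own choosing, $P=y^2+xy+x$, $Q=x^2$, for which $h(v)=v^3+v^2-1$ has three distinct roots) computes the eigenvalue ratios $\lambda _j=\widetilde{P}(0,a_j)/h'(a_j)$ at roots found by Durand--Kerner iteration:

```python
# Sketch: numeric check of sum(lambda_j) = 1 for a concrete example of my
# own choosing.  With P = y^2 + x*y + x and Q = x^2 (n = 2), one gets
# Pt(0,v) = v^2 + v and Qt(0,v) = 1, hence h(v) = v*(v^2+v) - 1 = v^3+v^2-1,
# and lambda_j = Pt(0, a_j) / h'(a_j) at each root a_j of h.

def h(z):
    return z**3 + z**2 - 1

def dh(z):
    return 3*z**2 + 2*z

def Pt0(z):          # \tilde P(0, v)
    return z**2 + z

# Durand-Kerner iteration for the three roots of h
zs = [(0.4 + 0.9j)**k for k in range(3)]
for _ in range(100):
    new = []
    for i, z in enumerate(zs):
        denom = 1 + 0j
        for j, w in enumerate(zs):
            if j != i:
                denom *= z - w
        new.append(z - h(z) / denom)
    zs = new

lams = [Pt0(z) / dh(z) for z in zs]
total = sum(lams)
print(total)
```

The printed sum agrees with the sum-of-residues identity behind the relation: $\sum_j \widetilde{P}(0,a_j)/h'(a_j)$ is the sum of residues of $\widetilde{P}(0,v)/h(v)$, which equals the ratio of leading coefficients when $\deg h = n+1$.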
We suppose that $\gamma _j$ are simple and intersect only at $O$. Then the pseudogroup $G(\mathcal{F})$ is generated by monodromy maps $\mathbf{M}_j=\mathbf{M}_{\gamma _j}$. It is easy to see that the multipliers $\mu _j=\mathbf{M}_j'(0)$ are equal to $\exp (2 \pi i \lambda _j)$. \subsection{Fatou coordinates% \label{fatou-coordinates}% } The space of germs of analytic parabolic maps $g:(\mathbb{C}, 0)\rightarrow (\mathbb{C}, 0)$, $z\mapsto z+o(z)$, has a~natural filtration by the degree of the leading term of $g(z)-z$. Denote by $A_p$ the set of germs of the form $z\mapsto z+az^{p+1}+o(z^{p+1})$, $a\neq 0$. In this section we will recall some results on sectorial rectifying charts of parabolic fixed points that will be used in the article. For a~more complete exposition, see, e.g., Chapter IV of \cite{IYa07}. We start by describing the formal normal forms for quadratic parabolic germs. \begin{theorem*}[{}] A~quadratic parabolic germ $f:z\mapsto z+az^{2}+bz^{3}+o(z^{3})$ is formally conjugate to the time-one flow map~$f_\lambda $ of the vector field~$v_\lambda (z) = \frac{z^{2}}{1+\lambda z}$, where~$\lambda =1-\frac{b}{a^{2}}$. More precisely, there exists a~formal series $H(z)=az+\sum _{k=2}^\infty h_kz^k$, such that $f_\lambda \circ H=H\circ f$. The series~$H$ is uniquely defined modulo a~formal composition with a~flow map of~$v_\lambda $. \end{theorem*} \DUadmonition[note]{ \DUtitle[note]{Note} It is easy to see that the map $t_\lambda :z\mapsto -\frac 1z+\lambda \log z$ conjugates $f_\lambda $ to the map $t\mapsto t+1$, $t_\lambda (f_\lambda (z))=t_\lambda (z)+1$. } We will need the following theorem that describes sectorial rectifying charts for quadratic parabolic germs. Consider the following sectors \begin{equation*} \begin{aligned} S_{\alpha ,r}^{+}&=\Set{z|\relax |z|<r, |\arg z|<\alpha },&S_{\alpha ,r}^{-}&=\Set{z|\relax |z|<r, |\arg z-\pi |<\alpha }. 
\end{aligned} \end{equation*}\begin{theorem*}[{Sectorial Normalization Theorem}]\phantomsection\label{sectorial-normalization-theorem} Let $f:z\mapsto z+az^{2}+o(z^{2})$ be a~quadratic parabolic map, let $H(z)=az+\sum _{k=2}^\infty h_kz^k$ be a~formal series which conjugates $f$ to its formal normal form $f_\lambda $. Then for any $\frac \pi 2<\alpha <\pi $ there exists $r>0$ and a~unique couple of~analytic maps $h^\pm :\frac{1}{a}S^\pm _{\alpha ,r}\rightarrow \mathbb{C}$ with the following properties: \begin{itemize} \item $H$ is an~asymptotic series for $h^{-}$ and $h^{+}$: for $N\in \mathbb{N}$, we have $h^\pm (z)=az+\sum _{k=2}^N h_kz^k+o(z^N)$ as $z\rightarrow 0$ inside $\frac 1aS^\pm _{\alpha ,r}$; \item $h^\pm $ conjugates $f$ to $f_\lambda $: $f_\lambda \circ h^\pm =h^\pm \circ f$. \end{itemize} \end{theorem*} \DUadmonition[note]{ \DUtitle[note]{Note} For most parabolic germs $f$, $h^{-}\neq h^{+}$. So, the analytic classification of parabolic germs does not coincide with their formal classification. The analytic classification has functional moduli called \emph{Ecalle–Voronin moduli}, namely the restrictions of $(h^{+})^{-1}\circ h^{-}$ to the sectors $\set{z|\relax |z|<r, \pi -\alpha <\arg z<\alpha }$ and $\set{z|\relax |z|<r, -\alpha <\arg z<\pi +\alpha }$ up to a conjugation by a~flow map of $v_\lambda $. } \DUadmonition[note]{ \DUtitle[note]{Note} It is easy to check that the image of $h^\pm $ includes a~sector of the form $S^\pm _{\alpha ', r'}$ for each $\alpha '<\alpha $ and some $r'$, and is contained in another sector of the same form. Also, $t_\lambda (S^{-}_{\alpha ',r'})$ includes a~\emph{sector at~infinity}: \begin{equation} S_{\beta ,R}^\infty =\Set{\zeta |\relax |\zeta |>R, |\arg \zeta |<\beta } \phantomsection \label{sector-at-infinity} \end{equation} for each $\beta <\alpha '$ and some $R=R(\alpha ', r', \lambda , \beta )\gg 1$. Thus the image of $\zeta =t_\lambda \circ h^{-}$ includes a~sector at~infinity. 
} \begin{definition} A \emph{Fatou coordinate} for a~parabolic map $f$ in a~sector $\frac 1a S^{-}_{\alpha ,r}$ is a~coordinate of the form $\zeta =t_\lambda \circ h^{-}$, where $h^{-}$ is given by the \hyperref[sectorial-normalization-theorem]{Sectorial Normalization Theorem}. A~Fatou coordinate $\zeta $ conjugates $f$ to the shift $\zeta \mapsto \zeta +1$ in a domain that includes a sector at infinity \eqref{sector-at-infinity}, and is defined uniquely modulo addition of~a~complex number. \end{definition} We shall need the following statement. \begin{lemma}\phantomsection\label{inzetachart} Let $g$ be a~parabolic map of the form $z\mapsto z+az^2+\ldots $. Let $\zeta $ be a Fatou chart for~$g$ defined in a~sector $\frac 1aS^{-}_{\alpha ,r}$. Let $S^\infty $ be the image of a smaller sector $\frac 1aS^{-}_{\alpha -\varepsilon ,r-\varepsilon }$ under $\zeta $. Let $F:\mathbb{C}\rightarrow \mathbb{C}$ be an~analytic map, $F(0)=0$, defined in the chart $z$. Let ${\tilde{F}}=\zeta \circ F\circ \zeta ^{-1}$ be the corresponding map in the chart $\zeta $. \setcounter{listcnt0}{0} \begin{list}{\alph{listcnt0})} { \usecounter{listcnt0} \setlength{\rightmargin}{\leftmargin} } \item If $F(z)=kz+o(z)$, then ${\tilde{F}}(\zeta )=k^{-1}\zeta +c+o(1)$ as $\zeta \rightarrow \infty $ inside $S^\infty $. \item If~$F(z)=z+kz^{p+1}+ o(z^{p+1})$, $p\geqslant 1$, then ${\tilde{F}}(\zeta )=\zeta +(-1)^{p-1}ka^{-p}\frac{1}{\zeta ^{p-1}} +o\left(\frac 1{\zeta ^{p-1}}\right)$ as $\zeta \rightarrow \infty $ inside $S^\infty $. \item If $F$ is a~parabolic map, then $\log {\tilde{F}}'(\zeta )=o({\tilde{F}}(\zeta )-\zeta )$ as $\zeta \rightarrow \infty $ inside $S^\infty $. \end{list} \end{lemma} \begin{proof} Recall that $h(z)-az=O(z^{2})$, where $h=h^{-}$, for $z\in \frac 1a S^{-}_{\alpha ,r}$. The Cauchy estimates imply that $h'(z)=a+O(z)$ in $\frac 1a S^{-}_{\alpha -\varepsilon ,r-\varepsilon }$. Let us prove b).
Note that \begin{align*} F(h^{-1}(w))&=h^{-1}(w)+k(h^{-1}(w))^{p+1}+o((h^{-1}(w))^{p+1})\\ &=h^{-1}(w)+ka^{-p-1}w^{p+1}+o(w^{p+1}), \end{align*} hence \begin{equation*} (h\circ F\circ h^{-1})(w)=w+\int _{h^{-1}(w)}^{F(h^{-1}(w))}(a+O(z))\,dz=w+ka^{-p}w^{p+1}+o(w^{p+1}). \end{equation*} Similarly, for $\zeta =t_\lambda (w)$ we have \begin{align*} {\tilde{F}}(\zeta )&=\zeta +\int _w^{w+ka^{-p}w^{p+1}+o(w^{p+1})}t_\lambda '(\omega )\,d\omega \\ &=\zeta +\int _w^{w+ka^{-p}w^{p+1}+o(w^{p+1})}\left(\frac 1{\omega ^{2}}+\frac \lambda \omega \right)d\omega \\ &=\zeta +ka^{-p}w^{p-1}+o(w^{p-1})\\ &=\zeta +(-1)^{p-1}ka^{-p}\frac{1}{\zeta ^{p-1}} +o\left(\frac 1{\zeta ^{p-1}}\right). \end{align*} Assertion a) can be proved in the same way. Finally, the last assertion follows from Assertion b) in $\zeta \left(\frac 1aS^{-}_{\alpha -\frac \varepsilon 2,r-\frac \varepsilon 2}\right)$ and the Cauchy estimates. \end{proof} \subsection{Unsolvability of the monodromy group% \label{unsolvability-of-the-monodromy-group}% } In \cite{Shch84}, A. Shcherbakov proved that for a generic foliation $\mathcal{F}\in \mathcal{A}_{n}$, the monodromy group is unsolvable. It turns out that a~group of~germs $(\mathbb{C}, 0)\rightarrow (\mathbb{C}, 0)$ is unsolvable if and only if it contains a~pair of~commutators that do not commute with each other. This follows from the next lemma. \begin{lemma}\phantomsection\label{lemma-parabolic-commutator} Let $f(z)=z+az^{p+1}+o(z^{p+1})$ and $g(z)=z+bz^{q+1}+o(z^{q+1})$ be two parabolic germs. Then \begin{equation*} [f, g](z)≔(f\circ g\circ f^{-1}\circ g^{-1})(z)=z+ab(p-q)z^{p+q+1}+o(z^{p+q+1}). \end{equation*} In particular, if $p\neq q$, $a\neq 0$, $b\neq 0$, then $[f, g]\in A_{p+q}$. \end{lemma} \begin{corollary*}[{}] If a~group~$G$ of~germs $(\mathbb{C}, 0)\rightarrow (\mathbb{C}, 0)$ contains two parabolic germs $g_{1}\in A_p$, $g_{2}\in A_q$ with $p\neq q$, then $G$ is unsolvable.
\end{corollary*} \begin{proof} Indeed, none of the commutators $g_{3}=[g_{1}, g_{2}]\in A_{p+q}$, $g_{4}=[g_{3}, g_{2}]\in A_{p+2q}$, … can be the identity map. \end{proof} The main result of this section is the following lemma. \begin{lemma}\phantomsection\label{unsolvable-details} There exists an~open dense subset of~$\mathcal{A}_{n}$ such that for each foliation from this subset \begin{equation*} \forall i\neq j\quad[\mathbf{M}_i,\mathbf{M}_j]\in A_{1},\quad [\mathbf{M}_i^{-1},\mathbf{M}_j]\in A_{1},\quad [[\mathbf{M}_i,\mathbf{M}_j],[\mathbf{M}_i^{-1},\mathbf{M}_j]]\in A_{3}. \end{equation*}\end{lemma} This lemma is immediately implied by the following two statements. \begin{lemma}\phantomsection\label{scherbakov-unsolvable} There exists a~real analytic subset $\mathcal{E}\subset \mathcal{A}_n$ of~positive codimension such that for $\mathcal{F}\notin \mathcal{E}$ \begin{itemize} \item all commutators $[\mathbf{M}_i,\mathbf{M}_j]$, $i\neq j$, belong to $A_{1}$; \item all numbers $\dfrac{S(\mathbf{M}_i)(0)}{\mathbf{M}_i'(0)^{2}-1}$, $i=1,\ldots ,n+1$, are different. \end{itemize} \end{lemma} Here and below $S(f)$ is the \href{https://en.wikipedia.org/wiki/Schwarzian_derivative}{Schwarzian derivative} of $f$, \begin{equation*} S(f)(z)=\frac{f'''(z)}{f'(z)}-\frac 32\left(\frac{f''(z)}{f'(z)}\right)^{2}. \end{equation*} In \cite{Shch84}, Shcherbakov proved that for a~generic foliation, \emph{at least one} commutator $[\mathbf{M}_i,\mathbf{M}_j]$ belongs to $A_{1}$, see Section 6.3 of \cite{Shch06}. But it is easy to extend this result to \emph{all} pairs $i\neq j$ using analytic continuation along loops in~$\mathcal{A}_{n}$ that permute the singular points. The second part is proved in the same article but not explicitly stated, so one needs to go through the proof of Theorem 9 in \cite{Shch06} to verify that the assertion of the corollary after Lemma~5 is~the only property of $\mathcal{F}$ used in the proof. Similar results were obtained in \cite{BLL97,N94}.
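The leading term of the commutator in \hyperref[lemma-parabolic-commutator]{Lemma 2} can be checked symbolically. The following sketch (an illustration only, not part of any proof; the sample orders $p=1$, $q=2$ and the truncation order are arbitrary choices) verifies the formula with sympy.

```python
import sympy as sp

# Symbolic check of the commutator formula: for f = z + a z^{p+1} + ...,
# g = z + b z^{q+1} + ... one expects
#   [f, g] = z + a b (p - q) z^{p+q+1} + o(z^{p+q+1}).
z, a, b = sp.symbols('z a b')
p, q = 1, 2                # arbitrary sample orders, p != q
order = p + q + 2          # work with jets modulo z**order

f = z + a * z**(p + 1)
g = z + b * z**(q + 1)

def compose(u, v):
    """(u o v)(z), truncated to the working order."""
    return sp.expand(sp.series(u.subs(z, v), z, 0, order).removeO())

def invert(u):
    """Formal inverse of u(z) = z + O(z^2), by fixed-point iteration."""
    w = z
    for _ in range(order):
        w = sp.expand(sp.series(z - (u.subs(z, w) - w), z, 0, order).removeO())
    return w

comm = compose(compose(f, g), compose(invert(f), invert(g)))
print(comm.coeff(z, p + q + 1))   # expected: a*b*(p - q), i.e. -a*b here
```

For these orders all nonlinear terms of degree below $p+q+1$ cancel, and the coefficient at $z^{p+q+1}$ equals $ab(p-q)$, as the lemma asserts.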
\begin{lemma*}[{}] Consider two hyperbolic germs $f, g$ such that \begin{itemize} \item $f'(0)^{2}\neq 1$, $g'(0)^{2}\neq 1$; \item $[f, g]\in A_{1}$, i.e., $[f, g]''(0)\neq 0$; \item $\displaystyle\frac{S(f)(0)}{f'(0)^{2}-1}\neq \frac{S(g)(0)}{g'(0)^{2}-1}$. \end{itemize} Then $[f, g]$ does not commute with $[f^{-1}, g]$; moreover, $[[f, g], [f^{-1}, g]]\in A_{3}$. \end{lemma*} This lemma is motivated by Proposition~7 in \cite{Shch84} (which coincides with the corollary after Lemma~5 in \cite{Shch06}) but provides an explicit pair of commutators that do not commute. \begin{proof} One can verify that \begin{equation*} S([f,g])(0)=\left[\left(\frac{S(f)}{(f')^{2}-1}-\frac{S(g)}{(g')^{2}-1}\right)\left(1-\frac1{(f')^{2}}\right)\left(1-\frac{1}{(g')^{2}}\right)\right]_{z=0}, \end{equation*} thus \begin{equation*} \frac{S([f^{-1},g])(0)}{S([f,g])(0)}=-f'(0)^{2}. \end{equation*} On the other hand, \begin{equation} \frac{[f^{-1},g]''(0)}{[f,g]''(0)}=-f'(0). \phantomsection \label{commutators-d2} \end{equation} The two last equalities imply that \begin{equation*} \frac{S([f^{-1},g])(0)}{[f^{-1},g]''(0)}=f'(0)\,\frac{S([f,g])(0)}{[f,g]''(0)}. \end{equation*} The hypotheses of the lemma imply that the right-hand side is nonzero and $f'(0)\neq 1$, thus \begin{equation*} \frac{S([f^{-1},g])(0)}{[f^{-1},g]''(0)}\neq \frac{S([f,g])(0)}{[f,g]''(0)}. \end{equation*} Finally, expanding $[f, g]\circ [f^{-1}, g]-[f^{-1},g]\circ [f,g]$ up to the fourth order, one can check that the above inequality is equivalent to \begin{equation*} ([f, g]\circ [f^{-1}, g]-[f^{-1},g]\circ [f,g])^{(4)}(0)\neq 0, \end{equation*} hence $[[f,g],[f^{-1},g]]\in A_{3}$. \end{proof} \section{Plan of the proof of \hyperref[main-theorem]{Main Theorem}% \label{plan-of-the-proof-of-main-theorem}% } We will construct the limit cycles as the lifts of loops in the infinite line.
Note that if the monodromy map $\mathbf{M}_{k_{1}}\mathbf{M}_{k_{2}}\ldots \mathbf{M}_{k_l}:S\rightarrow S$ has a fixed point $p\neq 0$, then the corresponding loop $\gamma =\gamma _{k_l}\gamma _{k_{l-1}}\ldots \gamma _{k_{1}}$ lifts to a limit cycle $c$ starting from $p$; the projection of $c$ to the infinite line is~$\gamma $. We proceed in two steps. First we construct contracting monodromy maps that satisfy the \hyperref[inclusion]{\textbf{inclusion}}, \hyperref[contraction]{\textbf{contraction}} and \hyperref[covering]{\textbf{covering}} assumptions formulated below. This is done in different ways for the two types of genericity assumptions, see Section “\hyperref[construction-of-contracting-maps]{Construction of contracting maps}”. Then we use the maps constructed at the first step to obtain limit cycles that satisfy the assertions of \hyperref[main-theorem]{Main Theorem}. At this step we use no information about the foliation except for the existence of maps with the prescribed properties, see Section “\hyperref[construction-of-limit-cycles]{Construction of limit cycles}”. \subsection{Step 1: contracting maps% \label{step-1-contracting-maps}% } We shall find two topological discs $\Delta ^{-}\subset \Delta ^{+}\subset S$ in the cross-section, $0\notin \Delta ^{+}$, an analytic chart $\zeta $ in $\Delta ^{+}$ and a tuple of monodromy maps $f_j$ with the following properties. Each $f_j$ is a composition of the standard generators $\mathbf{M}_k$ of the monodromy group at infinity. For any splitting of this composition into two parts $f_j = f_j^{(t)}\circ f_j^{(h)}$, we will say that $f_j^{(t)}$ is a \emph{tail} of $f_j$ and $f_j^{(h)}$ is its \emph{head}. \begin{description} \item[{Inclusion:}] \leavevmode \phantomsection\label{inclusion} $f_j(\Delta ^{+})\subset \Delta ^{+}$ for any $j$.
\item[{Contraction:}] \leavevmode \phantomsection\label{contraction} All compositions of the form $f_i^{(t)}\circ f_j^{(h)}$, $f_i^{(h)}\neq \id$, $f_j^{(h)}\neq \id$, contract in $(f_j^{(h)})^{-1}\circ f_i^{(h)}(\Delta ^{+})\cap \Delta ^{+}$ with respect to the chart $\zeta $. In particular, all $f_j$ contract in $\Delta ^{+}$. \item[{Covering:}] \leavevmode \phantomsection\label{covering} Images of $\Delta ^{-}$ under $f_j$ cover $\Delta ^{-}$. \end{description} We will also suppose that the compositions $f_j$ contain no subcompositions equal to the identity; otherwise, we remove such subcompositions. Obviously, this does not break any of the other requirements on $f_j$. \subsection{Step 2: limit cycles% \label{step-2-limit-cycles}% } Here we use the maps $f_j$ to construct infinitely many homologically independent limit cycles. We will use no particular construction of the $f_j$, only the assumptions \hyperref[inclusion]{\textbf{inclusion}}, \hyperref[contraction]{\textbf{contraction}} and \hyperref[covering]{\textbf{covering}}, so the arguments work for both sets $\mathcal{A}^{LC1}_n$, $\mathcal{A}^{LC2}_n$. The main motivation is the following lemma\DUfootnotemark{id16}{id17}{1}. \DUfootnotetext{id17}{id16}{1}{% Some people attribute this statement to Hutchinson \cite{H81}, but we failed to find exactly this statement in that article. } \begin{lemma}\phantomsection\label{hutchinson} Under the assumptions above, for any open $U\subset \Delta ^{-}$ and any $\varepsilon >0$ there exists a word $w = j_{1} \ldots j_N$ such that the monodromy map $f_w=f_{j_{1}}\circ f_{j_{2}}\circ \ldots \circ f_{j_N}$ satisfies $f_w(\Delta ^{+})\subset U$ and $|f'_w|<\varepsilon $ in $\Delta ^{+}$. \end{lemma} \DUadmonition[note]{ \DUtitle[note]{Note} We will use this lemma only for $\varepsilon <1$. In this case, the map $f_w$ obviously has a fixed point in $U$. It corresponds to a limit cycle with an arbitrarily small multiplier that passes through~$U$.
} This lemma enables us to prove assertions b)–d) of \hyperref[main-theorem]{Main Theorem}. The proof of homological independence is more complicated. \begin{proof} Take a point $p\in U\subset \Delta ^{-}$. Due to the \hyperref[covering]{\textbf{covering}} assumption, there exists an index $j_{1}$ such that $p \in f_{j_{1}}(\Delta ^{-})$. Now, take the preimage $f_{j_{1}} ^{-1}(p)\in \Delta ^{-}$, and repeat the arguments; we obtain a map $f_{j_{2}}$ such that $p\in f_{j_{1}}( f_{j_{2}} (\Delta ^{-}))$. Repeating the procedure, we get a word $w=j_{1}\, j_{2}\, \ldots \,j_N$ such that $f_w (\Delta ^{-}) = f_{j_{1}}\circ f_{j_{2}}\circ \cdots \circ f_{j_N} (\Delta ^{-})$ contains $p$. The diameter of the image $f_w(\Delta ^{+})$ tends to zero as $N$ tends to infinity, since all maps $f_j$ \hyperref[contraction]{contract} in $\Delta ^{+}$. So, if $N$ is large enough, the fact that $p \in f_w (\Delta ^{+})$ implies $f_w (\Delta ^{+})\subset U$ and $|f'_w|<\varepsilon $ in the whole $\Delta ^{+}$. \end{proof} \subsection{A neighborhood in $\mathcal{B}_n$% \label{a-neighborhood-in-n}% } Note that the assumptions \hyperref[inclusion]{\textbf{inclusion}}, \hyperref[contraction]{\textbf{contraction}} and \hyperref[covering]{\textbf{covering}} are robust in the following sense. Consider a~foliation $\mathcal{F}\in \mathcal{A}_{n}$ that possesses a~tuple of~monodromy maps that satisfy these assumptions. Then there exist a~bidisc $D\subset \mathbb{C}^{2}$ and $\varepsilon >0$ such that any foliation $\mathcal{F}'$ of~$D$ which is $\varepsilon $-close to $\mathcal{F}$ in $D$ possesses monodromy maps that satisfy these assumptions. Since Step 2 relies only on these properties, such a foliation $\mathcal{F}'$ satisfies the assertions of the Main Theorem. In particular, we have the following corollary.
\begin{corollary}\phantomsection\label{cor-nbhd} Any foliation from some open neighborhood $\mathcal{U}$, $\mathcal{A}^{LC1,2}_{n}\subset \mathcal{U}\subset \mathcal{B}_{n+1}$, possesses an infinite number of limit cycles satisfying assertions a)–d) of the Main Theorem. \end{corollary} \section{Construction of contracting maps% \label{construction-of-contracting-maps}% } \subsection{Multiplicative density% \label{multiplicative-density}% } We put the following genericity assumptions on the foliation: \begin{itemize} \item \phantomsection\label{density-condition} the characteristic numbers of two of the singular points at infinity (say, $\lambda _{1}$ and $\lambda _{2}$) generate a dense subgroup in $\mathbb{C}/\mathbb{Z}$; \item the corresponding monodromy maps $\mathbf{M}_{1}, \mathbf{M}_{2}$ do not commute. \end{itemize} Each condition holds on a set of full measure: for the former condition this is clear, and for the latter see \hyperref[scherbakov-unsolvable]{Lemma 4}. After a holomorphic coordinate change, we may and will assume that the map $\mathbf{M}_{1}$ is linear. If this map expands, we replace it with its inverse. Then $\Im \lambda _{1}>0$. Let us pass to the chart $\zeta =\frac{\log z}{2\pi i}$, $\zeta \in \mathbb{C}/\mathbb{Z}$. In this chart, points with large $\Im \zeta $ correspond to points $z$ close to the origin. Let ${\tilde{\mathbf{M}}}_{1}:\zeta \mapsto \zeta +\lambda _{1}$ and ${\tilde{\mathbf{M}}}_{2}$ be the maps $\mathbf{M}_{1}$ and $\mathbf{M}_{2}$ written in the chart $\zeta $. These maps are defined for sufficiently large $\Im \zeta $, and \begin{align*} {\tilde{\mathbf{M}}}_{2}(\zeta )&=\zeta +\lambda _{2}+o(1),\\ {\tilde{\mathbf{M}}}_{2}'(\zeta )&=1+o(1) \end{align*} as $\Im \zeta \rightarrow \infty $. Since $\mathbf{M}_{1}$ does not commute with $\mathbf{M}_{2}$, the map ${\tilde{\mathbf{M}}}_{2}$ is not a translation, hence ${\tilde{\mathbf{M}}}_{2}'$ is not identically one.
Let $\Delta ^{+}$ be a~small closed disc such that either ${\tilde{\mathbf{M}}}_{2}$ or its inverse uniformly contracts in $\Delta ^{+}$. Without loss of generality we can and shall assume that it is ${\tilde{\mathbf{M}}}_{2}$: \begin{equation} \max_{\zeta \in \Delta ^{+}}|{\tilde{\mathbf{M}}}_{2}'(\zeta )|<1. \phantomsection \label{m2-contracts} \end{equation} Next, let $\zeta _{0}$ be the center of $\Delta ^{+}$, put $T=\zeta _{0}-{\tilde{\mathbf{M}}}_{2}(\zeta _{0})$. Note that ${\tilde{\mathbf{M}}}_{2}(\Delta ^{+})+T\Subset \Delta ^{+}$. Choose a~much smaller disc $\Delta ^{-}\subset \Delta ^{+}$ with the same center, \begin{equation} \diam(\Delta ^{-})<\dist({\tilde{\mathbf{M}}}_{2}(\partial \Delta ^{+})+T, \partial \Delta ^{+}). \phantomsection \label{delta-minus-small} \end{equation} Choose a~tuple of vectors $T_j=k_j\lambda _{1}+l_j\lambda _{2}\in \mathbb{C}/\mathbb{Z}$ such that \begin{equation} \Delta ^{-}\subset \bigcup _j ({\tilde{\mathbf{M}}}_{2}(\Delta ^{-})+T_j)\subset \bigcup _j ({\tilde{\mathbf{M}}}_{2}(\Delta ^{+})+T_j)\subset \Delta ^{+}. \phantomsection \label{tj} \end{equation} Due to \eqref{delta-minus-small}, it is enough to take $T_j$ such that the sets ${\tilde{\mathbf{M}}}_{2}(\Delta ^{-})+T_j$ cover $\Delta ^{-}$ and $|T-T_j|<\diam(\Delta ^{-})$. Due to the \hyperref[density-condition]{density condition}, these $T_j$ can be chosen of the form $T_j=k_j\lambda _{1}+l_j\lambda _{2}$. Now, let us choose $f_j$ so that they approximate the maps $\zeta \mapsto {\tilde{\mathbf{M}}}_{2}(\zeta )+T_j$ in $\Delta ^{+}$. Note that the map ${\tilde{\mathbf{M}}}_{2}^{l_j}\circ {\tilde{\mathbf{M}}}_{1}^{k_j}$ approximates the shift $\zeta \mapsto \zeta +T_j$ for large $\Im \zeta $, hence for $N$ large enough, the map ${\tilde{\mathbf{M}}}_{1}^{-N}\circ {\tilde{\mathbf{M}}}_{2}^{l_j}\circ {\tilde{\mathbf{M}}}_{1}^{k_j+N}$ is very close to the shift by $T_j$ in $C^{1}({\tilde{\mathbf{M}}}_{2}(\Delta ^{+}))$.
Therefore, we can take the maps \begin{equation*} f_j=(\mathbf{M}_{1}^{-N}\circ \mathbf{M}_{2}^{l_j}\circ \mathbf{M}_{1}^{k_j+N})\circ \mathbf{M}_{2}. \end{equation*} Let ${\tilde{f}}_j$ be the map $f_j$ written in the chart~$\zeta $. For sufficiently large $N$, these maps satisfy the \hyperref[inclusion]{\textbf{inclusion}} and \hyperref[covering]{\textbf{covering}} assumptions from the \hyperref[plan-of-the-proof-of-main-theorem]{plan of the proof}. Obviously, each~${\tilde{f}}_j$ contracts in~$\Delta ^{+}$. Now, we have to prove the \hyperref[contraction]{\textbf{contraction}} assumption, i.e., that for $N$ large enough all compositions ${\tilde{f}}_i^{(t)}\circ {\tilde{f}}_j^{(h)}$ contract. Recall that ${\tilde{f}}_j^{(h)}\neq \id$, ${\tilde{f}}_i^{(t)}\neq {\tilde{f}}_i$. Since the map ${\tilde{\mathbf{M}}}_{1}^{-N}\circ {\tilde{\mathbf{M}}}_{2}^{l_j}\circ {\tilde{\mathbf{M}}}_{1}^{k_j+N}$ and its heads approximate shifts in $C^{1}(\Delta ^{+})$, for $N$ large enough the derivative of ${\tilde{f}}_j^{(h)}$ is close to the set ${\tilde{\mathbf{M}}}_{2}'(\Delta ^{+})$. On the other hand, the derivative of ${\tilde{f}}_i^{(t)}$ on the set ${\tilde{f}}_i^{(h)}(\Delta ^{+})$ is arbitrarily close to one. Thus we can make the derivative of the composition ${\tilde{f}}_i^{(t)}\circ {\tilde{f}}_j^{(h)}$ on the set $\Delta ^{+}\cap \left(({\tilde{f}}_j^{(h)})^{-1}\circ {\tilde{f}}_i^{(h)}(\Delta ^{+})\right)$ arbitrarily close to the set ${\tilde{\mathbf{M}}}_{2}'(\Delta ^{+})$. Now \eqref{m2-contracts} yields the \hyperref[contraction]{\textbf{contraction}} assumption. \subsection{Unsolvable monodromy group% \label{unsolvable-monodromy-group}% } In this case, the construction is similar, but instead of the logarithmic chart we use the \hyperref[fatou-coordinates]{Fatou chart} for one of the parabolic monodromy maps, and there are more technical difficulties.
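The behavior of perturbations in a Fatou chart, item b) of \hyperref[inzetachart]{Lemma 1}, can be illustrated numerically in the simplest model, where the Fatou coordinate is exact: $g(z)=\frac{z}{1-z}$ is parabolic with $a=1$, $\lambda =0$, and $\zeta =-\frac 1z$ conjugates $g$ to $\zeta \mapsto \zeta +1$. The values $k=0.7$, $p=2$ and the sample points in the sketch below are arbitrary choices made for illustration only.

```python
# Numeric illustration of item b) of Lemma 1 in the model g(z) = z/(1-z),
# for which ζ = -1/z is an exact Fatou coordinate.  For F(z) = z + k z^{p+1}
# the lemma predicts F̃(ζ) = ζ + (-1)^{p-1} k ζ^{1-p} + o(ζ^{1-p}).
k, p = 0.7, 2                       # arbitrary sample perturbation
F = lambda z: z + k * z**(p + 1)
to_zeta = lambda z: -1.0 / z        # the Fatou coordinate and ...
to_z = lambda zeta: -1.0 / zeta     # ... its inverse

errs = []
for zeta in (-50.0, -200.0, -800.0):    # going to infinity inside S^∞
    F_tilde = to_zeta(F(to_z(zeta)))    # F written in the chart ζ
    predicted = zeta + (-1)**(p - 1) * k * zeta**(1 - p)
    errs.append(abs(F_tilde - predicted))
print(errs)   # the error decays like |ζ|^{-3}, i.e. o(|ζ|^{1-p})
```

In this model one computes ${\tilde{F}}(\zeta )=\zeta -k/\zeta +k^{2}/\zeta ^{3}-\ldots $, so the discrepancy with the predicted translation-plus-$\zeta ^{1-p}$ term is of order $|\zeta |^{-3}$, consistent with the $o(\zeta ^{1-p})$ remainder in the lemma.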
\subsubsection{Genericity assumptions and preliminary considerations% \label{genericity-assumptions-and-preliminary-considerations}% } Let $\mathcal{A}_{n}^{LC2}\subset \mathcal{A}_{n}'$ be the set of polynomial foliations such that \begin{itemize} \item $g_{1}≔[\mathbf{M}_{1},\mathbf{M}_{2}]\in A_{1}$, $g_{2}≔[\mathbf{M}_{1}^{-1},\mathbf{M}_{2}]\in A_{1}$; \item $g_{3}≔[g_{2},g_{1}]$ is not the identity map\DUfootnotemark{id19}{id20}{2}; \item $\mu _{1}\notin \mathbb{R}$; \item the numbers $1$, $\mu _{1}$, $\mu _{1}^{-1}$, $\mu _{2}^{-1}$, $\mu _{1}^{-1}\mu _{2}^{-1}$, $\mu _{1}\mu _{2}^{-1}$ are all different. \end{itemize} \DUfootnotetext{id20}{id19}{2}{% \hyperref[unsolvable-details]{Lemma 3} implies that for a~generic~$\mathcal{F}$ we have $g_{3}\in A_{3}$, but our construction works for a~slightly broader set of~foliations. } Due to \hyperref[unsolvable-details]{Lemma 3} and the fact that $\sum \lambda _i=1$ is the only relation on $\lambda _i$, the complement $\mathcal{A}_{n}\smallsetminus \mathcal{A}_{n}^{LC2}$ is a~real analytic subset of $\mathcal{A}_{n}$. Consider a~foliation~$\mathcal{F}\in \mathcal{A}_{n}^{LC2}$. Put $g_{4}≔[g_{3},g_{2}]$. Due to \hyperref[lemma-parabolic-commutator]{Lemma 2}, $g_{4}\neq \id$. Let $\zeta $ be a \hyperref[fatou-coordinates]{Fatou chart} for $g_{1}$ in the negative sector. \DUtopic[]{ \DUtitle[title]{Convention} In this section, a~tilde means that a~map is written in the chart~$\zeta $. In particular, ${\tilde{g}}_{1}(\zeta )= \zeta +1$.
} \hyperref[lemma-parabolic-commutator]{Lemma 2} and Item b) of \hyperref[inzetachart]{Lemma 1} imply \begin{align*} g_{1}(z)&=z+\frac{g_{1}''(0)}{2}z^{2}+o(z^{2})&{\tilde{g}}_{1}(\zeta )&=\zeta +1+o(1)\\ g_{2}(z)&=z+\frac{g_{2}''(0)}{2}z^{2}+o(z^{2})&{\tilde{g}}_{2}(\zeta )&=\zeta +\frac{g_{2}''(0)}{g_{1}''(0)}+o(1)\\ g_{3}(z)&=z+az^{p+1}+o(z^{p+1})&{\tilde{g}}_{3}(\zeta )&=\zeta -\frac{(-2)^pa}{g_{1}''(0)^p}\zeta ^{1-p}+o(\zeta ^{1-p})\\ g_{4}(z)&=z+\frac{a(p-1)g_{2}''(0)}2z^{p+2}+o(z^{p+2})&{\tilde{g}}_{4}(\zeta )&=\zeta +\frac{(-2)^pa(p-1)g_{2}''(0)}{g_{1}''(0)^{p+1}}\zeta ^{-p}+o(\zeta ^{-p}), \end{align*} where $a\in \mathbb{C}\smallsetminus \set{0}$. Put ${\tilde{a}}=-\frac{(-2)^pa}{g_{1}''(0)^p}$, ${\tilde{b}}=\frac{(-2)^pa(p-1)g_{2}''(0)}{g_{1}''(0)^{p+1}}$. Due to \eqref{commutators-d2}, $g_{2}''(0)=-\mu _{1}g_{1}''(0)$, hence \begin{align*} {\tilde{g}}_{1}(\zeta )&=\zeta +1+o(1)\\ {\tilde{g}}_{2}(\zeta )&=\zeta -\mu _{1}+o(1)\\ {\tilde{g}}_{3}(\zeta )&=\zeta +\frac{{\tilde{a}}}{\zeta ^{p-1}}+o\left(\frac{1}{\zeta ^{p-1}}\right)\\ {\tilde{g}}_{4}(\zeta )&=\zeta +\frac{{\tilde{b}}}{\zeta ^p}+o\left(\frac{1}{\zeta ^p}\right). \end{align*} Since $\frac{{\tilde{b}}}{{\tilde{a}}}=-\frac{(p-1)g_{2}''(0)}{g_{1}''(0)}=(p-1)\mu _{1}\notin \mathbb{R}$, each number $T\in \mathbb{C}$ can be uniquely represented as \begin{equation} T=\xi (T){\tilde{a}}+\eta (T){\tilde{b}},\qquad \xi (T), \eta (T)\in \mathbb{R}. \phantomsection \label{xi-eta} \end{equation} We will construct $f_j$ as compositions of the maps $g_{1}, g_{2}, g_{3}, g_{4}$. \subsubsection{Construction of $f_j$% \label{construction-of-f-j}% } Let $\Delta ^{+}$ be a small disc that we shall choose later. For now, we just say that one of the maps ${\tilde{g}}_{2}$, ${\tilde{g}}_{2}^{-1}$, denoted ${\tilde{g}}_{2}^\pm $, contracts in $\Delta ^{+}$, \begin{equation} \max_{\zeta \in \Delta ^{+}}|({\tilde{g}}_{2}^\pm )'(\zeta )|=q<1.
\phantomsection \label{g2-contracts} \end{equation} Since ${\tilde{g}}_{2}(\zeta )-\zeta \rightarrow -\mu _{1}$ as $\zeta \rightarrow \infty $, we may and will assume that $|{\tilde{g}}_{2}^\pm (\zeta )-\zeta |<|\mu _{1}|+1$ for $\zeta \in \Delta ^{+}$. As in the \hyperref[multiplicative-density]{previous case}, take a~disc $\Delta ^{-}\subset \Delta ^{+}$ and a~tuple of vectors $T_j\in \mathbb{C}$ such that \begin{equation} \Delta ^{-}\subset \bigcup _j ({\tilde{g}}_{2}^\pm (\Delta ^{-})+T_j)\subset \bigcup _j ({\tilde{g}}_{2}^\pm (\Delta ^{+})+T_j)\subset \Delta ^{+}. \phantomsection \label{g2-tj} \end{equation} It is easy to see that the right inclusion implies $|T_j+\mu _{1}|<2$ or $|T_j-\mu _{1}|<2$, hence $|T_j|<|\mu _{1}|+2$. Put $\xi _j=\xi (T_j)$, $\eta _j=\eta (T_j)$, see \eqref{xi-eta}. Similarly to the \hyperref[multiplicative-density]{previous case}, we will choose the compositions ${\tilde{f}}_j$ so that they will approximate the maps $\zeta \mapsto {\tilde{g}}_{2}^\pm (\zeta )+T_j$ in $C^{1}(\Delta ^{+})$. It turns out that we can use the compositions ${\tilde{f}}_j={\tilde{F}}_j\circ {\tilde{g}}_{2}^\pm $, where \begin{equation} {\tilde{F}}_j ≔ {\tilde{g}}_{1}^{-N}\circ {\tilde{g}}_{3}^{k_j}\circ {\tilde{g}}_{4}^{l_j}\circ {\tilde{g}}_{1}^{N},\quad k_j=[N^{p-1}\xi _j],\quad l_j=[N^p\eta _j]. \phantomsection \label{f-j} \end{equation} Here $N$ is a large number that we will choose later. Let us prove that ${\tilde{F}}_j$ approximate the translations $\zeta \mapsto \zeta +T_j$ in $C^{1}({\tilde{g}}_{2}^\pm (\Delta ^{+}))$. \begin{lemma}\phantomsection\label{f-j-head} For $N$ large enough, each head ${\tilde{F}}_j^{(h)}$ such that $\left(F_j^{(h)}\right)'(0)=1$ is close to a~translation $\zeta \mapsto \zeta +T$ in $C^{1}({\tilde{g}}_{2}^\pm (\Delta ^{+}))$. Moreover, $\Re T>-|\Re \mu _{1}|-2$ and $|\Im T|$ is bounded by a~number that does not depend on $\Delta ^{+}$. 
In particular, \setcounter{listcnt0}{0} \begin{list}{\alph{listcnt0})} { \usecounter{listcnt0} \setlength{\rightmargin}{\leftmargin} } \item ${\tilde{g}}_{1}^n$ is the translation by $n$; \item ${\tilde{g}}_{4}^{n}\circ {\tilde{g}}_{1}^N$, $|n|\leqslant |l_j|$, is close to the translation by~$N+\frac{n{\tilde{b}}}{N^p}$; \item ${\tilde{g}}_{3}^{n}\circ {\tilde{g}}_{4}^{l_j}\circ {\tilde{g}}_{1}^N$, $|n|\leqslant |k_j|$, is close to the translation by~$N+\frac{n{\tilde{a}}}{N^{p-1}}+{\tilde{b}}\eta _j$; \item ${\tilde{g}}_{1}^{-n}\circ {\tilde{g}}_{3}^{k_j}\circ {\tilde{g}}_{4}^{l_j}\circ {\tilde{g}}_{1}^N$, $0\leqslant n\leqslant N$, is close to the translation by $T_j+N-n$. \end{list} \end{lemma} \begin{proof} We shall prove this lemma only for $\xi _j>0$ and $\eta _j>0$. In the other cases, it is enough to replace ${\tilde{g}}_{3}$ and (or) ${\tilde{g}}_{4}$ by their inverses. Let us prove the assertions of the lemma for ${\tilde{F}}_j^{(h)}={\tilde{g}}_{4}^{l_j}\circ {\tilde{g}}_{1}^N$. Recall that ${\tilde{g}}_{4}(\zeta )-\zeta ={\tilde{b}}\zeta ^{-p}+o(\zeta ^{-p})$, hence \begin{equation*} {\tilde{F}}_j^{(h)}(\zeta )={\tilde{g}}_{4}^{l_j}(\zeta +N)=\zeta +N+\frac{{\tilde{b}}l_j}{(\zeta +N)^p}+o(1)=\zeta +N+\frac{{\tilde{b}}[N^p\eta _j]}{N^p}+o(1)=\zeta +N+{\tilde{b}}\eta _j+o(1) \end{equation*} as $N\rightarrow \infty $, $\zeta \in {\tilde{g}}_{2}^\pm (\Delta ^{+})$. Therefore, ${\tilde{F}}_j^{(h)}$ is $C^{0}$-close to the translation by $N+{\tilde{b}}\eta _j$. Let us prove that the derivative of ${\tilde{F}}_j^{(h)}$ is close to one. Due to Item c) of \hyperref[inzetachart]{Lemma 1}, $\log {\tilde{g}}_{4}'(\zeta )=o({\tilde{g}}_{4}(\zeta )-\zeta )=o(\zeta ^{-p})$ as $\zeta \rightarrow \infty $, hence \begin{equation*} \left|\log\left({\tilde{F}}_j^{(h)}\right)'(\zeta )\right|=\left|\sum _{k=0}^{l_j-1}\log {\tilde{g}}_{4}'({\tilde{g}}_{4}^k(\zeta +N))\right|\leqslant N^p\eta _j\left|\log {\tilde{g}}_{4}'(N+O(1))\right|=o(1) \end{equation*} as $N\rightarrow \infty $, $\zeta \in \Delta ^{+}$.
Thus \begin{equation*} \left({\tilde{F}}_j^{(h)}\right)'(\zeta )=1+o(1). \end{equation*} Finally, in this case ${\tilde{F}}_j^{(h)}$ is $C^{1}$-close to~the~translation by $N+{\tilde{b}}\eta _j$. All particular cases listed in the statement of the lemma can be proved in the same way. Also, the estimate $|T_j|<|\mu _{1}|+2$ yields a~uniform estimate on the imaginary parts of the translation vectors. Consider a~head~${\tilde{F}}_j^{(h)}$, $\left(F_j^{(h)}\right)'(0)=1$, not listed explicitly in the statement of the lemma. Since $g_{1}$ and $g_{2}$ have no heads $g$ with $g'(0)=1$, ${\tilde{F}}_j^{(h)}$ differs from a~head of type b) or c) by a~composition with a~head ${\tilde{g}}$ either of ${\tilde{g}}_{3}^{\pm 1}$, or of ${\tilde{g}}_{4}^{\pm 1}$ such that $g'(0)=1$. Since ${\tilde{g}}$ is applied to points $\zeta $ with $\Re \zeta =N+O(1)$, it can be made arbitrarily $C^{1}$-close to its “translational part” $\zeta \mapsto \zeta +T$, $T=\lim_{\zeta \rightarrow \infty }{\tilde{g}}(\zeta )-\zeta $. Thus ${\tilde{F}}_j^{(h)}$ is close to a~translation $\zeta \mapsto \zeta +T'$ with bounded $\Im T'$ as well. \end{proof} \subsubsection{Choice of $\Delta ^{+}$% \label{choice-of}% } The construction relies on the following simple observation. \begin{lemma}\phantomsection\label{u-exists} For a~collection of~hyperbolic maps $F_j:(\mathbb{C}, 0)\rightarrow (\mathbb{C}, 0)$, $F_j'(0)\neq 1$, there exists an~arbitrarily thick strip \begin{equation} U=\Set{\zeta |\Re \zeta >R, n_{-}<\Im \zeta <n_{+}},\quad n_{+}-n_{-}>C, \phantomsection \label{u-thick} \end{equation} such that $U$ does not overlap its images under ${\tilde{F}}_j$. \end{lemma} \begin{proof} Recall that ${\tilde{F}}_j(\zeta )=k_j\zeta +b_j+o(1)$, see \hyperref[inzetachart]{Lemma 1}. 
For a~map~$F_j$ with $k_j\in \mathbb{R}$, the affine term $\zeta \mapsto k_j\zeta +b_j$ of~${\tilde{F}}_j$ has an invariant horizontal line $\Im \zeta =y_j≔\frac{\Im b_j}{1-k_j}$, and for $|\Im \zeta -y_j|>\frac{C}{|k_j-1|}$ we have $|\Im \zeta -\Im (k_j\zeta +b_j)|>C$. Consider a~strip $U$ such that $|\Im \zeta -y_j|>\frac{C}{|k_j-1|}$ whenever $k_j\in \mathbb{R}$. Clearly, for $R$ large enough, all maps~${\tilde{F}}_j$ will be close enough to their respective affine terms, hence ${\tilde{F}}_j(U)\cap U=\varnothing $. Finally, we enlarge $R$ so that the assertion is satisfied for the maps ${\tilde{F}}_j$ with $k_j\notin \mathbb{R}$. \end{proof} Let $C_{1}$ be the estimate on $|\Im T|$ from \hyperref[f-j-head]{Lemma 7}; put $C_{2}=\max(C_{1}, |\Re \mu _{1}|+2)$. Fix a~strip \eqref{u-thick}, $C=2C_{2}+|\Im \mu _{1}|$, such that \begin{equation} \left(f_i^{(h)}\right)'(0)\neq \left(f_j^{(h)}\right)'(0)\quad\Rightarrow \quad \left(f_i^{(h)}\right)^{-1}\circ \left(f_j^{(h)}\right)(U)\cap U=\varnothing . \phantomsection \label{u-hyperbolic} \end{equation} Recall that ${\tilde{g}}_{2}(\zeta )=\zeta -\mu _{1}+o(1)$. Hence there exists a~small disc $\Delta \subset U$ such that the distance between $\mathbb{C}\smallsetminus U$ and $\Delta \cup {\tilde{g}}_{2}(\Delta )$ is greater than $C_{2}$, and $|{\tilde{g}}_{2}(\zeta )-\zeta +\mu _{1}|<1$ for $\zeta \in \Delta $. Shrinking $\Delta $ if necessary, we may and will assume that $\forall \zeta \in \Delta $ we have $|{\tilde{g}}_{2}'(\zeta )|\neq 1$. If $|{\tilde{g}}_{2}'(\zeta )|<1$ in $\Delta $, then we put $\Delta ^{+}=\Delta $, $g_{2}^\pm =g_{2}$, otherwise we put $\Delta ^{+}={\tilde{g}}_{2}(\Delta )$ and $g_{2}^\pm =g_{2}^{-1}$. Then ${\tilde{g}}_{2}^\pm $ contracts in $\Delta ^{+}$, see \eqref{g2-contracts}.
Finally, since the distance between $\mathbb{C}\smallsetminus U$ and $\Delta \cup {\tilde{g}}_{2}(\Delta )$ is greater than $C_{2}$, \hyperref[f-j-head]{Lemma 7} implies that for $N$ large enough, for each head ${\tilde{F}}_j^{(h)}$ of ${\tilde{F}}_j$ \begin{equation} \left(F_j^{(h)}\right)'(0)=1\quad\Rightarrow \quad {\tilde{F}}_j^{(h)}(\Delta ^{+})\subset U,\qquad \left|\left({\tilde{F}}_j^{(h)}\right)'(\zeta )\right|<\frac{1}{\sqrt{q}},\qquad \left|\left({\tilde{F}}_j^{(t)}\right)'(\zeta )\right|<\frac{1}{\sqrt{q}} \phantomsection \label{u-parabolic} \end{equation} for $\zeta \in {\tilde{g}}_{2}^\pm (\Delta ^{+})$, where $q$ is given by \eqref{g2-contracts}. \subsubsection{Proof of the assumptions% \label{proof-of-the-assumptions}% } Let us prove that for $N$ large enough, the compositions ${\tilde{f}}_j$ satisfy the assumptions listed in the \hyperref[plan-of-the-proof-of-main-theorem]{plan of the proof}. For the \hyperref[inclusion]{\textbf{inclusion}} and \hyperref[covering]{\textbf{covering}} assumptions, this immediately follows from \hyperref[f-j-head]{Lemma 7} and the definition of $T_j$. Let us prove that \eqref{u-hyperbolic} and \eqref{u-parabolic} imply the \hyperref[contraction]{\textbf{contraction}} property. Consider a~composition of the form ${\tilde{f}}_i^{(t)}\circ {\tilde{f}}_j^{(h)}$. Recall that ${\tilde{f}}_i$ and ${\tilde{f}}_j$ are compositions of the commutators ${\tilde{g}}_{1}^{\pm 1}$ and ${\tilde{g}}_{2}^{\pm 1}$: indeed, $g_{3}=[g_{2},g_{1}]$ and $g_{4}=[g_{3},g_{2}]$ are themselves such compositions.
Therefore, we can rewrite ${\tilde{f}}_i^{(h)}$, ${\tilde{f}}_j^{(h)}$ and the corresponding tails as \begin{align*} {\tilde{f}}_i^{(h)}&=\left({\tilde{g}}_k^{\pm 1}\right)^{(h)}\circ {\tilde{f}}_i^{(ph)},&{\tilde{f}}_i^{(t)}&={\tilde{f}}_i^{(pt)}\circ \left({\tilde{g}}_k^{\pm 1}\right)^{(t)},\\ {\tilde{f}}_j^{(h)}&= \left({\tilde{g}}_l^{\pm 1}\right)^{(h)}\circ {\tilde{f}}_j^{(ph)},&{\tilde{f}}_j^{(t)}&={\tilde{f}}_j^{(pt)}\circ \left({\tilde{g}}_l^{\pm 1}\right)^{(t)}, \end{align*} where \begin{itemize} \item $f_i^{(ph)}$, $f_i^{(pt)}$, $f_j^{(ph)}$, $f_j^{(pt)}$ are compositions of $g_{1}^{\pm 1}$ and $g_{2}^{\pm 1}$; \item $\set{k, l}\subset \set{1, 2}$; \item $\left(g_k^{\pm 1}\right)^{(h)}$ and $\left(g_l^{\pm 1}\right)^{(h)}$ may be empty but may not coincide with $g_k^{\pm 1}$ or $g_l^{\pm 1}$. \end{itemize} If the maps~$\left(g_k^{\pm 1}\right)^{(h)}$ and~$\left(g_l^{\pm 1}\right)^{(h)}$ have different multipliers, then \eqref{u-hyperbolic} and \eqref{u-parabolic} imply that $(f_j^{(h)})^{-1}\circ f_i^{(h)}(\Delta ^{+})\cap \Delta ^{+}=\varnothing $. Next, suppose that $\left(g_k^{\pm 1}\right)^{(h)}$ and~$\left(g_l^{\pm 1}\right)^{(h)}$ have equal multipliers. It is easy to check that our assumption on~$\mu _{1}$, $\mu _{2}$ implies that in this case $\left(g_k^{\pm 1}\right)^{(t)}\circ \left(g_l^{\pm 1}\right)^{(h)}$ is one of the maps $\id$, $g_k^{\pm 1}$, $g_l^{\pm 1}$. In the first case, we just eliminate the middle part from \begin{equation*} {\tilde{f}}_i^{(t)}\circ {\tilde{f}}_j^{(h)}={\tilde{f}}_i^{(pt)}\circ \left[\left({\tilde{g}}_k^{\pm 1}\right)^{(t)}\circ \left({\tilde{g}}_l^{\pm 1}\right)^{(h)}\right]\circ {\tilde{f}}_j^{(ph)}, \end{equation*} and in the two latter cases we can regard the middle part either as a~part of~$f_i^{(t)}$, or as a part of $f_j^{(h)}$. Hence we can assume that both $f_i^{(t)}$ and $f_j^{(h)}$ are parabolic maps.
Finally, due to \eqref{u-parabolic}, for parabolic $f_i^{(t)}$ and $f_j^{(h)}$ we have \begin{equation*} \left|\left(f_i^{(t)}\circ f_j^{(h)}\right)'(\zeta )\right|<q\times \frac{1}{\sqrt{q}}\times \frac{1}{\sqrt{q}}=1. \end{equation*} Hence, the maps $f_j$ satisfy the \hyperref[contraction]{\textbf{contraction}} requirement. \section{Construction of limit cycles% \label{construction-of-limit-cycles}% } Consider a polynomial foliation $\mathcal{F}\in \mathcal{A}_n'$. Suppose that there exist domains $\Delta ^{-}\subset \Delta ^{+}$, a chart $\zeta $ and a tuple of monodromy maps $f_j$ that satisfy the assumptions listed in the \hyperref[plan-of-the-proof-of-main-theorem]{plan of the proof}. In this section we shall show that such a foliation satisfies the assertions of \hyperref[main-theorem]{Main Theorem}. The proof is based on the following simple observation. \begin{lemma}\phantomsection\label{independent-cycles} Suppose that a collection of limit cycles $c_j$ satisfies the following: \setcounter{listcnt0}{0} \begin{list}{\alph{listcnt0})} { \usecounter{listcnt0} \setlength{\rightmargin}{\leftmargin} } \item all cycles $c_j$ are simple, i.e., have no self-intersections; \item their multipliers $\mu (c_j)$ satisfy $0<|\mu (c_j)|<|\mu (c_{1})\cdots \mu (c_{j-1})|$; \item $c_i\cap c_j=\varnothing $ for $i\neq j$. \end{list} Then these cycles are homologically independent. \end{lemma} \begin{proof} Since all these cycles are simple and do not intersect each other, a possible dependency has the form $\pm [c_{i_{1}}] \pm [c_{i_{2}}]\pm \ldots \pm [c_{i_s}]=0$, $i_{1}<i_{2}<\ldots <i_s$.
However, such a dependence implies an equality of multipliers, $\mu (c_{i_{1}})^{\pm 1}\mu (c_{i_{2}})^{\pm 1}\ldots \mu (c_{i_{s-1}})^{\pm 1}=\mu (c_{i_s})$, which is impossible due to the inequality \begin{equation*} |\mu (c_{i_s})|<|\mu (c_{1})\ldots \mu (c_{i_s-1})|\leqslant |\mu (c_{i_{1}})\ldots \mu (c_{i_{s-1}})|\leqslant |\mu (c_{i_{1}})^{\pm 1} \mu (c_{i_{2}})^{\pm 1}\ldots \mu (c_{i_{s-1}})^{\pm 1}|. \end{equation*}\end{proof} \DUadmonition[note]{ \DUtitle[note]{Note} In earlier papers \cite{Il78,SRO98} the authors used similar arguments, but they estimated $\int _{c_j} x\,dy-y\,dx$ instead of multipliers. This led to much more complicated computations. } As we mentioned above, \hyperref[hutchinson]{Lemma 5} enables us to construct cycles with arbitrarily small multipliers, but these cycles may be neither simple nor disjoint. The following two lemmas fill these gaps. \begin{lemma}\phantomsection\label{avoid-finite} Let $D\subset \mathbb{R}^{n}$ be a closed disc, $g_{1},g_{2}:D\rightarrow D$ two injective continuous maps such that $g_{1}(D)\cap g_{2}(D)=\varnothing $, and $\Sigma \subset D$ a finite subset. Then for $m$ large enough there exists a periodic orbit \begin{equation} p_{0}, p_{1}, \ldots , p_m=p_{0}, p_{i+1}\in \set{g_{1}(p_i), g_{2}(p_i)} \phantomsection \label{periodic} \end{equation} that never meets $\Sigma $. \end{lemma} \begin{lemma}\phantomsection\label{simple-subcycle} Given \begin{itemize} \item an open subset $U\Subset \Delta ^{-}$; \item two maps $g_{1}=f_{i_{1}}\circ \ldots \circ f_{i_s}$, $g_{2}=f_{j_{1}}\circ \ldots \circ f_{j_r}$, $g_i:\Delta ^{+}\rightarrow U$ with disjoint images; \item a finite set $\Sigma \subset S$; \item a positive number $\varepsilon $, \end{itemize} there exists a finite set $\Sigma '\subset \Delta ^{+}$ such that the following holds. Suppose that a~periodic orbit \eqref{periodic} never visits $\Sigma '$. Since $p_{0}$ is a fixed point of some monodromy map, it corresponds to a~cycle~$c$.
Let $c'$ be its simple subcycle. Then $c'$ visits $U$, never visits $\Sigma $, and the modulus of its multiplier is less than~$\varepsilon $. \end{lemma} Let us deduce \hyperref[main-theorem]{Main Theorem} from these two lemmas. \begin{proof}[{Proof of Main Theorem}] Fix a~sequence of points $x_k$ in the interior of $\Delta ^{-}$ dense in $\Delta ^{-}$. Let $U_k$ be the intersection of $\Delta ^{-}$ with the $(1/k)$-neighborhood of $x_k$. Now we construct the sequence $c_j$ by induction. Suppose that simple homologically independent cycles $c_{1}, c_{2},\ldots , c_{k-1}$ are already constructed and have multipliers $\mu (c_j)$, $|\mu (c_j)|<1$. Put $\Sigma =\bigcup _j c_j\cap S$, $\varepsilon = |\mu (c_{1})\mu (c_{2})\cdots \mu (c_{k-1})|$. Take two disjoint domains $V_{1}, V_{2}\subset U_k$. Due to \hyperref[hutchinson]{Lemma 5}, there exist two contracting compositions $g_{1}:\Delta ^{+}\rightarrow V_{1}$ and $g_{2}:\Delta ^{+}\rightarrow V_{2}$. According to the previous two lemmas, there exists a simple cycle $c_k$ with multiplier of modulus less than $\varepsilon $ that intersects $U_k$ but does not visit $\Sigma $. Note that this cycle, as well as all previous ones, projects to a curve of the form $\gamma _{l_{1}}\gamma _{l_{2}}\ldots \gamma _{l_r}$ on the infinite line, and $\gamma _i\cap \gamma _j=\set{O}$ for $i\neq j$. Thus if $c_i\cap c_k\neq \varnothing $ for some $i<k$, then $c_k\cap c_i\cap S\neq \varnothing $, hence $c_k\cap \Sigma \neq \varnothing $, which contradicts the choice of $c_k$. Due to \hyperref[independent-cycles]{Lemma 9}, the cycles $c_{1}, \ldots , c_k$ are homologically independent. \end{proof} Now let us prove the lemmas formulated above. \begin{proof}[{Proof of Lemma 10}] Fix a large number $m$. Due to the Brouwer fixed point theorem, for each word $w=w_{1}\ldots w_m$, $w_i\in \set{1, 2}$, the corresponding map \begin{equation*} g_w=g_{w_{1}}\circ \ldots \circ g_{w_m}:D\rightarrow D \end{equation*} has a fixed point.
Our goal is to find a word $w$ such that the corresponding periodic orbit will never visit $\Sigma $. Since $g_{1}(D)\cap g_{2}(D)=\varnothing $, the images of $2^m$ maps $g_w$, $|w|=m$, are pairwise disjoint. Hence, given a point $p\in \Sigma $, there is at most one word $w$ such that $g_w(p)=p$. Their cyclic shifts are the only words $w$ such that the corresponding periodic orbit visits $\Sigma $, thus there are at most $|\Sigma |\cdot m$ of them. Clearly, for $m$ large enough we have $|\Sigma |\cdot m<2^m$, hence there exists a periodic orbit of length~$m$ that never visits~$\Sigma $. \end{proof} \begin{proof}[{Proof of Lemma 11}] Consider a~composition $g_w=g_{w_{1}}\circ \ldots \circ g_{w_s}=\mathbf{M}_{i_{1}}\circ \ldots \circ \mathbf{M}_{i_k}$, the corresponding periodic orbit \eqref{periodic} and the corresponding limit cycle $c$. If \eqref{periodic} does not visit the finite set $\Sigma _{1}=\bigcup_{g_i^{(h)}}\left(g_i^{(h)}\right)^{-1}(\Sigma )$, then $c\cap \Sigma =\varnothing $. Subcycles of $c$ correspond to representations $g_w=g^{(t)}\circ g^{(m)}\circ g^{(h)}$ with non-empty $g^{(m)}$ such that $g^{(h)}(p_{0})$ is a fixed point of~$g^{(m)}$. Let us prove that sufficiently long compositions~$g^{(m)}$ correspond to cycles $c'$ that satisfy the assertions of the lemma, and fixed points of~“short” compositions can be avoided by avoiding a~finite set $\Sigma _{2}$. If $g^{(m)}$ is not a subcomposition of one of $f_j$, then it can be represented in the form \begin{equation} g^{(m)}=f_{s_{1}}^{(h)}\circ f_{s_{2}}\circ \ldots \circ f_{s_{k-1}}\circ f_{s_k}^{(t)}. \phantomsection \label{gm} \end{equation} Recall that $f_{s_k}^{(t)} \circ f_{s_{1}}^{(h)}$ contracts in the chart $\zeta $ due to \hyperref[contraction]{\textbf{contraction}} property, thus $(f_{s_{1}}^{(h)})^{-1} \circ g^{(m)} \circ f_{s_{1}}^{(h)}$ contracts, and we have \begin{equation} \mu (c') \leqslant \left(\max_{\zeta \in \Delta ^{+}}f_i'(\zeta )\right)^{k-2}. 
\phantomsection \label{multiplier} \end{equation} Let $\operatorname{len}(\cdot )$ be the length of a composition of the maps $\mathbf{M}_j$. For sufficiently large $L$, $\operatorname{len}(g^{(m)})\geqslant L$ implies that the subcycle $c'$ corresponding to $g^{(m)}$ satisfies the assertions of the lemma. Indeed, the multiplier of $c'$ can be made arbitrarily small due to \eqref{multiplier}; for any $L > \max (\operatorname{len} g_{1}, \operatorname{len} g_{2})$, the corresponding $c'$ visits $U$ because it contains a point of the form $g_{w_{i+1}}\circ \ldots \circ g_{w_s}(p_{0})\in U$. Now, it is sufficient to avoid fixed points of~“short” compositions $g^{(m)}$, $\operatorname{len}(g^{(m)})<L$. Let us prove that none of the compositions $g^{(m)}$ restricts to the identity on $g^{(h)}(\Delta ^{+})$. Suppose the contrary. We have eliminated from the $f_j$ all subcompositions equal to the identity, so $g^{(m)}$ cannot be a~subcomposition of~some~$f_j$. Thus $g^{(m)}$ has the form \eqref{gm}, and \eqref{multiplier} yields that $g^{(m)}$ contracts. Hence $g^{(m)}\neq \id$. Therefore, each composition $g^{(m)}$ has only finitely many fixed points in $g_{w_j}^{(h)}(\Delta ^{+})$, where $g_{w_j}^{(h)}$ is defined by $g^{(h)}=g_{w_j}^{(h)}\circ g_{w_{j+1}}\circ \ldots \circ g_{w_s}$. In order to guarantee that a subcycle $c'$ corresponds to a long composition $g^{(m)}$, $\operatorname{len} g^{(m)}>L$, it is sufficient to require that the periodic orbit \eqref{periodic} avoids the finite set \begin{equation*} \Sigma _{2}=\set{(g_{w_j}^{(h)})^{-1}\Fix g^{(m)}|\operatorname{len}(g^{(m)})<L}. \end{equation*} The required exceptional set is $\Sigma '=\Sigma _{1}\cup \Sigma _{2}$. \end{proof} \section{Acknowledgements% \label{acknowledgements}% } We proved these results and wrote the first version of this article during our five-month visit to Mexico (Mexico City, then Cuernavaca). We are very grateful to UNAM (Mexico) and HSE (Moscow) for supporting this visit.
Our deep thanks go to Laura Ortiz Bobadilla, Ernesto Rosales-González and Alberto Verjovsky for the invitation to Mexico and for fruitful discussions. We are thankful to Arsenij Shcherbakov for useful discussions about technical details of \cite{SRO98}. We are also grateful to Yulij Ilyashenko for his constant encouragement, and to Victor Kleptsyn for interesting discussions.
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
  {-3.5ex \@plus -1ex \@minus -.2ex}%
  {2.3ex \@plus.2ex}%
  {\normalfont\bf\center}}
\renewcommand\subsection{\@startsection{subsection}{1}{\z@}%
  {-3.5ex \@plus -1ex \@minus -.2ex}%
  {2.3ex \@plus.2ex}%
  {\normalfont\bf}}
\makeatother
\setlength{\topmargin}{-0.8cm} \setlength{\oddsidemargin}{0.6cm} \setlength{\evensidemargin}{0.65cm} \setlength{\textheight}{23.5cm} \setlength{\textwidth}{14.5cm} \pagestyle{myheadings} \markboth{\hfill{\sc y.hamana and h.matsumoto}\hfill}{\hfill{\sc hitting time of brownian motion with drift}\hfill} \date{\today} \raggedbottom \newcommand{\ds}{\displaystyle} \newcommand{\bR}{\mathbf{R}} \newcommand{\bN}{\mathbf{N}} \newcommand{\bZ}{\mathbf{Z}} \newcommand{\bC}{\mathbf{C}} \newcommand{\dd}{\mathrm{d}} \newcommand{\e}{\varepsilon} \newcommand{\pa}{\partial} \newcommand{\I}{\operatorname{Id}} \newcommand{\eil}{\overset{(\text{\rm law})}{=}} \newcommand{\wt}{\widetilde} \newcommand{\wh}{\widehat} \newcommand{\res}{\operatorname{Res}} \newcommand{\la}{\langle} \newcommand{\ra}{\rangle} \newcommand{\iti}{\boldsymbol{1}} \newcommand{\sF}{\mathscr{F}} \newtheoremstyle{new-thm} {3pt} {3pt} {\it} {0pt} {\bf} {.} {.5em} {} \newtheoremstyle{new-def} {3pt} {3pt} {\rm} {0pt} {\bf} {.} {.5em} {} \theoremstyle{new-thm} \newtheorem{thm}{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lemma}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \theoremstyle{new-def} \newtheorem{defn}{Definition}[section] \newtheorem{example}[thm]{Example} \newtheorem{rem}[thm]{Remark} \numberwithin{equation}{section} \numberwithin{thm}{section} \begin{document} \vspace*{1cm} \begin{center} {\Large\bf Hitting times to spheres of Brownian motions\\ with and without drifts} \end{center} \bigskip \begin{center} Yuji Hamana and Hiroyuki Matsumoto \end{center} \bigskip \begin{quote} {\bf Abstract.} Explicit formulae for the densities of the first hitting times to the sphere of Brownian motions with drifts are given.
We need to consider the joint distributions of the first hitting times to the sphere and the hitting positions of the standard Brownian motion, and explicit expressions for their Laplace transforms are given, which are different from the known formulae in the literature and are of independent interest. 2010 {\it Mathematics Subject Classification}\ : Primary 60J65 \\ {\it keywords}\ : \ Brownian motion, first hitting time, Bessel process \end{quote} \section{Introduction} For $d\geqq2$, we denote by $B=\{B_t\}_{t\geqq0}$ a standard $d$-dimensional Brownian motion starting from a fixed point $x\in\bR^d$ which is defined on a probability space $(\Omega,\sF,P)$. Throughout we assume $x\ne0$. Letting $v\in\bR^d$ be a non-zero constant vector, we consider a Brownian motion $B^{(v)}=\{B^{(v)}_t\}_{t\geqq0}$ with drift $v$ given by $B_t^{(v)}=B_t+v t$. It is a very simple and fundamental diffusion process, but we sometimes encounter difficulties in obtaining explicit formulae for it. In this paper we consider the first hitting times $\sigma$ and $\sigma^{(v)}$ of $B$ and $B^{(v)}$, respectively, to the sphere $S^{d-1}_r$ with radius $r>0$ and centered at the origin. The main purpose is to give an explicit expression for the density of $\sigma^{(v)}$. For this we need to give an explicit expression for the Laplace transform of the joint density of $(\sigma,B_\sigma)\in(0,\infty)\times S^{d-1}_r$. The formula for $(\sigma,B_\sigma)$ obtained in this article is of a quite different form from the formulae obtained by Aizenman-Simon \cite{AS} and Wendel \cite{wendel}. The density $p_\nu(t;x)$, $\nu=\frac{d-2}{2}$ being the index, of $\sigma$ has been studied for a long time. See \cite{K, PY} and the references therein for the Laplace transforms and related topics. Recently, Byczkowski and Ryznar \cite{BR}, Uchiyama \cite{U} and the authors of the present paper \cite{HM-I, HM-T, HM-E} have studied the explicit expressions and the asymptotics of the densities themselves and the tail probabilities.
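For orientation, in the classical case $d=3$ (index $\nu=\tfrac12$, $|x|>r$) the density has the well-known closed form $p_{1/2}(t;x)=\frac{r}{|x|}\,\frac{|x|-r}{\sqrt{2\pi t^3}}\,e^{-(|x|-r)^2/2t}$, whose Laplace transform is $\frac{r}{|x|}\,e^{-(|x|-r)\sqrt{2\lambda}}$. The following stdlib-only Python sketch (the helper names are ours, not from the paper) checks this transform numerically:

```python
import math

def p_half(t, x, r):
    # Density of the first hitting time of the sphere of radius r for
    # 3-dimensional Brownian motion started at distance x > r (nu = 1/2).
    a = x - r
    return (r / x) * a / math.sqrt(2 * math.pi * t ** 3) * math.exp(-a * a / (2 * t))

def laplace(f, lam, t0=1e-8, t1=80.0, n=200_000):
    # Trapezoidal rule on a geometric grid in t (crude but adequate here).
    ts = [t0 * (t1 / t0) ** (i / n) for i in range(n + 1)]
    vs = [math.exp(-lam * t) * f(t) for t in ts]
    return sum((ts[i + 1] - ts[i]) * (vs[i + 1] + vs[i]) / 2 for i in range(n))

x, r, lam = 2.0, 1.0, 0.5
exact = (r / x) * math.exp(-(x - r) * math.sqrt(2 * lam))
approx = laplace(lambda t: p_half(t, x, r), lam)
print(exact, approx)  # the two values agree closely
```

The geometric grid resolves the $t^{-3/2}$ factor near $t=0$; any quadrature of comparable accuracy would do.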
The density for $\sigma^{(v)}$ is expressed in terms of the densities $p_\mu(t;x)$'s (of different dimensions). Moreover, using the previous results for $\sigma$, we show the asymptotics of the tail probabilities for $\sigma^{(v)}$. Our main results are the following. \begin{thm} \label{1t:main} When $d=2,$ the density $p_0^{(v)}(t;x)$ for $\sigma^{(v)}$ is given by \begin{equation} \label{1e:main-1} \begin{split} p_0^{(v)}(t;x) = e^{-\la v,x \ra-\frac{1}{2}|v|^2t} & \Bigl\{ I_0(|v|r) p_0(t;x) \\ & + \sum_{n=1}^\infty n\; C_n^0\Bigl(\frac{\la v,x \ra}{|v|\cdot |x|}\Bigr) I_n(|v|r) \frac{|x|^n}{r^n}\; p_n(t;x) \Bigr\}. \end{split} \end{equation} When $d\geqq3,$ it is given by \begin{equation} \label{1e:main-2} \begin{split} p^{(v)}_\nu(t;x) = & 2^\nu \Gamma(\nu) e^{-\la v,x \ra-\frac{1}{2}|v|^2t} \\ & \times \sum_{n=0}^\infty (\nu+n) C_n^\nu\Bigl(\frac{\la v,x \ra}{|v|\;|x|}\Bigr) I_{\nu+n}(|v|r) \frac{|x|^n}{|v|^\nu r^{\nu+n}} \; p_{\nu+n}(t;x). \end{split} \end{equation} Here $I_\mu\; (\mu\geqq0)$ is the modified Bessel function of the first kind and $C_n^\nu$ is the Gegenbauer polynomial{\rm .} \end{thm} We refer to Magnus-Oberhettinger-Soni \cite{MOS} and Watson \cite{W} about the special functions. We note again that explicit expressions and the asymptotic behavior of $p_\nu(t;x)$ are known. For the asymptotic behavior of the tail probabilities, we show the following. To mention the result, we recall (\cite{BR, HM-I, U}) that \begin{equation*} p_0(t;x)=\frac{L(0)}{t(\log t)^2} (1+o(1)) \quad \text{and} \quad p_\nu(t;x)=\frac{L(\nu)}{t^{\nu+1}}(1+o(1))\quad (\nu>0) \end{equation*} holds as $t\to\infty$, where $L(0)=2\log\frac{|x|}{r}$ and, for $\nu>0$, \begin{equation*} L(\nu)= \frac{r^{2\nu}}{2^\nu \Gamma(\nu)} \Bigl( 1 - \Bigl(\frac{r}{|x|}\Bigr)^{2\nu}\Bigr). 
\end{equation*} \begin{thm} \label{1t:tail} Assume $|x|>r.$ Then{\rm ,} one has \begin{equation*} P(t<\sigma^{(v)}<\infty) = \frac{2L(0)}{|v|^2} I_0(|v|r) e^{-\la v,x \ra} \frac{e^{-\frac{1}{2}|v|^2t}}{t(\log t)^2} (1+o(1)) \end{equation*} as $t\to\infty$ when $d=2,$ and \begin{equation*} P(t<\sigma^{(v)}<\infty) = \frac{2^{\nu+1}L(\nu)\Gamma(\nu)}{|v|^2} \frac{I_\nu(|v|r)}{(|v|r)^{\nu}} e^{-\la v,x \ra} \frac{e^{-\frac{1}{2}|v|^2t}}{t^{\nu+1}}(1+o(1)) \end{equation*} when $d\geqq3.$ \end{thm} This paper is organized as follows. In the next section we give some estimates for the modified Bessel function and the Gegenbauer polynomial. In Section 3 we present an explicit form for the Laplace transform of $p^{(v)}_\nu(t;x)$ and, admitting it as proved, we give a proof of Theorem \ref{1t:main}. In Section 4 we show how to compute the Laplace transform. In the last Section 5 we prove Theorem \ref{1t:tail}. We can apply the results in this paper to a study of the Wiener sausage of the Brownian motion with drift. It will be discussed in a separate paper. \section{Preliminary estimates} In this section we show some estimates for the modified Bessel function $I_\mu$ and for the Gegenbauer polynomial $C_n^\nu$. We first show an estimate for $I_\mu$, whose series representation is \begin{equation*} I_\mu(\xi) = \sum_{m=0}^\infty \frac{(\xi/2)^{\mu+2m}}{\Gamma(m+1)\Gamma(m+\mu+1)}. \end{equation*} \begin{lemma} \label{2l:for-i} For $\mu\geqq0$ and $n\geqq1,$ one has \begin{equation} \label{2e:for-i} \xi^{-\mu}I_{\mu+n}(\xi) \leqq \frac{\xi^n}{2^{\mu+n}\Gamma(\mu+n+1)} e^\xi,\quad \xi>0. \end{equation} \end{lemma} \noindent{\it Proof}.\quad Note that $\Gamma(p+q)\geqq \Gamma(p+1) \Gamma(q)$ holds for $p\geqq0$ and $q\geqq1$, which can be seen from \begin{equation*} \frac{\Gamma(p+1)\Gamma(q)}{\Gamma(p+q)} = p B(p,q) \leqq p \int_0^1 x^{p-1} dx = 1, \quad p>0.
\end{equation*} Then we have \begin{align*} \xi^{-\mu} I_{\mu+n}(\xi) & \leqq \frac{\xi^n}{2^{\mu+n}} \sum_{m=0}^\infty \frac{(\xi/2)^{2m}}{(\Gamma(m+1))^2\Gamma(\mu+n+1)} \\ & \leqq \frac{\xi^n}{2^{\mu+n}\Gamma(\mu+n+1)} \Bigl( \sum_{m=0}^\infty \frac{(\xi/2)^m}{m!} \Bigr)^2, \end{align*} which shows \eqref{2e:for-i}. \qquad$\square$ \bigskip Next we give an estimate for the Gegenbauer polynomial $C_n^\nu$. When $\nu>0$, it is given by \begin{equation*} C_n^\nu(\xi) = \frac{1}{\Gamma(\nu)} \sum_{m=0}^{[n/2]} (-1)^m \frac{\Gamma(\nu+n-m)}{m! (n-2m)!} (2\xi)^{n-2m}, \end{equation*} which is characterized by the relation \begin{equation*} (1-2t\xi+t^2)^{-\nu} = \sum_{n=0}^\infty C_n^\nu(\xi) t^n. \end{equation*} When $\nu=0$, $C_0^0(\xi)=1$ and, when $n\geqq 1$, $C_n^0$ is given by \begin{equation*} C_n^0(\xi)=\sum_{m=0}^{[n/2]} (-1)^m \frac{\Gamma(n-m)}{\Gamma(m+1)\Gamma(n-2m+1)} (2\xi)^{n-2m}. \end{equation*} \begin{lemma} \label{2l:for-g} For $\alpha\in\bR$ with $|\alpha|\leqq 1,$ $\nu\geqq0$ and $n\geqq1,$ one has \begin{equation} \label{2e:for-g} |C_n^\nu(\alpha)| \leqq \rho_\nu \frac{4^n \Gamma(\nu+n)}{n!}, \end{equation} where $\rho_0=1$ and $\rho_\nu=(\Gamma(\nu))^{-1}$ for $\nu>0.$ \end{lemma} \noindent{\it Proof}.\quad When $\nu=0$, since $C_n^0(\cos\theta)=\frac{2}{n}\cos(n\theta)$, we have \begin{equation*} |C_n^0(\alpha)|\leqq\frac{4^n}{n}=\rho_0\frac{4^n \Gamma(n)}{n!}. \end{equation*} When $\nu>0$, we have \begin{equation*} \frac{|C_n^\nu(\alpha)|}{\Gamma(\nu+n)} \leqq \frac{1}{\Gamma(\nu)} \sum_{m=0}^{[n/2]} \frac{2^{n-2m} \Gamma(\nu+n-m)}{\Gamma(\nu+n)m! (n-2m)!}. \end{equation*} If $1\leqq m \leqq [n/2]$, it holds that \begin{equation*} \begin{split} \frac{\Gamma(\nu+n-m)}{\Gamma(\nu+n) (n-2m)!} & \leqq \frac{1}{(n-1)(n-2)\cdots(n-m)\cdot(n-2m)!} \\ & = \frac{(n-m-1)\cdots(n-(2m-1))}{(n-1)!} \leqq \frac{n^m}{n!}.
\end{split} \end{equation*} Hence we get \begin{equation*} \begin{split} \frac{|C_n^\nu(\alpha)|}{\Gamma(\nu+n)} & \leqq \frac{1}{\Gamma(\nu)} \sum_{m=0}^{[n/2]} \frac{1}{m!} \frac{n^m}{n!} 2^{n-2m} \\ & \leqq \frac{2^n}{\Gamma(\nu)n!} \sum_{m=0}^\infty \frac{1}{m!} \Bigl(\frac{n}{4}\Bigr)^m \leqq \frac{2^n}{\Gamma(\nu)n!} e^{\frac{n}{2}} \leqq \frac{4^n}{\Gamma(\nu)n!} \end{split} \end{equation*} because $e^{\frac{1}{4}}\leqq2$. \qquad$\square$ \section{Laplace transforms and proof of Theorem \ref{1t:main}} We first reduce the computation for $\sigma^{(v)}$ to that for the joint distribution of $(\sigma, B_\sigma)$, the first hitting time to the sphere and the hitting position of the standard Brownian motion. By the Cameron-Martin theorem, we easily see \begin{equation*} P(\sigma^{(v)} \leqq t) = e^{-\la v,x \ra-\frac12 |v|^2t} E[ e^{\la v, B_t \ra} \iti_{\{\sigma\leqq t\}}] \end{equation*} and \begin{equation*} E[e^{-\lambda \sigma^{(v)}}] = \lambda e^{-\la v,x \ra} \int_0^\infty e^{-(\lambda+\frac12 |v|^2)t} E[ e^{\la v, B_t \ra} \iti_{\{\sigma\leqq t\}}] dt, \end{equation*} where $E$ denotes the expectation with respect to $P$. Moreover, letting $\sF_t=\sigma\{B_s, s\leqq t\}$, we see by the strong Markov property of Brownian motion \begin{equation*} E[ e^{\la v, B_t \ra} \iti_{\{\sigma\leqq t\}}] = E[ E[ e^{\la v, B_t \ra} | \sF_\sigma] \iti_{\{\sigma\leqq t\}}] = E[ e^{\la v, B_\sigma \ra + \frac12|v|^2(t-\sigma)} \iti_{\{\sigma\leqq t\}}] \end{equation*} and \begin{equation*} E[e^{-\lambda\sigma^{(v)}}]=\lambda e^{-\la v,x \ra} \int_0^\infty e^{-\lambda t} E[ e^{\la v,B_\sigma \ra-\frac12 |v|^2\sigma} \iti_{\{\sigma\leqq t\}}] dt. \end{equation*} \indent From this identity we obtain the following \begin{prop} \label{2p:lap} For any $\lambda>0$, one has \begin{equation} \label{3e:lap} E[e^{-\lambda \sigma^{(v)}}] = e^{-\la v,x \ra} E[e^{\la v, B_\sigma \ra- (\lambda+\frac12 |v|^2)\sigma}]. 
\end{equation} \end{prop} \noindent{\it Proof}.\quad We have shown \begin{equation*} E[e^{-\lambda \sigma^{(v)}}] = \lambda e^{-\la v,x \ra} \int_0^\infty e^{-\lambda t}dt \int_0^t E[e^{\la v,B_\sigma \ra-\frac12 |v|^2\sigma} |\sigma=s] p_\nu(s;x) ds, \end{equation*} where $p_\nu(s;x)$ is the density of $\sigma$ (see Sect. 1). Changing the order of integrations, we obtain \begin{equation*} \begin{split} E[e^{-\lambda \sigma^{(v)}}] & = \lambda e^{-\la v,x \ra} \int_0^\infty E[e^{\la v,B_\sigma \ra-\frac12 |v|^2\sigma} |\sigma=s] p_\nu(s;x) ds \int_s^\infty e^{-\lambda t}dt \\ & = e^{-\la v,x \ra} E[ e^{\la v,B_\sigma\ra- (\lambda+\frac12 |v|^2)\sigma}]. \qquad \qquad \square \end{split} \end{equation*} \medskip For the right hand side of \eqref{3e:lap}, we show the following explicit expression. Denoting by $K_\mu$ the modified Bessel function of the second kind (the Macdonald function), we define the function $Z_\mu^{(v),\lambda}\ (\mu\geqq 0)$ by \begin{align*} & Z_\mu^{(v),\lambda}(\xi,\eta) = \frac{K_\mu(\xi\sqrt{2\lambda+|v|^2})} {K_\mu(\eta\sqrt{2\lambda+|v|^2})}\qquad \text{if\ $\xi>\eta>0$}\\ \intertext{and} & Z_\mu^{(v),\lambda}(\xi,\eta) = \frac{I_\mu(\xi\sqrt{2\lambda+|v|^2})} {I_\mu(\eta\sqrt{2\lambda+|v|^2})}\qquad \text{if\ $\eta>\xi>0$}. \end{align*} Since $K_\mu$ is decreasing and $I_\mu$ is increasing on $(0,\infty)$, $Z_\mu^{(v),\lambda}\leqq1$. \begin{prop} \label{3p:joint} Let $\lambda>0.$ When $d=2,$ one has \begin{equation*} E[e^{\la v,B_\sigma \ra-\lambda\sigma}] = I_0(|v|r) Z_0^{(0),\lambda}(|x|,r) + \sum_{n=1}^\infty n\; C_n^0(\alpha) I_n(|v|r) Z_n^{(0),\lambda}(|x|,r), \end{equation*} where $\alpha=\frac{\la v,x \ra}{|v|\cdot|x|}.$ When $d\geqq3,$ one has \begin{equation*} E[e^{\la v,B_\sigma \ra-\lambda\sigma}] = 2^\nu \Gamma(\nu) \sum_{n=0}^\infty (\nu+n) C_n^\nu(\alpha) \frac{I_{\nu+n}(|v|r)}{(|v|\cdot|x|)^\nu} Z_{\nu+n}^{(0),\lambda}(|x|,r). 
\end{equation*} \end{prop} Combining this proposition with \eqref{3e:lap}, we obtain \begin{align*} & E[e^{-\lambda\sigma^{(v)}}] = \int_0^\infty e^{-\lambda t} p^{(v)}(t;x) dt \\ =\ & e^{-\la v,x \ra} \Bigl\{ I_0(|v|r) Z_0^{(v),\lambda}(|x|,r) + \sum_{n=1}^\infty n\; C_n^0(\alpha) I_n(|v|r) Z_n^{(v),\lambda}(|x|,r) \Bigr\} \\ \intertext{when $d=2,$ and} & E[e^{-\lambda\sigma^{(v)}}] = e^{-\la v,x \ra} 2^\nu \Gamma(\nu) \sum_{n=0}^\infty (\nu+n) C_n^\nu(\alpha) \frac{I_{\nu+n}(|v|r)}{(|v|\cdot|x|)^\nu} Z_{\nu+n}^{(v),\lambda}(|x|,r) \end{align*} when $d\geqq3$. We postpone a proof of Proposition \ref{3p:joint} to the next section and give a proof of Theorem \ref{1t:main}. It is well known (cf. \cite{K}) that the density $p_\mu(s;x)$ of $\sigma_\mu$, the first hitting time of a Bessel process with index $\mu$ starting from $|x|$, is characterized by \begin{equation} \label{3e:lap-known} E[e^{-\lambda \sigma_\mu}] = \int_0^\infty e^{-\lambda s} p_\mu(s;x) ds = \frac{r^{\mu}}{|x|^\mu} Z_\mu^{(0),\lambda}(|x|,r). \end{equation} We use the same notation for the density since our main concern is the special case where the index $\mu$ is a half integer, and there is no fear of confusion. To prove Theorem \ref{1t:main}, we compute the Laplace transforms of the right hand sides of \eqref{1e:main-1} and \eqref{1e:main-2} by changing the order of the integrations and the infinite sums. Using the estimates given in the previous section, we have for $\nu\geqq0$ \begin{equation} \label{3e:joint} \begin{split} \sum_{n=1}^\infty & (\nu+n) |C_n^\nu(\alpha)| I_{\nu+n}(|v|r) \frac{|x|^n}{|v|^\nu r^{\nu+n}} \int_0^\infty e^{-(\lambda+\frac{1}{2}|v|^2)t} p_{\nu+n}(t;x) dt \\ & \leqq \frac{\rho_\nu r^\nu}{2^\nu |x|^\nu} e^{|v|r} \sum_{n=1}^\infty \frac{(\nu+n) \Gamma(\nu+n) (2|v|r)^n} {n! \Gamma(\nu+n+1)} Z_{\nu+n}^{(v),\lambda}(|x|,r).
\end{split} \end{equation} Since $Z_\mu^{(v),\lambda}\leqq1$, the above is bounded by \begin{equation*} \frac{\rho_\nu r^\nu}{2^\nu|x|^\nu} e^{|v|r} \sum_{n=0}^\infty \frac{(2|v|r)^n}{n!} = \frac{\rho_\nu r^\nu}{2^\nu|x|^\nu} e^{3|v|r}. \end{equation*} \indent Hence, we may apply Fubini's theorem and see, from \eqref{3e:lap-known}, that the Laplace transforms of the right hand sides of \eqref{1e:main-1} and \eqref{1e:main-2} are equal to those of $p^{(v)}_\nu(t;x)$ in both cases. We have now shown Theorem \ref{1t:main}, admitting Proposition \ref{3p:joint} as proved. \section{Proof of Proposition \ref{3p:joint}} In order to prove Proposition \ref{3p:joint}, we use the skew-product representation of Brownian motions. Let $R=\{R_t\}_{t\geqq0}$ be a $d$-dimensional Bessel process (with index $\nu=\frac{d-2}{2}$) and $\theta=\{\theta_t\}_{t\geqq0}$ be a Brownian motion on the unit sphere $S^{d-1}=S_1^{d-1}$ with $\theta_0=\frac{x}{|x|}$, and assume that $R$ and $\theta$ are independent. Recall that, embedding $S^{d-1}$ in $\bR^d$, we can realize $\theta$ as a solution of a stochastic differential equation, which is so-called Stroock's representation of a spherical Brownian motion. Set $S_t=\int_0^t(R_s)^{-2}ds$. Then, $\{R_t\theta_{S_t}\}_{t\geqq0}$ is a $d$-dimensional Brownian motion. Hence, we have \begin{equation*} E[e^{-\lambda\sigma+\la v,B_\sigma \ra}] = \int_0^\infty \int_0^\infty e^{-\lambda t} E_{\frac{x}{|x|}}^\theta[e^{r\la v,\theta_u \ra}] P_{\nu,|x|}(\tau\in dt, S_\tau\in du), \end{equation*} where $E_{\theta_0}^\theta$ denotes the expectation with respect to the probability law of $\theta$ starting from $\theta_0$, $P_{\nu,|x|}$ is the probability law of $\{R_t\}$ and $\tau$ is the first hitting time to $r$ of $\{R_t\}$. It is known (cf. 
\cite{BS} p.407) that \begin{align*} & E_{\nu,|x|}[ e^{-\alpha\tau-\frac12 \beta^2S_\tau}] = \frac{|x|^{-\nu}K_{\sqrt{\nu^2+\beta^2}}(|x|\sqrt{2\alpha})} {r^{-\nu}K_{\sqrt{\nu^2+\beta^2}}(r\sqrt{2\alpha})} \qquad \text{\rm if}\quad |x|>r \\ \intertext{and} & E_{\nu,|x|}[ e^{-\alpha\tau-\frac12 \beta^2S_\tau}] = \frac{|x|^{-\nu}I_{\sqrt{\nu^2+\beta^2}}(|x|\sqrt{2\alpha})} {r^{-\nu}I_{\sqrt{\nu^2+\beta^2}}(r\sqrt{2\alpha})} \qquad \text{\rm if}\quad |x|<r, \end{align*} where $E_{\nu,|x|}$ is the expectation with respect to $P_{\nu,|x|}$. We obtain Proposition \ref{3p:joint} if we show the following. We can justify the change of the order of the integration and the infinite sum in the same way as in \eqref{3e:joint}. \begin{prop} \label{4p:sphere} Let $\xi>0$. Then{\rm ,} when $d=2,$ one has \begin{equation} \label{4e:sphere-1} E_{\theta_0}^\theta[ e^{\xi\la v,\theta_t\ra}] = I_0(|v|\xi) + \sum_{n=1}^\infty n\; C_n^0(\alpha) e^{-\frac{1}{2}n^2t} I_n(|v|\xi) \end{equation} and{\rm ,} when $d\geqq3,$ \begin{equation} \label{4e:sphere-2} E^\theta_{\theta_0}[e^{\xi\la v,\theta_t\ra}]=2^\nu \Gamma(\nu) \sum_{n=0}^\infty (\nu+n) C_n^\nu(\alpha) e^{-\frac12 n(n+2\nu)t} \frac{I_{\nu+n}(|v|\xi)}{(|v|\xi)^{\nu}}. \end{equation} \end{prop} We see from this proposition that the Gegenbauer polynomial comes into our story through the following formula (cf. \cite[p.227]{MOS}): for $\alpha\in\bR,\xi>0,\mu>0$, \begin{equation} \label{4e:gegen} e^{\alpha\xi}=2^\mu \Gamma(\mu) \sum_{n=0}^\infty (\mu+n) C_n^\mu(\alpha) \xi^{-\mu} I_{\mu+n}(\xi).
\end{equation} \indent We first show that $f_\nu(t,\xi)=E^\theta_{\theta_0}[e^{\xi\la v,\theta_t\ra}]$ satisfies \begin{equation} \label{4e:heat} \frac{\pa f_\nu}{\pa t}=-\frac12 \xi^2 \frac{\pa^2f_\nu}{\pa\xi^2} -\frac{d-1}{2}\xi\frac{\pa f_\nu}{\pa\xi}+ \frac12 |v|^2\xi^2 f_\nu, \quad t>0,\ \xi>0, \end{equation} together with the boundary conditions \begin{equation} \label{4e:bdry} f_\nu(0,\xi)=e^{\xi\la v,\theta_0 \ra}, \quad f_\nu(t,0)=1, \quad \frac{\pa f_\nu}{\pa\xi}(t,0)=\la v,\theta_0 \ra e^{-\frac{d-1}{2}t}. \end{equation} \indent For this purpose, we recall Stroock's representation of spherical Brownian motion (cf. \cite{S}). $\theta$ may be realized as a solution of the stochastic differential equation based on a $d$-dimensional Brownian motion $\{w_s=(w_s^1,w_s^2,...,w_s^d)\}_{s\geqq0}$ which is given by \begin{equation*} d\theta_s^i = \sum_{j=1}^d (\delta_{ij}-\theta_s^i \theta_s^j) \circ dw_s^j, \qquad i=1,2,...,d. \end{equation*} Then, by a straightforward computation using It\^o's formula, we can show \eqref{4e:heat}. It is easy to see \eqref{4e:bdry}. For simplicity we set $\beta=d-1$ and consider the function $g_\nu$ given by \begin{equation*} g_\nu(t,\xi)=f_\nu\Bigl(2t,\frac{\xi}{|v|}\Bigr). \end{equation*} Then $g_\nu$ is a smooth function which satisfies \begin{align} & \frac{\pa g_\nu}{\pa t}=-\xi^2 \frac{\pa^2g_\nu}{\pa \xi^2} -\beta \xi \frac{\pa g_\nu}{\pa\xi} + \xi^2g_\nu, \qquad t>0,\ \xi>0, \label{4e:diff-eq} \intertext{and} & g_\nu(0,\xi)=e^{\alpha\xi},\qquad g_\nu(t,0)=1, \qquad \frac{\pa g_\nu}{\pa\xi}(t,0)= \alpha e^{-\beta t}.\label{4e:bdry-cond} \end{align} \indent If $u(t,\xi)=e^{-\lambda t}\phi(\xi)$ satisfies \eqref{4e:diff-eq}, we should have \begin{equation*} \xi^2\phi''(\xi) + \beta\xi\phi'(\xi)-(\xi^2+\lambda)\phi(\xi)=0.
\end{equation*} A fundamental system of solutions of this second order differential equation is given by $\xi^{-\nu}I_{\sqrt{\lambda+\nu^2}}(\xi)$ and $\xi^{-\nu}K_{\sqrt{\lambda+\nu^2}}(\xi)$, where $\nu=\frac{\beta-1}{2}=\frac{d-2}{2}.$ For the function $\phi$ to be smooth at $\xi=0$, we should choose $\xi^{-\nu}I_{\sqrt{\lambda+\nu^2}}(\xi)$. Moreover, $n=\sqrt{\lambda+\nu^2}-\nu$ should be a non-negative integer and $\lambda=n(n+2\nu)$. The following lemma is easily shown and we omit the proof. \begin{lemma} \label{4l:bessel} {\rm (1)}\ The function $\varphi_{\nu,n}(\xi)=\xi^{-\nu}I_{\nu+n}(\xi)$ satisfies \begin{equation*} \xi^2\varphi_{\nu,n}''(\xi)+\beta \xi \varphi_{\nu,n}'(\xi) - \xi^2 \varphi_{\nu,n}(\xi) = n(n+2\nu) \varphi_{\nu,n}(\xi), \qquad \xi>0. \end{equation*} {\rm (2)}\ One has \begin{equation*} \varphi_{\nu,1}'(0)=\frac{1}{2^{\nu+1}\Gamma(\nu+2)} \qquad \text{and} \qquad \varphi_{\nu,n}'(0)=0 \quad (n\ne1). \end{equation*} \end{lemma} The following proposition immediately implies Proposition \ref{4p:sphere}. \begin{prop} \label{4p:expl-rep} When $d=2,$ one has \begin{equation} \label{4e:expl-rep1} g_0(t,\xi)=I_0(\xi) + \sum_{n=1}^\infty n\; C_n^0(\alpha) e^{-n^2t} I_n(\xi),\quad t\geqq0, \ \xi\geqq0 \end{equation} and{\rm ,} when $d\geqq3,$ \begin{equation} \label{4e:expl-rep2} g_\nu(t,\xi)= 2^\nu \Gamma(\nu) \sum_{n=0}^\infty (\nu+n) C_n^\nu(\alpha) e^{-n(n+2\nu)t} \xi^{-\nu} I_{\nu+n}(\xi). \end{equation} \end{prop} \noindent{\it Proof}.\quad First of all we note that the sums on the right hand sides of \eqref{4e:expl-rep1} and \eqref{4e:expl-rep2} are absolutely convergent at each $(t,\xi)$, which is seen from \eqref{2e:for-i} and \eqref{2e:for-g} in a similar way to \eqref{3e:joint}. Letting $\varphi_{\nu,n}$ be the function defined in Lemma \ref{4l:bessel}, we set \begin{equation*} h_{\nu}(t;\xi) = \sum_{n=1}^\infty (\nu+n) C_n^\nu(\alpha) e^{-n(n+2\nu)t} \varphi_{\nu,n}(\xi).
\end{equation*} We have shown that the sum on the right hand side is absolutely convergent. Moreover, noting \begin{equation*} \varphi'_{\nu,n}(\xi)=\frac{n}{\xi}\varphi_{\nu,n}(\xi) +\varphi_{\nu,n+1}(\xi), \end{equation*} we see, in a similar way to \eqref{3e:joint}, that \begin{equation*} \sum_{n=1}^\infty (\nu+n) C_n^\nu(\alpha) e^{-n(n+2\nu)t} \varphi_{\nu,n}'(\xi) \ \ \text{\rm and} \ \ \sum_{n=1}^\infty (\nu+n) C_n^\nu(\alpha) e^{-n(n+2\nu)t} \varphi_{\nu,n}''(\xi) \end{equation*} converge uniformly on compact sets in $\xi\in(0,\infty)$ and are equal to $\frac{\pa}{\pa\xi}h_\nu(t,\xi)$ and $\frac{\pa^2}{\pa\xi^2}h_\nu(t,\xi)$, respectively. Next we look at $h_\nu(t,\xi)$ as a function of $t>0$. By \eqref{2e:for-i} and \eqref{2e:for-g} we have \begin{align*} & \sum_{n=1}^\infty (\nu+n) |C_n^\nu(\alpha)| n(n+2\nu) e^{-n(n+2\nu)t} \varphi_{\nu,n}(\xi) \\ \leqq\ & \rho_\nu \sum_{n=1}^\infty (\nu+n) \frac{4^n\Gamma(\nu+n)}{n!} n(n+2\nu) \frac{\xi^n}{2^{\nu+n}\Gamma(\nu+n+1)} e^\xi \\ =\ & \frac{\rho_\nu}{2^\nu} e^\xi \Bigl\{ \sum_{n=1}^\infty \frac{(n-1)(2\xi)^n}{(n-1)!} + \sum_{n=1}^\infty \frac{(2\nu+1)(2\xi)^n}{(n-1)!} \Bigr\} \\ =\ & \frac{\rho_\nu \xi^2}{2^{\nu-2}} e^{3\xi} + \frac{\rho_\nu(2\nu+1)\xi}{2^{\nu-1}} e^{3\xi} \end{align*} and we may differentiate term by term to obtain \begin{equation*} \frac{\pa}{\pa t}h_\nu(t,\xi) = -\sum_{n=1}^\infty (\nu+n) C_n^\nu(\alpha) e^{-n(n+2\nu)t} n(n+2\nu) \varphi_{\nu,n}(\xi). \end{equation*} \indent Combining the identities above, we see that the function $g_\nu(t,\xi)$ given by \eqref{4e:expl-rep1} and \eqref{4e:expl-rep2} satisfies \eqref{4e:diff-eq}. The boundary condition \eqref{4e:bdry-cond} in the case of $d\geqq3$ may be checked by \eqref{4e:gegen} and the facts $C_0^\nu(\alpha)=1$ and $C_1^\nu(\alpha)=2\nu\alpha$ (cf. \cite[p.218]{MOS}).
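The expansion \eqref{4e:gegen} invoked here, which by $\Gamma(\nu+1)=\nu\Gamma(\nu)$ may be written as $e^{\alpha\xi}=2^\nu\Gamma(\nu)\sum_{n\geqq0}(\nu+n)C_n^\nu(\alpha)\,\xi^{-\nu}I_{\nu+n}(\xi)$, is easy to confirm numerically. A minimal sketch using only the standard library; \texttt{bessel\_i} and \texttt{gegenbauer} are ad-hoc series and recurrence implementations introduced for this check, not library routines:

```python
import math

def bessel_i(mu, x, terms=60):
    # modified Bessel function I_mu via its power series
    return sum((x / 2) ** (mu + 2 * k) / (math.gamma(k + 1) * math.gamma(mu + k + 1))
               for k in range(terms))

def gegenbauer(n, nu, a):
    # Gegenbauer polynomial C_n^nu(a) via the standard three-term recurrence
    c0, c1 = 1.0, 2 * nu * a
    if n == 0:
        return c0
    for m in range(2, n + 1):
        c0, c1 = c1, (2 * (m + nu - 1) * a * c1 - (m + 2 * nu - 2) * c0) / m
    return c1

def expansion(nu, a, xi, nmax=40):
    # right-hand side of the Gegenbauer expansion of e^{a xi}
    return 2 ** nu * math.gamma(nu) * sum(
        (nu + n) * gegenbauer(n, nu, a) * xi ** (-nu) * bessel_i(nu + n, xi)
        for n in range(nmax))

assert abs(expansion(0.5, 0.3, 1.7) - math.exp(0.3 * 1.7)) < 1e-9
```

The recurrence $m\,C_m^\nu=2(m+\nu-1)\alpha\,C_{m-1}^\nu-(m+2\nu-2)C_{m-2}^\nu$ together with $C_0^\nu(\alpha)=1$, $C_1^\nu(\alpha)=2\nu\alpha$ is the standard one for Gegenbauer polynomials.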
For a check when $d=2$, we rewrite \eqref{4e:gegen} as \begin{align*} e^{\alpha\xi} & = 2^\mu \Gamma(\mu+1) \xi^{-\mu} I_\mu(\xi) + 2^\mu \sum_{n=1}^\infty \Gamma(\mu) (\mu+n) C_n^\mu(\alpha) \xi^{-\mu} I_{\mu+n}(\xi) \\ & = 2^\mu \Gamma(\mu+1) \Bigl\{ \xi^{-\mu} I_\mu(\xi) + \sum_{n=1}^\infty (\mu+n) \frac{C_n^\mu(\alpha)}{\mu} \xi^{-\mu} I_{\mu+n}(\xi) \Bigr\}, \end{align*} which holds for any $\mu>0$. Note $\frac{C_n^\mu(\alpha)}{\mu}\to C_n^0(\alpha)$ as $\mu\downarrow0$. Then, by using \eqref{2e:for-i} and \eqref{2e:for-g}, we can show that we may apply the dominated convergence theorem and obtain \begin{equation*} e^{\alpha\xi} = I_0(\xi) + \sum_{n=1}^\infty n\; C_n^0(\alpha) I_n(\xi), \end{equation*} which is exactly $g_0(0,\xi)=e^{\alpha\xi}.$ Another boundary condition $g_0(t,0)=1$ follows from $I_0(0)=1$, and the remaining one $\frac{\pa g_0}{\pa\xi}(t,0)=\alpha e^{-t}$ follows from the formula $C_1^0(\alpha)=2\alpha$. We have now completed the proof of Proposition \ref{4p:expl-rep}. \begin{rem} The function $z^{-\nu}I_{\nu+n}(z)$ may be regarded as a holomorphic function on $\bC$. Hence, by using \eqref{2e:for-i} and \eqref{2e:for-g}, we can show that the functions \begin{equation*} E[e^{z\la v,\theta_t \ra/|v|}] \quad \text{and} \quad \sum_{n=1}^\infty (\nu+n) C_n^\nu(\alpha) e^{-\frac{1}{2}n(n+2\nu)t} z^{-\nu} I_{\nu+n}(z) \end{equation*} are holomorphic in $z\in\bC$. From this we obtain the Fourier-Laplace transform of the joint distribution of $(\sigma,B_\sigma)$. For example, when $d\geqq3$, we can show \begin{equation*} E[e^{i\la v,B_\sigma\ra - \lambda\sigma}] = 2^\nu \Gamma(\nu) \sum_{n=0}^\infty i^n(\nu+n) C_n^\nu(\alpha) \frac{J_{\nu+n}(|v|r)}{(|v|\cdot|x|)^\nu} Z^{(0),\lambda}_{\nu+n}(|x|,r), \end{equation*} where $J_\mu$ is the usual Bessel function. \end{rem} \section{Proof of Theorem \ref{1t:tail}} We set \begin{equation*} f^{(v)}(t;x)=P(t<\sigma^{(v)}<\infty)= \int_t^\infty p^{(v)}(s;x)ds.
\end{equation*} In order to apply Theorem \ref{1t:main}, we need to change the order of the integration and the summation. For this purpose, we note from \eqref{2e:for-i} and \eqref{2e:for-g} \begin{align*} & \sum_{n=1}^\infty (\nu+n) |C_n^\nu(\alpha)| \frac{I_{\nu+n}(|v|r)|x|^n}{|v|^\nu r^{\nu+n}} \int_t^\infty e^{-\frac12 |v|^2s} p_{\nu+n}(s;x)ds \\ \leqq\ & \sum_{n=1}^\infty (\nu+n) \rho_\nu \frac{4^n \Gamma(\nu+n)}{n!} \frac{(|v|r)^ne^{|v|r}}{2^{\nu+n}\Gamma(\nu+n+1)} \frac{|x|^n}{r^n} e^{-\frac{1}{2}|v|^2t} \\ \leqq\ & \frac{\rho_\nu}{2^\nu} e^{|v|r} \sum_{n=1}^\infty \frac{(2|v|\cdot|x|)^n}{n!} \leqq \frac{\rho_\nu}{2^\nu} e^{3|v|r}. \end{align*} Then we can apply Fubini's theorem and obtain from Theorem \ref{1t:main} \begin{equation*} f^{(v)}(t;x)=e^{-\la v,x\ra}I_0(|v|r) \int_t^\infty e^{-\frac{1}{2}|v|^2s} p_0(s;x)ds + f_0(t) \end{equation*} when $d=2$, and when $d\geqq3$ \begin{equation*} f^{(v)}(t;x)=e^{-\la v,x \ra} 2^\nu \Gamma(\nu+1) \frac{I_\nu(|v|r)}{|v|^\nu r^\nu} \int_t^\infty e^{-\frac{1}{2}|v|^2s} p_\nu(s;x)ds + f_\nu(t), \end{equation*} where the second terms on the right-hand sides are given by \begin{equation*} \begin{split} f_\nu(t) = & e^{-\la v,x \ra} 2^\nu \Gamma(\nu+1) \\ & \times \sum_{n=1}^\infty (\nu+n) C_n^\nu(\alpha) \frac{|x|^n I_{\nu+n}(|v|r)}{|v|^\nu r^{\nu+n}} \int_t^\infty e^{-\frac{1}{2}|v|^2s}p_{\nu+n}(s;x)ds. \end{split} \end{equation*} \indent We first prove the theorem when $\nu>0$. We have \begin{equation*} p_\nu(t;x) = \frac{L(\nu)}{t^{\nu+1}} (1+o(1)) \end{equation*} and, by L'Hospital's rule, \begin{equation} \label{4e:tail} \int_t^\infty e^{-\frac{1}{2}|v|^2s} p_\nu(s;x)ds = \frac{2L(\nu)}{|v|^2} t^{-\nu-1} e^{-\frac{1}{2}|v|^2t} (1+o(1)). \end{equation} Note that this identity holds for any $\nu>0$. In order to show that $f_\nu(t)$ is negligible when $\nu>0$, we use the argument in Sect.2 of \cite{HM-E}.
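The asymptotic relation \eqref{4e:tail} is also easy to confirm numerically. A minimal sketch using only the standard library, with $p_\nu$ replaced by its leading term $s^{-\nu-1}$, the constant $L(\nu)$ dropped from both sides, and $|v|^2=2$ chosen for simplicity:

```python
import math

def tail_integral(t, nu, c=1.0, cutoff=80.0, steps=100000):
    # midpoint rule for \int_t^{t+cutoff} e^{-c s} s^{-nu-1} ds;
    # the remainder beyond the cutoff is negligible
    h = cutoff / steps
    return sum(math.exp(-c * (t + (i + 0.5) * h)) * (t + (i + 0.5) * h) ** (-nu - 1) * h
               for i in range(steps))

nu, t = 0.5, 200.0
lhs = tail_integral(t, nu)
rhs = t ** (-nu - 1) * math.exp(-t)   # (2/|v|^2) t^{-nu-1} e^{-|v|^2 t/2} with |v|^2 = 2
assert abs(lhs / rhs - 1) < 0.02      # ratio tends to 1 as t grows
```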
Then we obtain \begin{equation*} \begin{split} \int_t^\infty e^{-\frac{1}{2}|v|^2s}p_{\nu+n}(s;x)ds & \leqq e^{-\frac{1}{2}|v|^2t} P(t<\sigma^{(\nu+n)}<\infty) \\ & \leqq e^{-\frac{1}{2}|v|^2t} E_{\nu+n,|x|}[(R_t)^{-2(\nu+n)}], \end{split} \end{equation*} where $E_{\nu+n,|x|}$ denotes the expectation with respect to the probability law of the Bessel process $\{R_t\}_{t\geqq0}$ with index $\nu+n$ and starting from $|x|$. Using the explicit expression of the transition density of the Bessel process, we obtain \begin{equation*} \begin{split} E_{\nu+n,|x|}[(R_t)^{-2(\nu+n)}] & = \frac{1}{(2t)^{\nu+n}} e^{-\frac{|x|^2}{2t}} \sum_{m=0}^\infty \frac{|x|^{2m}}{\Gamma(\nu+n+m+1)(2t)^m} \\ & \leqq \frac{1}{(2t)^{\nu+n}} e^{-\frac{|x|^2}{2t}} \sum_{m=0}^\infty \frac{|x|^{2m}}{\Gamma(\nu+n+1)\Gamma(m+1)(2t)^m} \\ & = \frac{1}{\Gamma(\nu+n+1)(2t)^{\nu+n}}. \end{split} \end{equation*} Hence, by using \eqref{2e:for-i} and \eqref{2e:for-g} again, we obtain for $n\geqq1$ and $t\geqq1$ \begin{equation*} \begin{split} & t^{\nu+1} e^{\frac12 |v|^2t} (\nu+n) |C_n^\nu(\alpha)| \frac{|x|^n I_{\nu+n}(|v|r)}{|v|^\nu r^{\nu+n}} \int_t^\infty e^{-\frac{1}{2}|v|^2s} p_{\nu+n}(s;x)ds \\ & \leqq \frac{\rho_\nu}{4^\nu} e^{|v|r} \frac{(|v|\cdot|x|)^n}{n! \Gamma(\nu+n+1)}. \end{split} \end{equation*} The quantity on the right hand side is independent of $t\geqq1$ and is summable in $n$. Therefore, since \eqref{4e:tail} holds when we replace $\nu$ by $\nu+n$, we can apply the dominated convergence theorem and obtain \begin{equation*} \lim_{t\to\infty} t^{\nu+1}e^{\frac12 |v|^2t} f_\nu(t)=0. \end{equation*} \indent When $d=2$, we have \begin{equation*} p_0(t;x)=\frac{L(0)}{t(\log t)^2}(1+o(1)), \qquad L(0)=2\log\frac{|x|}{r}, \end{equation*} and \begin{equation*} \int_t^\infty e^{-\frac{1}{2}|v|^2s} p_0(s;x)ds = \frac{2L(0)}{|v|^2} \frac{1}{t(\log t)^2} e^{-\frac{1}{2}|v|^2t} (1+o(1)).
\end{equation*} In the same way as in the case of $\nu>0$, we can show by the dominated convergence theorem \begin{equation*} \lim_{t\to\infty} t(\log t)^2 e^{\frac{1}{2}|v|^2t}f_0(t)=0 \end{equation*} and we obtain the assertion of Theorem \ref{1t:tail} also in this case. \begin{rem} The estimate for the tail probability $P(t<\sigma<\infty)$ was first given by Byczkowski and Ryznar \cite{BR}. We need an explicit upper bound here. \end{rem} \section*{Acknowledgements} The authors thank Professor Tatsuo Iguchi for his valuable suggestions.
\section{Introduction\label{secIntro}} Meander curves in the plane emerge as shooting curves of parabolic PDEs in one space dimension \cite{FiedlerRocha1999-SturmPermutations}, as trajectories of Cartesian billiards \cite{FiedlerCastaneda2012-Meander}, and as representations of elements of Temperley-Lieb algebras \cite{DiFrancescoGolinelliGuitter1997-Meander}. In general, we regard closed meanders as the pattern created by one or several disjoint closed Jordan curves in the plane as they intersect a given line transversely. The pattern of intersections remains the same when we deform the curves into collections of arcs with endpoints on the horizontal axis, see figure \ref{figMeanderHomotopy}. They induce a permutation on the set of intersection points with the horizontal axis. Starting from the permutation, the inverse problem raises two questions: First, is a given permutation a meander permutation, i.e.~is it generated by a meander? Second, if yes, is it generated by a single curve or, more generally, of how many curves is the meander composed? We study the subclass of \emph{bi-rainbow meanders}, which are composed of several non-branched families of nested arcs above and below the axis, see figure \ref{figRainbowMeander} and definition \ref{defRainbowMeander}. This subclass also appears naturally as representations of seaweed algebras \cite{CollMagnantWang2012-Meander}.
For less than four upper families, the number of curves of the meander is given as the greatest common divisor of expressions in the sizes of the families, \[ \begin{array}{lcl} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1) &=& \ensuremath{\alpha}_1, \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2) &=& \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1, \ensuremath{\alpha}_2), \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ensuremath{\alpha}_3) &=& \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1+\ensuremath{\alpha}_2,\ensuremath{\alpha}_2+\ensuremath{\alpha}_3), \end{array} \] see (\ref{eqGcdOfRainbows}) and \cite{FiedlerCastaneda2012-Meander}. It is tempting to conjecture the existence of similar ``closed'' expressions for general bi-rainbow meanders. However, in section \ref{secNoGcd}, we prove that there are severe obstructions: \begin{trivlist} \item[\hskip \labelsep{\bfseries Theorem\ \ref{thmNoGcd}}] \itshape Let $n\ge4$ be given. Then there do not exist homogeneous polynomials $f_1,f_2 \in \mathds{Z}[x_1,\ldots,x_n]$ of arbitrary degree with integer coefficients such that the number of connected components $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of every bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ is given by $\mathop{\gcd}\nolimits(f_1(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n),f_2(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n))$. In other words: to every choice of polynomials $f_1, f_2$, we find a counterexample. \end{trivlist} After this partly negative result, we could look for more complicated formulae. Instead, we shall shift our viewpoint a little. We argue that the $\mathop{\gcd}\nolimits$ is an abbreviation for the Euclidean algorithm rather than an ``explicit'' expression.
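The closed expressions above for at most three upper families can be cross-checked by directly following the curves. A sketch (function name hypothetical), assuming the straightforward encoding of $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ with the upper rainbow families placed side by side over a single lower rainbow:

```python
from math import gcd

def components(alphas):
    """Number of curves of the rainbow meander RM(a1,...,an): upper nested
    families of sizes a1,...,an over points 1..2*sum(alphas), one full lower
    rainbow (hypothetical direct encoding)."""
    total = 2 * sum(alphas)
    upper, pos = {}, 0
    for a in alphas:
        for k in range(a):  # family occupies positions pos+1 .. pos+2a, nested
            i, j = pos + 1 + k, pos + 2 * a - k
            upper[i], upper[j] = j, i
        pos += 2 * a
    lower = {k: total + 1 - k for k in range(1, total + 1)}
    seen, count = set(), 0
    for start in range(1, total + 1):
        if start in seen:
            continue
        count += 1
        p = start
        while p not in seen:  # follow the curve: upper arc, then lower arc
            seen.add(p)
            seen.add(upper[p])
            p = lower[upper[p]]
    return count

assert all(components([a]) == a for a in range(1, 6))
assert all(components([a, b]) == gcd(a, b) for a in range(1, 7) for b in range(1, 7))
assert all(components([a, b, c]) == gcd(a + b, b + c)
           for a in range(1, 5) for b in range(1, 5) for c in range(1, 5))
```

The traversal marks both endpoints of every upper arc it crosses, so each closed curve is counted exactly once.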
The Euclidean algorithm has logarithmic complexity: It requires $\ensuremath{\mathop{\mathcal{O}\strut}}(\log\ensuremath{\alpha}_1 + \log\ensuremath{\alpha}_2)$ steps to determine $\mathop{\gcd}\nolimits(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2)$. Conversely, any formula for the number $\ensuremath{\mathop{\mathcal{Z}\strut}}$ of connected components also provides a formula for the $\mathop{\gcd}\nolimits$. Therefore, whatever ``closed'' formula we find, it cannot be of smaller complexity. In section \ref{secAlgorithms} we provide algorithms which calculate the number of connected components by nose retractions, which we introduce in section \ref{secNoseRetractions}. They have a structure very similar to the Euclidean algorithm. They also have the same logarithmic complexity. Although the search for exact $\mathop{\gcd}\nolimits$ expressions might be futile, we still find a $\mathop{\gcd}\nolimits$-like interpretation of the number of connected components of meanders. \pdfbookmark[2]{Acknowledgements}{secAcknowledgements} \subsection*{Acknowledgements} Both authors have been supported by the Collaborative Research Centre 647 ``Space--Time--Matter'' of the German Research Foundation (DFG). We thank Bernold Fiedler, Pablo Casta\~neda, and Vincent Trageser for useful discussions and encouragement.
\section{Meander curves\label{secMeanders}} \begin{figure} \centering \begin{tikzpicture}[scale=0.75*\hsize/11cm, label/.style={ inner sep=2pt }] \draw [thin] (0,0) -- (11,0); \drawMeanderPath{1}{10,-7,8,-9,6,-1}; \drawMeanderPath{2}{5,-4,3,-2}; \node [label, above left ] at (1,0) {$1$}; \node [label, above left ] at (2,0) {$2$}; \node [label, above left ] at (3,0) {$3$}; \node [label, above right] at (4,0) {$4$}; \node [label, above right] at (5,0) {$5$}; \node [label, above left ] at (6,0) {$6$}; \node [label, above left ] at (7,0) {$7$}; \node [label, above right] at (8,0) {$8$}; \node [label, above right] at (9,0) {$9$}; \node [label, above right] at (10,0) {$10$}; \end{tikzpicture} \caption{\label{figMeanderHomotopy} A general meander as a pair of arc collections.} \end{figure} We start with one or several disjoint closed Jordan curves in the plane, intersected by a given line --- without loss of generality the horizontal axis. By homotopic deformations, without introduction or removal of intersections, the curve can be represented by collections of arcs above and below the horizontal axis. Both collections have the same number $\ensuremath{\alpha}$ of disjoint arcs and hit the axis in the same $2\ensuremath{\alpha}$ points $\{1,\ldots,2\ensuremath{\alpha}\}$, see figure \ref{figMeanderHomotopy}. There are now several possibilities to represent a meander. \subsection{...as a pair of products of disjoint transpositions\label{secMeanderTranspositions}} Each arc connects two points on the axis and thus defines a transposition. The arc collections above and below the axis can both be represented as products of the disjoint transpositions given by their arcs. The example in figure \ref{figMeanderHomotopy} reads \begin{equation}\label{eqTranspositionExample} \begin{array}{rcl} \pi^\arcsabove &=& (1,10)(2,5)(3,4)(6,9)(7,8), \\ \pi^\arcsbelow &=& (1,6)(2,3)(4,5)(7,10)(8,9), \end{array} \end{equation} in the common cycle representation of permutations. 
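That the arcs of each collection are mutually disjoint, i.e.~that no two arcs cross, can be tested mechanically. A short sketch for the example above (function name hypothetical):

```python
def is_disjoint_arc_collection(arcs):
    """True iff no two arcs interlace, i.e. a < c < b < d never occurs
    for arcs (a, b) and (c, d) with a < b and c < d."""
    arcs = [tuple(sorted(arc)) for arc in arcs]
    return not any(a < c < b < d
                   for a, b in arcs for c, d in arcs if (a, b) != (c, d))

upper = [(1, 10), (2, 5), (3, 4), (6, 9), (7, 8)]
lower = [(1, 6), (2, 3), (4, 5), (7, 10), (8, 9)]
assert is_disjoint_arc_collection(upper) and is_disjoint_arc_collection(lower)
assert not is_disjoint_arc_collection([(1, 3), (2, 4)])  # a crossing pair
```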
Such a product of disjoint transpositions represents a disjoint arc collection if, and only if, no pair of transpositions is interlaced, i.e. \[ \mbox{whenever} \quad a<b<\pi^{\arcsabove/\arcsbelow}(a) \quad \mbox{then} \quad a<\pi^{\arcsabove/\arcsbelow}(b)<\pi^{\arcsabove/\arcsbelow}(a), \qquad \pi^{\arcsabove/\arcsbelow} \in \{\pi^\arcsabove, \pi^\arcsbelow\}. \] Necessarily, both $\pi^\arcsabove$ and $\pi^\arcsbelow$ must interchange odd and even numbers, \begin{equation}\label{eqTranspositionOddEven} \begin{array}{rcl} \pi^{\arcsabove/\arcsbelow}|_{\{\mathrm{odd}\}} = \{\mathrm{even}\}, \qquad \pi^{\arcsabove/\arcsbelow}|_{\{\mathrm{even}\}} = \{\mathrm{odd}\}. \end{array} \end{equation} \begin{prop}Let \begin{equation}\label{eqCyclicPermutation} \sigma \;:=\; (1,2,3,4,\ldots,2\ensuremath{\alpha}) \end{equation} be the cyclic permutation of the $2\ensuremath{\alpha}$ intersection points with the axis. Then a product $\pi^{\arcsabove/\arcsbelow}$ of $\ensuremath{\alpha}$ disjoint transpositions represents a disjoint arc collection if, and only if, the permutation $\pi^{\arcsabove/\arcsbelow} \sigma$ has exactly $\ensuremath{\alpha}+1$ (disjoint) cycles. \end{prop} \begin{proof} Start with a disjoint arc collection. Take the graph with the $v:=2\ensuremath{\alpha}$ vertices $\{1,\ldots,2\ensuremath{\alpha}\}$. The $e:=3\ensuremath{\alpha}-1$ edges are given by the $\ensuremath{\alpha}$ arcs of the collection together with the $2\ensuremath{\alpha}-1$ edges $\{(1,2),\ldots,(2\ensuremath{\alpha}-1,2\ensuremath{\alpha})\}$ on the axis. Each cycle of $\pi^{\arcsabove/\arcsbelow} \sigma$ corresponds to the oriented boundary of a face. By Euler's formula, the number of faces $f$ of a (planar) graph is given by $f=e-v+2$. Therefore, there must be exactly $\ensuremath{\alpha}+1$ cycles. Now start with an arbitrary arc collection. 
If $\pi^{\arcsabove/\arcsbelow} \sigma$ has a fixed point $\ell$ then this fixed point belongs to an arc connecting the neighbouring points $\ell, \ell+1$. (The points $2\ensuremath{\alpha}$ and $1$ are also neighbours in this sense.) This arc can be removed, decreasing $\ensuremath{\alpha}$ and the number of cycles by one and keeping the structure of the remaining arcs (including their intersections) and of the remaining cycles of $\pi^{\arcsabove/\arcsbelow} \sigma$. If $\pi^{\arcsabove/\arcsbelow} \sigma$ does not have a fixed point then all cycles have length at least 2. Therefore there can be at most $\ensuremath{\alpha}$ cycles, and the arc collection is not disjoint due to the first argument. For disjoint initial arc collections $\pi^{\arcsabove/\arcsbelow}$, the iterative removal of fixed points of $\pi^{\arcsabove/\arcsbelow} \sigma$ (alias arcs between neighbours) finishes at the trivial arc collection $\tilde{\pi}^{\arcsabove/\arcsbelow} = (12)$ of a single arc. Indeed, $\tilde{\pi}^{\arcsabove/\arcsbelow} \sigma = (1)(2)$ has two cycles and therefore $\pi^{\arcsabove/\arcsbelow} \sigma$ has $\ensuremath{\alpha}+1$ cycles. For non-disjoint initial arc collections $\pi^{\arcsabove/\arcsbelow}$, the iterative removal of fixed points of $\pi^{\arcsabove/\arcsbelow} \sigma$ must stop earlier: at an arc collection $\tilde{\pi}^{\arcsabove/\arcsbelow}$ such that $\tilde{\pi}^{\arcsabove/\arcsbelow} \sigma$ has no fixed points. The number of cycles of $\pi^{\arcsabove/\arcsbelow} \sigma$ is therefore less than or equal to $\ensuremath{\alpha}$. \end{proof} \subsection{...as a single meander permutation\label{secMeanderPermutation}} The two products of transpositions which we used in the previous section both interchange odd and even numbers, see (\ref{eqTranspositionOddEven}). 
Therefore, we can combine them into a single permutation \begin{equation}\label{eqMenaderPermutation} \pi^\arcsboth(k) \;=\;\left\{\begin{array}{ll} \pi^\arcsabove(k) &,\quad k \mbox{ odd},\\ \pi^\arcsbelow(k) &,\quad k \mbox{ even}. \end{array}\right. \end{equation} The cycles of this permutation directly correspond to the closed curves of the meander. We find: \begin{prop} The number of connected components, i.e.~the number of closed Jordan curves, of a meander equals the number of cycles of the associated meander permutation $\pi^\arcsboth$. Additionally, the number of cycles of $\pi^\arcsabove\pi^\arcsbelow$ is twice the number of connected components. \end{prop} The second part follows from the fact that $\pi^\arcsabove\pi^\arcsbelow$ leaves the sets of even/odd numbers invariant and equals $(\pi^\arcsboth)^2$ on the even and its inverse on the odd numbers. For the example in figure \ref{figMeanderHomotopy}, we get \begin{equation}\label{eqPermutationExample} \begin{array}{rcl} \pi^\arcsboth &=& (1,10,7,8,9,6)(2,3,4,5), \\ \pi^\arcsabove\pi^\arcsbelow &=& (1,9,7)(3,5)(2,4)(6,10,8). 
\end{array} \end{equation} \subsection{...as a shooting permutation\label{secMeanderShooting}} \begin{figure} \centering \begin{tikzpicture}[scale=0.75*\hsize/15cm, label/.style={ inner sep=2pt }] \draw [thin] (0,0) -- (15,0); \begin{scope}[shift={(0,0.5)}] \drawMeanderArc{12}{ 1} \drawMeanderArc{ 5}{ 2} \drawMeanderArc{ 4}{ 3} \drawMeanderArc{11}{ 6} \drawMeanderArc{10}{ 7} \drawMeanderArc{ 9}{ 8} \drawMeanderArc{14}{13} \end{scope} \begin{scope}[shift={(0,-0.5)}] \drawMeanderArc{ 1}{14} \drawMeanderArc{ 2}{ 3} \drawMeanderArc{ 4}{13} \drawMeanderArc{ 5}{ 6} \drawMeanderArc{ 7}{12} \drawMeanderArc{ 8}{11} \drawMeanderArc{ 9}{10} \end{scope} \foreach \slX/\slS in {1/1,2/10,3/11,4/12,5/9,6/8,7/3,8/6,9/5,10/4,11/7,12/2,13/13,14/14} { \draw [thick] (\slX,-0.5) -- (\slX,0.5); \node [label, above left ] at (\slX,0) {$\slX$}; \node [label, below left ] at (\slX,0) {$\slS$}; } \end{tikzpicture} \caption{\label{figShootingCurve} Shooting permutation of a connected meander.} \end{figure} In \cite{Rocha1991-SturmAttractor, FiedlerRocha1999-SturmPermutations} meander curves are found as shooting curves of scalar reaction-advection-diffusion equations, \begin{equation}\label{eqSturmPDE} u_t \;=\; u_{xx} + f(x,u,u_x), \end{equation} in one space dimension, $x\in[0,L]$. They are used to describe the global attractors of these systems. Indeed, the $u$-axis $\{u_x|_{x=0}=0\}$, corresponding to a Neumann boundary condition on the left boundary, is propagated by \[ 0 \;=\; u_{xx} + f(x,u,u_x) \] to a curve in the $(u,u_x)$-plane at the right boundary $x=L$. In particular, intersections of this curve with the horizontal axis yield stationary solutions of (\ref{eqSturmPDE}) with Neumann boundary conditions. To facilitate its application in this context, the meander is described by a permutation $\pi$ such that $(k,\pi(k))$ are the right and left boundary values of the stationary solutions of the PDE. 
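The data of figure \ref{figShootingCurve} can be checked for consistency: reading the arcs and both enumerations off the figure, the meander permutation obtained from the arcs must equal the conjugate $\pi^{-1}\sigma\pi$ of the cyclic permutation $\sigma$, and it must consist of a single cycle. A sketch, where all dictionaries are transcribed from the figure:

```python
# axis position k -> shooting label pi(k), read off figure (figShootingCurve)
pi = {1: 1, 2: 10, 3: 11, 4: 12, 5: 9, 6: 8, 7: 3, 8: 6, 9: 5,
      10: 4, 11: 7, 12: 2, 13: 13, 14: 14}
upper = [(12, 1), (5, 2), (4, 3), (11, 6), (10, 7), (9, 8), (14, 13)]
lower = [(1, 14), (2, 3), (4, 13), (5, 6), (7, 12), (8, 11), (9, 10)]

pair = lambda arcs: {a: b for x, y in arcs for a, b in ((x, y), (y, x))}
up, lo = pair(upper), pair(lower)
# meander permutation: upper arc at odd positions, lower arc at even ones
meander = {k: up[k] if k % 2 == 1 else lo[k] for k in range(1, 15)}

sigma = {k: k % 14 + 1 for k in range(1, 15)}
pi_inv = {v: k for k, v in pi.items()}
assert meander == {k: pi_inv[sigma[pi[k]]] for k in range(1, 15)}

orbit, k = {1}, meander[1]   # a single cycle: the shooting curve is connected
while k != 1:
    orbit.add(k)
    k = meander[k]
assert len(orbit) == 14
```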
In fact, the meander is connected by construction, i.e.~consists of only one Jordan curve. It is originally open, going to $\pm\infty$ for large $|u|$, but can be artificially closed. The permutation used in \cite{Rocha1991-SturmAttractor} yields \begin{equation}\label{eqMenaderShootingPermutation} \pi\left( (\pi^\arcsboth)^{k}(1) \right) \;=\; k+1, \qquad \pi^\arcsboth \;=\; \pi^{-1} \sigma \pi, \end{equation} with the cyclic permutation $\sigma$ as in (\ref{eqCyclicPermutation}). The shooting permutation $\pi$ maps the enumeration of intersections along the horizontal axis onto the enumeration along the shooting curve, see figure \ref{figShootingCurve} for an illustration of an artificially closed shooting curve and \cite{FiedlerRocha1999-SturmPermutations, FiedlerRocha2009-SturmAttractors1} for recent results on attractors of (\ref{eqSturmPDE}). \subsection{...as a (condensed) bracket expression\label{secMeanderBracket}} When we replace each arc by a pair of brackets, with the opening bracket at the left end and the closing bracket at the right end of the arc, we find corresponding balanced bracket expressions. The example in figure \ref{figMeanderHomotopy} is represented by \begin{equation}\label{eqBracketExample} \frac{\mathds{\bigg[\,\Big[\;[\;]\;\Big]\,\Big[\;[\;]\;\Big]\,\bigg]}} {\Big[\;[\;]\;[\;]\;\Big]\,\Big[\;[\;]\;\Big]}. \end{equation} Each bracket expression can be condensed to a tuple of pairs of positive integers \begin{equation}\label{eqCondensedBracketExpression} ((\ensuremath{\alpha}_1^[,\ensuremath{\alpha}_1^]),\ldots,(\ensuremath{\alpha}_n^[,\ensuremath{\alpha}_n^])), \end{equation} with $\ensuremath{\alpha}_k^[$ representing consecutive opening brackets alias left ends of arcs and $\ensuremath{\alpha}_k^]$ representing consecutive closing brackets alias right ends of arcs. Zero entries could be allowed but can always be removed. 
The example in figure \ref{figMeanderHomotopy} then reads \begin{equation}\label{eqCondensedBracketExample} \frac{((3,2),(2,3))}{((2,1),(1,2),(2,2))}. \end{equation} On the other hand, each bracket expression represents a disjoint arc collection provided \begin{equation}\label{eqCondensedBracketExpressionConditions} \forall k \;\; \sum_{\ell=1}^k \ensuremath{\alpha}_\ell^[ \ge \sum_{\ell=1}^k \ensuremath{\alpha}_\ell^] \qquad\mbox{and}\qquad \sum_{\ell=1}^n \ensuremath{\alpha}_\ell^[ = \sum_{\ell=1}^n \ensuremath{\alpha}_\ell^] = \ensuremath{\alpha}. \end{equation} Indeed, the arcs given by matching brackets are automatically disjoint. Note that this representation as condensed bracket expression is particularly useful for arc collections which contain large families of non-branched nested arcs. \subsection{...as a cleaved rainbow meander} \begin{figure} \centering \begin{tikzpicture}[scale=0.99*\hsize/43cm] \fill [color=background] (0.7,0) -- (10.3,0) arc (0:-180:4.8 and 2.4); \fill [color=background] (10.7,0) -- (20.3,0) arc (0:180:4.8 and 2.4); \draw [background, line width=5pt, ->] (5.5,0) arc (180:360:5 and 4); \drawMeanderPath{1}{10,-7,8,-9,6,-1}; \drawMeanderPath{2}{5,-4,3,-2}; \draw [thin] (0,0) -- (11,0); \begin{scope}[shift={(22,0)}] \drawMeanderPath{1}{10,-11,14,-7,8,-13,12,-9,6,-15,20,-1}; \drawMeanderPath{2}{5,-16,17,-4,3,-18,19,-2}; \draw [thin] (0,0) -- (21,0); \end{scope} \end{tikzpicture} \caption{\label{figMeanderFlip} Flip of the lower arc collection of a meander.} \end{figure} We are interested in the number of connected components, i.e.~closed curves, of the meander. This number remains the same if we ``simplify'' the lower arc collection of the meander by flipping it to the upper part. More precisely, we rotate the lower arc collection around a point on the horizontal axis to the right of the meander, see figure \ref{figMeanderFlip}. 
This operation doubles the number of intersection points with the horizontal axis but replaces the lower arc collection by a single non-branched family of nested arcs --- a \emph{rainbow family}. In \cite{DiFrancescoGolinelliGuitter1997-Meander}, a meander is called a \emph{rainbow meander} if the lower arcs form a single rainbow family, i.e.~are all nested. A meander is called \emph{cleaved} if none of the upper arcs connects a point $1 \le \ell \le \alpha$ on the left half to a point $\alpha < \tilde\ell \le 2\alpha$ on the right half of the horizontal axis, that is if the upper arc collection is split at the midpoint. The flip, described above, then results in a cleaved rainbow meander. Without loss of generality, from now on, we assume that all meanders are rainbow meanders, i.e.~have a single rainbow family as their lower arc collection. The representations of permutations, discussed above, take the respective forms: \begin{equation}\label{eqFlippedRepresentations} \begin{array}{rcl} \pi^\arcsbelow &=& (1,2\ensuremath{\alpha})(2,2\ensuremath{\alpha}-1)\cdots(\ensuremath{\alpha},\ensuremath{\alpha}+1),\\ \pi^\arcsboth(k) &=& \left\{\begin{array}{ll} \pi^\arcsabove(k), & k \mbox{ odd},\\ 2\ensuremath{\alpha}-k+1, \quad & k \mbox{ even}. \end{array}\right. \end{array} \end{equation} As condensed bracket expression, the lower arc collection has the form $((\ensuremath{\alpha},\ensuremath{\alpha}))$ and can be omitted. If necessary, we apply the flip. The condensed bracket expression of the new upper arc collection is the old one continued by the reflected old lower expression. Specifically, for the example in figures \ref{figMeanderHomotopy}, \ref{figMeanderFlip}, we obtain \begin{equation}\label{eqBracketFlipExample} ((3,2),(2,3),(2,2),(2,1),(1,2)), \end{equation} see also the former representations (\ref{eqTranspositionExample}, \ref{eqBracketExample}, \ref{eqCondensedBracketExample}).
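On condensed bracket expressions, the flip is a one-line operation: append the reflected lower expression (reversed order, opening and closing counts swapped) to the upper one. A sketch (function name hypothetical) reproducing (\ref{eqBracketFlipExample}) from (\ref{eqCondensedBracketExample}):

```python
def flip(upper, lower):
    """Condensed bracket expression of the cleaved rainbow meander obtained
    by rotating the lower arc collection to the top: the old upper expression
    continued by the reflected old lower expression."""
    return upper + [(c, o) for (o, c) in reversed(lower)]

upper = [(3, 2), (2, 3)]
lower = [(2, 1), (1, 2), (2, 2)]
assert flip(upper, lower) == [(3, 2), (2, 3), (2, 2), (2, 1), (1, 2)]
```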
\begin{defn}[Meander]\label{defMeander} We identify a meander with the condensed bracket expression of its upper arc collection (after the flip, for non-rainbow meanders) and use the notation \begin{equation}\label{eqMeanderDef} \ensuremath{\mathop{\mathcal{M}\strut}} \;=\; \ensuremath{\mathop{\mathcal{M}\strut}}((\ensuremath{\alpha}_1^[,\ensuremath{\alpha}_1^]),\ldots,(\ensuremath{\alpha}_n^[,\ensuremath{\alpha}_n^])), \end{equation} satisfying (\ref{eqCondensedBracketExpressionConditions}). \end{defn} Note that a given $n$-tuple (\ref{eqMeanderDef}) of pairs of positive integers represents a flipped meander if, and only if, it is cleaved: \begin{equation}\label{eqCleavedMeander} \sum_{\ell=1}^k \ensuremath{\alpha}_\ell^[ \;=\; \sum_{\ell=1}^k \ensuremath{\alpha}_\ell^] \;=\; \sum_{\ell=k+1}^n \ensuremath{\alpha}_\ell^[ \;=\; \sum_{\ell=k+1}^n \ensuremath{\alpha}_\ell^] \;=\; \ensuremath{\alpha}/2, \end{equation} for an appropriate $k$. Otherwise, the inverse flip would create a meander curve with ``overhanging'' arcs from the upper to the lower side of the axis. Such meanders can be interpreted as the intersection pattern of closed Jordan curves with a half line instead of a line. See again \cite{DiFrancescoGolinelliGuitter1997-Meander}, where this viewpoint is further developed. 
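Both condition (\ref{eqCondensedBracketExpressionConditions}) and the cleavedness condition (\ref{eqCleavedMeander}) reduce to simple prefix-sum tests on the condensed bracket expression. A sketch (function names hypothetical):

```python
from itertools import accumulate

def is_valid(expr):
    # condition (eqCondensedBracketExpressionConditions): prefix sums of
    # opening brackets dominate those of closing brackets, totals agree
    opens = list(accumulate(o for o, c in expr))
    closes = list(accumulate(c for o, c in expr))
    return all(a >= b for a, b in zip(opens, closes)) and opens[-1] == closes[-1]

def is_cleaved(expr):
    # condition (eqCleavedMeander): the expression splits at alpha/2 for some k
    opens = [0] + list(accumulate(o for o, c in expr))
    closes = [0] + list(accumulate(c for o, c in expr))
    half = opens[-1] / 2
    return any(opens[k] == closes[k] == half for k in range(1, len(expr)))

flipped = [(3, 2), (2, 3), (2, 2), (2, 1), (1, 2)]
assert is_valid(flipped) and is_cleaved(flipped)   # splits at k = 2
assert not is_cleaved([(2, 1), (1, 2)])            # no flipped meander
```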
\subsection{...as an element of a Temperley-Lieb algebra} \begin{figure} \centering \begin{tikzpicture}[scale=0.8*\hsize/20cm, yscale=0.5]\small \begin{scope}[ shift={(0,0)} ] \node [left] at (-2,-4.5) {{\boldmath$e_0\!=\!1$:\strut}}; \draw [thin] (0,0.5) -- ++(0,-10) (3,0.5)-- ++(0,-10); \draw [thick] \foreach \slY in {0,-1,-4,-5,-8,-9} { (0,\slY) -- ++(3,0) }; \node [left] at (0, 0) {$1$}; \node [left] at (0,-1) {$2$}; \node [left] at (0,-8) {$\alpha\!-\!1$}; \node [left] at (0,-9) {$\alpha$}; \node at (1.5,-2.5) {$\vdots$\strut}; \node at (1.5,-6.5) {$\vdots$\strut}; \end{scope} \begin{scope}[ shift={(10,0)} ] \node [left] at (-2,-4.5) {{\boldmath$e_\ell$:\strut}}; \draw [thin] (0,0.5) -- ++(0,-10) (3,0.5) -- ++(0,-10); \draw [thick] \foreach \slY in {0,-1,-8,-9} { (0,\slY) -- ++(3,0) } (0,-5) -- ++( 1,0) arc(-90:90:0.2 and 0.5) -- ++(-1,0) (3,-5) -- ++(-1,0) arc(270:90:0.2 and 0.5) -- ++( 1,0); \node [left] at (0, 0) {$1$}; \node [left] at (0,-1) {$2$}; \node [left] at (0,-4) {$\ell$}; \node [left] at (0,-5) {$\ell\!+\!1$}; \node [left] at (0,-8) {$\alpha\!-\!1$}; \node [left] at (0,-9) {$\alpha$}; \node at (1.5,-2.5) {$\vdots$\strut}; \node at (1.5,-6.5) {$\vdots$\strut}; \end{scope} \end{tikzpicture} \caption{\label{figTLgenerators} Generators of the Temperley-Lieb algebra $TL_{\alpha}(q)$ as strand diagrams.} \end{figure} The multiplicative generators $e_0=1,e_1,\ldots,e_{\alpha-1}$ of a Temperley-Lieb algebra $TL_{\alpha}(q)$ \cite{TemperleyLieb1971-TemperleyLieb} obey the relations \begin{equation}\label{eqTLreleation} \begin{array}{rcll} e_\ell^2 &=& q e_\ell, &(a)\\ e_\ell e_k &=& e_k e_\ell, \qquad \mbox{if } |k-\ell| \ge 2, \quad &(b)\\ e_\ell e_{\ell\pm1} e_\ell &=& e_\ell. &(c) \end{array} \end{equation} They can be visualized as strand diagrams, see figure \ref{figTLgenerators}. 
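The relations (\ref{eqTLreleation}) can be verified mechanically in a small model where each strand diagram is stored as a perfect matching of its boundary points, composition concatenates two diagrams, and every closed island contributes one factor $q$. A sketch (all names hypothetical):

```python
def gen(l, n):
    # strand diagram of the generator e_l on n strands, as a perfect matching
    # of the boundary points ('L', i) and ('R', i)
    d = {('L', k): ('R', k) for k in range(1, n + 1) if k not in (l, l + 1)}
    d[('L', l)] = ('L', l + 1)
    d[('R', l)] = ('R', l + 1)
    return {**d, **{v: k for k, v in d.items()}}

def compose(d1, d2, n):
    """Concatenate d1 and d2 (right boundary of d1 glued to left boundary of
    d2); returns the diagram of the product and the number of closed islands."""
    used = set()  # middle positions crossed by through-strands
    def trace(d, pt):
        while True:
            s, j = d[pt]
            if d is d1 and s == 'L': return ('L', j)
            if d is d2 and s == 'R': return ('R', j)
            used.add(j)
            d, pt = (d2, ('L', j)) if d is d1 else (d1, ('R', j))
    new = {('L', i): trace(d1, ('L', i)) for i in range(1, n + 1)}
    new.update({('R', i): trace(d2, ('R', i)) for i in range(1, n + 1)})
    loops, seen = 0, set(used)
    for j in range(1, n + 1):  # alternating cycles trapped at the interface
        if j in seen:
            continue
        loops, k = loops + 1, j
        while k not in seen:
            seen.add(k)
            k2 = d2[('L', k)][1]
            seen.add(k2)
            k = d1[('R', k2)][1]
    return new, loops

n = 5
for l in range(1, n):
    assert compose(gen(l, n), gen(l, n), n) == (gen(l, n), 1)      # (a)
    for k in range(l + 2, n):
        assert compose(gen(l, n), gen(k, n), n) \
            == compose(gen(k, n), gen(l, n), n)                    # (b)
for l in range(1, n - 1):
    d, q = compose(gen(l, n), gen(l + 1, n), n)
    d, q2 = compose(d, gen(l, n), n)
    assert (d, q + q2) == (gen(l, n), 0)                           # (c)
```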
Then, the strand diagram of a general product $e_{\ell_1}\cdots e_{\ell_n}$ is given as the concatenation of the individual strand diagrams of $e_{\ell_1},\ldots,e_{\ell_n}$. The properties (\ref{eqTLreleation}) allow isotopic transformations of the strand diagrams. Possible islands, i.e.~closed Jordan curves in the strand diagram, can be removed and then appear as a pre-factor $q$ due to (\ref{eqTLreleation}a). Relations (\ref{eqTLreleation}) can be used to define a basis of reduced elements written as pure products $e_{\ell_1}\cdots e_{\ell_n}$ without islands. \begin{figure} \centering \begin{tikzpicture}[scale=0.8*\hsize/20cm, yscale=0.5]\small \foreach \slX/\slE in {0/3,1/2,2/4,3/7,4/6,5/8,6/1,7/3,8/5,9/7,10/9,11/2,12/4,13/8} { \draw [thick] (\slX ,-\slE) arc (-90:90:0.2 and 0.5); \draw [thick] (\slX+1,-\slE) arc (270:90:0.2 and 0.5); \foreach \slY in {2,3,...,\slE} { \ifnum\slY>\slE\else\draw [thick] (\slX,2-\slY) -- ++(1,0);\fi } \foreach \slY in {8,7,...,\slE} { \ifnum\slY<\slE\else\draw [thick] (\slX,-1-\slY) -- ++(1,0);\fi } \draw [thin, dotted] (\slX,0.5) -- ++(0,-10); \node [above] at (\slX.5,0) {$e_{\slE}$}; } \draw [thin ] (0,0.5) -- ++(0,-10.2) -- ++(14,0) -- ++(0,10.2); \foreach \slY in {0,...,9} { \draw [thick, dashed] (15,-\slY) arc (90:-90:3-\slY*0.3 and 7-\slY*0.7) -- ++(-16,0) arc (270:90:3-\slY*0.3 and 7-\slY*0.7) -- ++(0.4,0); } \foreach \slN in {1,...,10} { \node at (0,1-\slN) [left] {\slN}; } \foreach \slN in {11,...,20} { \node at (14,\slN-20) [right] {\slN}; } \end{tikzpicture} \caption{\label{figTemperleyLieb} Meander as the closure of an element of a Temperley-Lieb algebra.} \end{figure} A reduced element $e_{\ell_1}\cdots e_{\ell_n}$ becomes a rainbow meander when we connect the left and right vertical boundaries of the strand diagram by a rainbow family. This closure is illustrated in figure \ref{figTemperleyLieb}, where we again obtain the meander example (\ref{eqBracketFlipExample}) of figure \ref{figMeanderFlip}. 
The horizontal line of the meander corresponds to the left and right boundaries of the strand diagram of the Temperley-Lieb element, glued at their bottom ends. The trace $\mathrm{tr}(e)$ is defined as a linear function on $TL_\alpha(q)$. It plays a crucial role in defining further operators on the Temperley-Lieb algebra. On products $e = e_{\ell_1}\cdots e_{\ell_n}$ the trace is given by \[ \mathrm{tr}(e) \,:=\; q^{\ensuremath{\mathop{\mathcal{Z}\strut}}(e)}, \] where $\ensuremath{\mathop{\mathcal{Z}\strut}}(e)$ is the number of connected components of the strand diagram with identified endpoints of the same height in the left and right boundary. Without islands, this coincides with the number of Jordan curves in the associated meander. The ring element $q$ is the parameter of the Temperley-Lieb algebra. See \cite{DiFrancescoGolinelliGuitter1997-Meander} for further background on this correspondence. \subsection{...as a Cartesian billiard\label{secBilliard}} \begin{figure} \centering \begin{tikzpicture}[scale=0.45*\hsize/6cm]\small \draw [thin] (0,0)--(0,3)--(2,3)--(2,5)--(5,5)--(5,3)--(3,3)--(3,1)--(2,1)--(2,0)--(0,0); \draw [thick] (0,0.5)--(4.5,5)--(5,4.5)--(3.5,3)--(2,4.5)--(2.5,5)--(4.5,3)--(5,3.5) --(3.5,5)--(2,3.5)--(3,2.5)--(0.5,0)--(0,0.5); \draw [thick, dashed] (0,1.5)--(1.5,3)--(3,1.5)--(2.5,1)--(0.5,3)--(0,2.5)--(2,0.5)--(1.5,0) --(0,1.5); \node at (0,0.5) [left ] {1}; \node at (0,1.5) [left ] {2}; \node at (0,2.5) [left ] {3}; \node at (0.5,3) [above] {4}; \node at (1.5,3) [above] {5}; \node at (2,3.5) [left ] {6}; \node at (2,4.5) [left ] {7}; \node at (2.5,5) [above] {8}; \node at (3.5,5) [above] {9}; \node at (4.5,5) [above] {10}; \node at (5,4.5) [right] {11}; \node at (5,3.5) [right] {12}; \node at (4.5,3) [below] {13}; \node at (3.5,3) [below] {14}; \node at (3,2.5) [right] {15}; \node at (3,1.5) [right] {16}; \node at (2.5,1) [below] {17}; \node at (2,0.5) [right] {18}; \node at (1.5,0) [below] {19}; \node at (0.5,0) [below] {20}; 
\end{tikzpicture} \caption{\label{figBilliard} Meander as trajectories of a Cartesian billiard.} \end{figure} \ignore \begin{figure} \centering \begin{tikzpicture}[scale=0.6*\hsize/20cm]\small \draw [thin] ( 0, 0)--( 6, 6)--(10, 2)--(14, 6)--(20, 0) -- (16,-4)--(12, 0)--( 8,-4)--( 6,-2)--( 4,-4)--( 0, 0); \draw [thick] ( 1, 1)--(19, 1)--(19,-1)--(13,-1)--(13, 5)--(15, 5) -- (15,-3)--(17,-3)--(17, 3)--(11, 3)--(11,-1)--( 1,-1)--( 1, 1); \draw [thick, dashed] ( 3, 3)--( 9, 3)--( 9,-3)--( 7,-3) --( 7, 5)--( 5, 5)--( 5,-3)--( 3,-3)--( 3, 3); \node at ( 1, 1) [above] {1\strut}; \node at ( 3, 3) [above] {2\strut}; \node at ( 5, 5) [above] {3\strut}; \node at ( 7, 5) [above] {4\strut}; \node at ( 9, 3) [above] {5\strut}; \node at (11, 3) [above] {6\strut}; \node at (13, 5) [above] {7\strut}; \node at (15, 5) [above] {8\strut}; \node at (17, 3) [above] {9\strut}; \node at (19, 1) [above] {10\strut}; \node at (19,-1) [below] {11\strut}; \node at (17,-3) [below] {12\strut}; \node at (15,-3) [below] {13\strut}; \node at (13,-1) [below] {14\strut}; \node at (11,-1) [below] {15\strut}; \node at ( 9,-3) [below] {16\strut}; \node at ( 7,-3) [below] {17\strut}; \node at ( 5,-3) [below] {18\strut}; \node at ( 3,-3) [below] {19\strut}; \node at ( 1,-1) [below] {20\strut}; \end{tikzpicture} \caption{\label{figBilliard} Meander as trajectories of a Cartesian billiard.} \end{figure} A Cartesian billiard is played on a compact region B in the plane. The boundary of B consists of horizontal and vertical connections of corner points on the integer lattice $\mathds{Z}\times\mathds{Z}$. The billiard trajectories are piecewise linear flights on the diagonal grid $\{(x,y)\,|\,x\pm y\in\mathds{Z}+\frac{1}{2}\}$ and hit the boundary polygon in half-integer midpoints $\mathds{Z}\times(\mathds{Z}+\frac{1}{2})\;\cup\;(\mathds{Z}+\frac{1}{2})\times\mathds{Z}$ with the standard reflection rule. See figure \ref{figBilliard} for an illustration. 
In \cite{FiedlerCastaneda2012-Meander}, the close relation between Cartesian billiards and meanders has been studied. If the boundary of the billiard region is a single curve without self-intersections (or, more generally, with self-intersections only at integer lattice points, which are removable by making the corners of the boundary polygon round), then the billiard trajectories correspond to meander curves. Indeed, we take any consecutive enumeration of the half-integer midpoints along the billiard boundary. They represent the intersection points of the meander with the horizontal line. The two families of parallel pieces of the billiard trajectories represent, respectively, the upper and lower arcs of the meander. In particular, the closed trajectories of the Cartesian billiard are mapped onto the closed Jordan curves of the meander. Conversely, a cleaved rainbow meander $\ensuremath{\mathop{\mathcal{M}\strut}}((\ensuremath{\alpha}_1^[,\ensuremath{\alpha}_1^]),\ldots,(\ensuremath{\alpha}_n^[,\ensuremath{\alpha}_n^]))$ can easily be represented by a Cartesian billiard. Indeed, we construct the billiard boundary by starting at the origin and attaching a horizontal or vertical unit interval for each of the $2\alpha$ upper brackets of our meander representation: On the first half, i.e.~for the first $\alpha$ brackets, we go up for opening brackets and right for closing brackets. Due to condition (\ref{eqCleavedMeander}), we arrive at the point $(\alpha/2,\alpha/2)$ and stay above the diagonal $x=y$. On the second half, i.e.~for the last $\alpha$ brackets, we go down for opening brackets and left for closing brackets. We stay below the diagonal $x=y$ and arrive back at the origin. (The only possible self-intersections are touching points on the diagonal.) See figure \ref{figBilliard} for the representation of the meander example (\ref{eqBracketFlipExample}) of figure \ref{figMeanderFlip}. Without the cleavage condition (\ref{eqCleavedMeander}), the construction has an additional twist.
For pairs of matching brackets on the same side of the midpoint, we do the same as before. For pairs of matching brackets on opposite sides of the midpoint, we switch the rule for the bracket closer to the midpoint. (We must exclude the case of brackets of the same distance to the midpoint, which create a circle.) If the opening bracket is closer to the midpoint, we go right for the opening and left for the matching closing bracket, switching the former rule for opening brackets of the first half. If the closing bracket is closer to the midpoint, we go up for the opening and down for the closing bracket, switching the former rule for closing brackets of the second half. This results in a closed billiard boundary without self intersections (except, possibly, integer-lattice touching points which can be removed by making the corners round), provided the original meander is circle-free, i.e.~has no closed curve consisting of only one upper and one lower arc. See \cite{FiedlerCastaneda2012-Meander} for a complete proof. 
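For the cleaved case, the boundary construction is easy to automate. The following Python sketch (function name and example bracket string are ours) turns the $2\alpha$ upper brackets into the boundary polygon, assuming, as in the cleaved case above, that each half of the bracket string is balanced:

```python
def billiard_boundary(brackets):
    """Boundary polygon of the Cartesian billiard of a cleaved rainbow
    meander, read off from its string of 2*alpha upper brackets.
    First half:  '(' -> up,   ')' -> right;
    second half: '(' -> down, ')' -> left."""
    n = len(brackets)
    assert n % 2 == 0 and brackets.count('(') == n // 2
    x = y = 0
    pts = [(0, 0)]
    for k, b in enumerate(brackets):
        if k < n // 2:
            dx, dy = (0, 1) if b == '(' else (1, 0)
        else:
            dx, dy = (0, -1) if b == '(' else (-1, 0)
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts
```

For the bracket string '(())()()' (so $\alpha=4$), the first half ends at $(\alpha/2,\alpha/2)=(2,2)$, the polygon closes up at the origin, and the two halves stay weakly above and below the diagonal $x=y$, respectively.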
\subsection{Bi-rainbow meanders} \begin{figure} \centering \begin{tikzpicture}[scale=0.75*\hsize/29cm] \draw [thin] (0,0) -- (29,0); \drawMeanderArc{10}{1} \drawMeanderArc{9}{2} \drawMeanderArc{6}{5} \node at (5.5,1) {$\vdots$\strut}; \node [right] at (5.5,1) {$\ensuremath{\alpha}_1$\strut}; \drawMeanderArc{28}{17} \drawMeanderArc{27}{18} \drawMeanderArc{23}{22} \node at (22.5,1.25) {$\vdots$\strut}; \node [right] at (22.5,1.25) {$\ensuremath{\alpha}_n$\strut}; \node at (13.5,1) {\scalebox{2}{$\cdots$}}; \drawMeanderArc{1}{28} \drawMeanderArc{2}{27} \drawMeanderArc{13}{16} \drawMeanderArc{14}{15} \node at (14.5,-3.5) {\scalebox{2}{$\vdots$\strut}}; \node [right] at (14.5,-3.5) {$\displaystyle\;\ensuremath{\alpha} = \sum_1^n \ensuremath{\alpha}_\ell$}; \end{tikzpicture} \caption{\label{figRainbowMeander} General bi-rainbow meander.} \end{figure} We have already called a single non-branched family of nested arcs a \emph{rainbow family}, and a meander with a single lower rainbow family a rainbow meander. If a meander consists only of rainbow families, that is if also the upper arc collection consists only of non-branched families of nested arcs, then we call the meander a bi-rainbow meander, see figure \ref{figRainbowMeander}. \begin{defn}[Bi-rainbow meander]\label{defRainbowMeander} A bi-rainbow meander is a meander \begin{equation}\label{eqRainbowMeanderDef} \ensuremath{\mathop{\mathcal{RM}\strut}} \;=\; \ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;:=\; \ensuremath{\mathop{\mathcal{M}\strut}}((\ensuremath{\alpha}_1,\ensuremath{\alpha}_1),\ldots,(\ensuremath{\alpha}_n,\ensuremath{\alpha}_n)), \end{equation} consisting of $\ensuremath{\alpha}=\sum_{\ell=1}^n \ensuremath{\alpha}_\ell$ upper arcs in $n$ rainbow families and one lower rainbow family of $\ensuremath{\alpha}$ nested arcs. 
\end{defn} Bi-rainbow meanders~--- or rather their collapsed variants introduced in section \ref{secCollapse}~--- represent the structure of seaweed algebras \cite{DergachevKirillov2000-SeaweedAlgebras, CollMagnantWang2012-Meander}. Here, the number of connected components is related to the index of the associated seaweed algebra. In \cite{FiedlerCastaneda2012-Meander, CollMagnantWang2012-Meander}, the question is raised of how to compute the number of connected components, i.e.~closed curves, \begin{equation}\label{eqRainbowComponentsDef} \ensuremath{\mathop{\mathcal{Z}\strut}} \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n), \end{equation} of a bi-rainbow meander. In fact, the easy expressions \[ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2) = \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2), \qquad \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ensuremath{\alpha}_3) = \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1+\ensuremath{\alpha}_2,\ensuremath{\alpha}_2+\ensuremath{\alpha}_3), \] in terms of greatest common divisors, see (\ref{eqGcdOfRainbows}), provoked the call for a general ``closed'' formula. This has also been the initial purpose of our investigation. \section{Collapsed meanders} \label{secCollapse} In this section, we introduce the \emph{collapse} of a meander. We start with a bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}} \;=\; \ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$, drawn as arc collections in the plane, see figure \ref{figRainbowMeander}. Above and below the horizontal axis, the meander splits the half planes into connected components.
Coming from infinity, we colour every second component black: If a path in the half plane from infinity into the component crosses an odd number of arcs, then we colour this component. The coloured components hit the horizontal axis in the intervals $[2\ell-1, 2\ell]$, $\ell \ge 1$. In particular, the coloured components above and below the horizontal axis match. Furthermore, each arc bounds exactly one coloured component. There are two types of coloured components. Most coloured components are ``thickened arcs'' bounded by two (neighbouring) arcs of the same rainbow family and two intervals on the axis. The only exceptions are the innermost components of rainbow families with an odd number of arcs: they are half disks bounded by an arc and an interval on the axis. See figure \ref{figCollapsedRainbow} for an illustration. \begin{figure} \centering \includegraphics[width=\textwidth]{FigCollapsedRainbow} \caption{\label{figCollapsedRainbow} Collapse of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(4,5,3,4,5)$. From left to right: $\ensuremath{\mathop{\mathcal{RM}\strut}}$, coloured domains, and $\ensuremath{\mathop{\mathcal{CRM}\strut}}$.
The collapsed bi-rainbow meander consists of one path, one cycle, and an isolated point (counted as a second path).} \end{figure} \ignore \begin{figure} \centering \begin{tikzpicture}[scale=0.75*\hsize/43mm] \draw [thin] (0,0) -- (4.3,0); \drawMeanderRainbow{1.4}{0.1}{14} \drawMeanderRainbow{1.8}{1.5}{ 4} \drawMeanderRainbow{2.4}{1.9}{ 6} \drawMeanderRainbow{3.4}{2.5}{10} \drawMeanderRainbow{4.2}{3.5}{ 8} \drawMeanderRainbow{0.1}{4.2}{42} \end{tikzpicture} \begin{tikzpicture}[scale=0.75*\hsize/43mm] \draw [thick, fill=magenta!70!white] (2.1,0) \meanderPath{2.2,-2.1}; \draw [thick, fill=background] (1.9,0) \meanderPath{2.4,-1.9} (2.0,0) \meanderPath{-2.3,2.0}; \draw [thick, fill=magenta!70!white] (0.7,0) \meanderPath{0.8,-3.5,4.2,-0.1,1.4,-2.9,3.0,-1.3,0.2,-4.1,3.6,-0.7}; \draw [thick, fill=background] (0.3,0) \meanderPath{1.2,-3.1,2.8,-1.5,1.8,-2.5,3.4,-0.9,0.6,-3.7,4.0,-0.3} (0.4,0) \meanderPath{-3.9,3.8,-0.5,1.0,-3.3,2.6,-1.7,1.6,-2.7,3.2,-1.1,0.4}; \draw [thin] (0,0) -- (4.3,0); \end{tikzpicture} \begin{tikzpicture}[scale=0.75*\hsize/43mm, thick/.style={line width=0.7mm}] \draw [thin] (0,0) -- (4.3,0); \draw [magenta!70!black, thick] (2.15,0) circle (0.014); \draw [magenta!70!black, thick] (0.75,0) \meanderPath{-3.55,4.15,-0.15,1.35,-2.95}; \draw [thick] (1.95,0) \meanderPath{2.35,-1.95}; \draw [thick] (0.35,0) \meanderPath{ 1.15,-3.15,2.75,-1.55,1.75,-2.55,3.35,-0.95,0.55,-3.75,3.95,-0.35}; \end{tikzpicture} \caption{\label{figCollapsedRainbow} Collapse of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(7,2,3,5,4)$. From top to bottom: $\ensuremath{\mathop{\mathcal{RM}\strut}}$, coloured domains, and $\ensuremath{\mathop{\mathcal{CRM}\strut}}$. 
The collapsed bi-rainbow meander consists of one path, two cycles, and an isolated point (counted as a second path).} \end{figure} \begin{defn}[Collapsed bi-rainbow meander] The collapsed bi-rainbow meander, denoted by $\ensuremath{\mathop{\mathcal{CRM}\strut}} \;=\; \ensuremath{\mathop{\mathcal{CRM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$, arises when we collapse pairs of arcs of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ to single arcs, that is when we collapse each coloured component, described above, into an arc or a point. The value $\ensuremath{\alpha}_\ell$ is the number of arcs in the $\ell$-th upper family of $\ensuremath{\mathop{\mathcal{RM}\strut}}$ and the number of intersections with the axis in the $\ell$-th upper family of $\ensuremath{\mathop{\mathcal{CRM}\strut}}$. The collapsed bi-rainbow meander is again composed of several rainbow arc collections above and a single rainbow arc collection below the axis. However, if $\ensuremath{\alpha}_\ell$ is odd, then the innermost ``arc'' of this upper rainbow collection is a single point, which we call \emph{semi-isolated}. Similarly, if $\ensuremath{\alpha}=\sum_{\ell=1}^n \ensuremath{\alpha}_\ell$ is odd then the innermost ``arc'' of the lower rainbow collection is a semi-isolated point. \end{defn} Combining the arc collections of $\ensuremath{\mathop{\mathcal{CRM}\strut}}$ above and below the axis, we find again Jordan curves. These curves can either be closed \emph{cycles} or open \emph{paths} ending in semi-isolated points. If $\ensuremath{\alpha} = 2\sum_{\ell=1}^{m-1} \ensuremath{\alpha}_\ell + \ensuremath{\alpha}_m$ is odd, then the lower semi-isolated point coincides with the upper semi-isolated point of the $m$-th rainbow family and becomes an isolated point of the collapsed bi-rainbow meander. We consider such an isolated point to be a path. 
\begin{thm}\label{thmCollapsedRainbowComponents} The number $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of connected components (Jordan curves) of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ equals the sum of the number of paths and twice the number of cycles of the collapsed bi-rainbow meander $\ensuremath{\mathop{\mathcal{CRM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$: \begin{equation}\label{eqCollapsedRainbowComponents} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}) \;=\; \pathComponents(\ensuremath{\mathop{\mathcal{CRM}\strut}}) + 2 \cycleComponents(\ensuremath{\mathop{\mathcal{CRM}\strut}}). \end{equation} \end{thm} \begin{proof} We reverse the collapse from $\ensuremath{\mathop{\mathcal{RM}\strut}}$ to $\ensuremath{\mathop{\mathcal{CRM}\strut}}$. This replaces the curves of $\ensuremath{\mathop{\mathcal{CRM}\strut}}$ by ``thick'' curves which are non-intersecting domains in the plane. The boundary curves of these domains are the Jordan curves of the original bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}$. A thickened path is a simply-connected domain; its boundary is a single Jordan curve. A thickened cycle is a deformed ring domain; its boundary consists of two Jordan curves. \end{proof} Note in particular the special case of an isolated point of $\ensuremath{\mathop{\mathcal{CRM}\strut}}$. Its ``thick'' counterpart is a disk bounded by a single Jordan curve. Indeed, an isolated point is created by the innermost arc of an upper family matching the innermost arc of the lower family and thus forming a Jordan curve. Let us count again the number of paths of the collapsed bi-rainbow meander. Each path has two endpoints. These endpoints must be semi-isolated points of the upper or lower arc collections.
Semi-isolated points are created by the innermost arcs of odd rainbow families. We find: \begin{cor}\label{thmRainbowComponentsParity} The parity of the number $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of connected components (Jordan curves) of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ is given as half the number of odd rainbow families: \begin{equation}\label{eqRainbowComponentsParity} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}) \;\equiv\; \pathComponents(\ensuremath{\mathop{\mathcal{CRM}\strut}}) \quad \pmod 2, \end{equation} where $2\pathComponents$ is the number of odd components of $(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n,\ensuremath{\alpha})$, $\ensuremath{\alpha}=\sum_{\ell=1}^n \ensuremath{\alpha}_\ell$. \end{cor} Note that $\ensuremath{\alpha}$ is odd if, and only if, the number of odd entries among $(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ is odd. Thus, the number of odd components of $(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n,\ensuremath{\alpha})$ is always even. \begin{cor}\label{thmConnectedRainbowParity} In particular, a connected bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ (given by a single Jordan curve) must have exactly one or two odd entries among $(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$. 
\end{cor} \begin{figure} \centering \begin{tikzpicture}[scale=0.99*\hsize/43cm] \draw [thick, fill=background] (1,0) \meanderPath{10,-11,14,-7,8,-13,12,-9,6,-15,20,-1} (2,0) \meanderPath{-19,18,-3,4,-17,16,-5,2}; \draw [thin] (0,0) -- (21,0); \begin{scope}[shift={(22,0)}, thick/.style={line width=0.7mm}] \draw [thick] (7.5,0) \meanderPath{-13.5,11.5,-9.5,1.5,-19.5,15.5,-5.5} -- (5.5,2.0); \draw [thick] \meanderArc{3.5}{17.5} -- (17.5,1.0); \draw [thin] (0,0) -- (21,0); \end{scope} \end{tikzpicture} \caption{\label{figCollapsedMeander} General collapsed meander with branched curves.} \end{figure} General meanders can also be collapsed in a similar fashion. The resulting curves, however, will in general be branched. See figure \ref{figCollapsedMeander} for the collapse of our example (\ref{eqBracketFlipExample}). The connected components of the collapsed meander must then be counted by the number of components into which the plane is split by the branched curve. We obtain a result similar to theorem \ref{thmCollapsedRainbowComponents}: \begin{thm}\label{thmCollapsedGeneralComponents} The number $\ensuremath{\mathop{\mathcal{Z}\strut}}((\ensuremath{\alpha}_1^[,\ensuremath{\alpha}_1^]),\ldots,(\ensuremath{\alpha}_n^[,\ensuremath{\alpha}_n^]))$ of connected components (Jordan curves) of the general meander $\ensuremath{\mathop{\mathcal{M}\strut}}((\ensuremath{\alpha}_1^[,\ensuremath{\alpha}_1^]),\ldots,(\ensuremath{\alpha}_n^[,\ensuremath{\alpha}_n^]))$ equals the number of connected components of the collapsed meander $\ensuremath{\mathop{\mathcal{CM}\strut}}$, counted by their multiplicity. Here, the multiplicity of a (possibly branched) curve of $\ensuremath{\mathop{\mathcal{CM}\strut}}$ is given by the number of connected components of its complement in the plane. \end{thm} A similar construction is used in \cite{CautosJacksonn2003-TemperleyLieb} to relate meanders and their Temperley-Lieb counterparts to planar partitions. 
In fact, its inverse is used to represent a planar partition by a Temperley-Lieb algebra. Theorem \ref{thmCollapsedGeneralComponents} is found there in the form \[ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{M}\strut}}) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{CM}\strut}}) + \ensuremath{\mathop{\mathcal{Z}\strut}}(\mathds{R}^2\setminus\ensuremath{\mathop{\mathcal{CM}\strut}}) - 1. \] In other words, the number of Jordan curves of the meander equals the number of coloured and bounded uncoloured regions. \section{Nose retractions of bi-rainbow meanders\label{secNoseRetractions}} Let $\ensuremath{\mathop{\mathcal{RM}\strut}}=\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ be again an arbitrary bi-rainbow meander with $n$ rainbow families of given numbers of arcs above and one rainbow of $\ensuremath{\alpha}=\sum_{k=1}^n \ensuremath{\alpha}_k$ arcs below the horizontal line, see figure \ref{figRainbowMeander}. In this section, we discuss deformations of the meander $\ensuremath{\mathop{\mathcal{RM}\strut}}$ which result again in a bi-rainbow meander with the same number $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}) = \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of connected components. The general idea is to retract parts of upper rainbow families, which we call \emph{noses}, through the horizontal axis. Note how the retraction of a single arc through the horizontal axis removes two intersection points. In the PDE application of section \ref{secMeanderShooting}, this corresponds to a saddle-node bifurcation in which the associated stationary solutions of the PDE disappear.
\begin{figure} \centering \begin{tikzpicture}[scale=0.75*\hsize/27cm] \node at (0,0) {\makebox(0,0)[br]{a)\strut}}; \fill [background] (11,0) \meanderPath{16,21,-6,1,-26,11}; \draw [thin] (0,0) -- (27,0); \drawMeanderPath{16}{21,-6,1,-26,11} \drawMeanderArc{4}{3} \drawMeanderArc{19}{18} \draw [line width=1mm, ->] (3.5,0) \meanderPath{-23.5,13.5}; \node [above] at ( 3.5,1.25) {$\ensuremath{\alpha}_1$}; \node [above] at (18.5,3.75) {$\ensuremath{\alpha}_{n} > 2\ensuremath{\alpha}_1$}; \node at ( 3.5, 0.75 ) {$\vdots$\strut}; \node at (18.5, 0.75 ) {$\vdots$\strut}; \node at (18.5, 1.875) {$\vdots$\strut}; \node at (18.5, 3.125) {$\vdots$\strut}; \node at (13.5,-5.625) {$\vdots$\strut}; \node at (13.5,-4.375) {$\vdots$\strut}; \node at (13.5,-1.875) {\scalebox{2}{$\vdots$\strut}}; \node at ( 8.5, 1 ) {\scalebox{2}{$\cdots$\strut}}; \end{tikzpicture} \begin{tikzpicture}[scale=0.75*\hsize/23cm] \node at (0,0) {\makebox(0,0)[br]{b)\strut}}; \fill [background] (11,0) \meanderPath{16,17,-6,1,-22,11}; \draw [thin] (0,0) -- (23,0); \drawMeanderPath{16}{17,-6,1,-22,11} \drawMeanderArc{4}{3} \draw [line width=1mm, ->] (3.5,0) \meanderPath{-19.5,13.5}; \node [above] at ( 3.5,1.25) {$\ensuremath{\alpha}_1$}; \node [above] at (16.5,2.75) {$\ensuremath{\alpha}_{n} = 2\ensuremath{\alpha}_1$}; \node at ( 3.5, 0.75 ) {$\vdots$\strut}; \node at (16.5, 0.875) {$\vdots$\strut}; \node at (16.5, 2.125) {$\vdots$\strut}; \node at (11.5,-4.625) {$\vdots$\strut}; \node at (11.5,-3.375) {$\vdots$\strut}; \node at (11.5,-1.375) {\scalebox{2}{$\vdots$\strut}}; \node at ( 8.5, 1 ) {\scalebox{2}{$\cdots$\strut}}; \end{tikzpicture} \begin{tikzpicture}[scale=0.75*\hsize/31cm] \node at (0,0) {\makebox(0,0)[br]{c)\strut}}; \fill [background] (17,0) \meanderPath{22,23,-8,3,-28,17}; \draw [thin] (0,0) -- (31,0); \drawMeanderPath{22}{23,-8,3,-28,17} \drawMeanderPath{15}{30,-1,10} \drawMeanderArc{6}{5} \draw [line width=1mm, ->] (5.5,0) \meanderPath{-25.5,19.5}; \node [above] at ( 5.5,2.25) 
{$\ensuremath{\alpha}_1$}; \node [above] at (22.5,3.75) {$\ensuremath{\alpha}_1 < \ensuremath{\alpha}_{n} < 2\ensuremath{\alpha}_1$}; \node at ( 5.5, 0.75 ) {\scalebox{0.9}{$\vdots$\strut}}; \node at ( 5.5, 1.75 ) {\scalebox{0.9}{$\vdots$\strut}}; \node at (22.5, 0.875) {\scalebox{0.9}{$\vdots$\strut}}; \node at (22.5, 2.125) {\scalebox{0.9}{$\vdots$\strut}}; \node at (22.5, 3.25 ) {\scalebox{0.9}{$\vdots$\strut}}; \node at (15.5,-5.625) {\scalebox{0.9}{$\vdots$\strut}}; \node at (15.5,-4.375) {\scalebox{0.9}{$\vdots$\strut}}; \node at (15.5,-6.75 ) {\scalebox{0.9}{$\vdots$\strut}}; \node at (15.5,-1.875) {\scalebox{2}{$\vdots$\strut}}; \node at (12.5, 1.2 ) {\scalebox{2}{$\cdots$\strut}}; \end{tikzpicture} \caption{\label{figOuterRetraction} Outer nose retractions of bi-rainbow meanders.} \end{figure} \begin{lem}[Outer nose retraction]\label{thmOuterNoseRetraction} The number $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of connected components of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ yields: \begin{equation}\label{eqOuterNoseRetraction} \begin{array}{l} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_n) \; = \\ \qquad = \left\{\begin{array}{lll} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1}) + \ensuremath{\alpha}_1, & \ensuremath{\alpha}_1 = \ensuremath{\alpha}_n, & (a) \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(2\ensuremath{\alpha}_1-\ensuremath{\alpha}_n,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_1), & \ensuremath{\alpha}_1 < \ensuremath{\alpha}_n < 2\ensuremath{\alpha}_1, & (b) \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_1), & 2\ensuremath{\alpha}_1 = \ensuremath{\alpha}_n, & (c) \\ 
\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_1, \ensuremath{\alpha}_n-2\ensuremath{\alpha}_1), \qquad & 2\ensuremath{\alpha}_1 < \ensuremath{\alpha}_n. & (d) \end{array}\right. \end{array}\hspace*{-1em} \end{equation} By reflection, $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_n) = \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_n,\ensuremath{\alpha}_{n-1},\ldots,\ensuremath{\alpha}_2,\ensuremath{\alpha}_1)$, the case $\ensuremath{\alpha}_n < \ensuremath{\alpha}_1$ is included. \end{lem} \begin{proof} In case (a), $\ensuremath{\alpha}_1 = \ensuremath{\alpha}_n$, we remove $\ensuremath{\alpha}_1$ outer cycles, i.e.~connected components of the meander. In case (d), $2\ensuremath{\alpha}_1 \le \ensuremath{\alpha}_n$, we retract the full first (leftmost) upper rainbow family of size $\ensuremath{\alpha}_1$, as shown in figure \ref{figOuterRetraction}a. We hit the right half of the last (rightmost) upper rainbow family and retract further until the retracted nose of size $\ensuremath{\alpha}_1$ arrives left of the remaining $\ensuremath{\alpha}_n-2\ensuremath{\alpha}_1$ arcs of the last rainbow family. In the boundary case (c), $2\ensuremath{\alpha}_1 = \ensuremath{\alpha}_n$, nothing remains of the last rainbow family, see figure \ref{figOuterRetraction}b. In case (b), $\ensuremath{\alpha}_1 < \ensuremath{\alpha}_n < 2\ensuremath{\alpha}_1$, we retract only the inner part of the first rainbow family, such that we just hit the innermost arc of the last rainbow family, as shown in figure \ref{figOuterRetraction}c. Thereby, after retraction, the last rainbow family will remain a (non-branched) rainbow family. To hit the innermost arc, the retracted nose must consist of $\ensuremath{\alpha}_n-\ensuremath{\alpha}_1$ arcs.
Therefore, $\ensuremath{\alpha}_1-(\ensuremath{\alpha}_n-\ensuremath{\alpha}_1)=2\ensuremath{\alpha}_1-\ensuremath{\alpha}_n$ arcs of the first rainbow family and $\ensuremath{\alpha}_n-(\ensuremath{\alpha}_n-\ensuremath{\alpha}_1)=\ensuremath{\alpha}_1$ arcs of the last rainbow family remain. \end{proof} \begin{figure} \centering \begin{tikzpicture}[ scale=0.5 ]\small \begin{scope}[ shift={(0,0)} ] \draw [thick] (2,6)--(2,2)--(0,2)--(0,0)--(2,0)--(2,2)--(5,2)--(5,5)--(7,5); \draw [thin, dotted] (0,0)--(6,6); \draw [thick, dashed] (2,0)--(2,2)--(4,2); \node [above right] at (0,2) {$\alpha_1$}; \node [below left ] at (2,6) {$\alpha_2$}; \node [below right] at (1,0) {$\alpha_n=\alpha_1$}; \node [below right] at (4,2) {$\alpha_{n-1}$}; \path [pattern=north west lines] (0,0) rectangle ++(2,2); \end{scope} \begin{scope}[ shift={(8,0)} ] \draw [thick] (2,6)--(2,2)--(0,2)--(0,0)--(3,0)--(3,3)--(5,3)--(5,5)--(7,5); \draw [thin, dotted] (0,0)--(6,6); \draw [thin, dashed] (2,0)--(2,2); \draw [thick, densely dashed] (1,2)--(1,1)--(3,1); \node [above right] at (0,2) {$\alpha_1$}; \node [below left ] at (2,6) {$\alpha_2$}; \node [below right] at (4,3) {$\alpha_{n-1}$}; \node [below right] at (1,0) {$\alpha_1<\alpha_n<2\alpha_1$}; \path [pattern=north west lines] (0,0) rectangle ++(2,2); \path [pattern=north east lines] (2,0) rectangle ++(1,1); \path [pattern=grid] (1,1) rectangle ++(1,1); \end{scope} \begin{scope}[ shift={(16,0)} ] \draw [thick] (2,6)--(2,2)--(0,2)--(0,0)--(4,0)--(4,4)--(7,4); \draw [thin, dotted] (0,0)--(6,6); \draw [thin, dashed] (2,0)--(2,2); \draw [thick, densely dashed] (2,2)--(4,2); \node [above right] at (0,2) {$\alpha_1$}; \node [below left ] at (2,6) {$\alpha_2$}; \node [below right] at (5,4) {$\alpha_{n-1}$}; \node [below right] at (2,0) {$\alpha_n=2\alpha_1$}; \path [pattern=north west lines] (0,0) rectangle ++(2,2); \path [pattern=north east lines] (2,0) rectangle ++(2,2); \end{scope} \begin{scope}[ shift={(24,0)} ] \draw [thick] 
(2,6)--(2,2)--(0,2)--(0,0)--(5,0)--(5,5)--(7,5); \draw [thin, dotted] (0,0)--(6,6); \draw [thin, dashed] (2,0)--(2,2) (2,3)--(3,3); \draw [thick, densely dashed] (2,2)--(3,2)--(3,3)--(5,3); \node [above right] at (0,2) {$\alpha_1$}; \node [below left ] at (2,6) {$\alpha_2$}; \node [below right] at (3,0) {$\alpha_n>2\alpha_1$}; \path [pattern=north west lines] (0,0) rectangle ++(2,2); \path [pattern=north east lines] (2,0) rectangle ++(3,3); \path [pattern=grid] (2,2) rectangle ++(1,1); \end{scope} \end{tikzpicture} \caption{\label{figBilliardOuterRetraction} Cutting of Cartesian billiards to resemble outer nose retractions.} \end{figure} In terms of Cartesian billiards, section \ref{secBilliard}, rainbow families which do not encompass the midpoint are represented as triangles attached to the diagonal. Cases (a) and (c) of (\ref{eqOuterNoseRetraction}) can be achieved by cutting off squares from the billiard domain, see figure \ref{figBilliardOuterRetraction}. The removal of squares which have three sides on the billiard boundary does not change the connectivity of trajectories. Indeed the exit point of an arbitrary trajectory on the square coincides with its entry point, with reflected directions. Cases (b) and (d) can be represented by the removal of two and the subsequent attachment of one square. In case (d), however, we need $\alpha_2>\alpha_n-2\alpha_1$ for the second square to fit inside the billiard domain; otherwise, the procedure fails. The Cartesian billiard benefits at this point from the relation to meander curves, where the nose retraction is always possible. We already see that this lemma provides a strict reduction of the meander. Therefore, its iteration will determine the number of connected components after finitely many steps. In section \ref{secAlgorithms}, we will improve the case $2\ensuremath{\alpha}_1 < \ensuremath{\alpha}_n$ to find an algorithm of logarithmic complexity. 
In lemma \ref{thmSequenceConnected}, below, we will use one of the inverse operations of (\ref{eqOuterNoseRetraction}), \begin{equation}\label{eqInverseOuterNoseRetraction} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_n,\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{n-1},2\ensuremath{\alpha}_n) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(2\ensuremath{\alpha}_n,\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{n-1},2\ensuremath{\alpha}_n,\ensuremath{\alpha}_n), \end{equation} to construct a particular sequence of connected bi-rainbow meanders. Instead of retracting the outer noses, as in the lemma above, we now want to retract an inner nose. The middle rainbow turns out to be a particularly useful choice. To determine the middle upper rainbow family, we define \begin{equation}\label{eqMiddleRainbowLR} \begin{array}{rcl} L(\ell) &:=& \ensuremath{\alpha}_1+\ensuremath{\alpha}_2+\cdots+\ensuremath{\alpha}_{\ell-1},\\ R(\ell) &:=& \ensuremath{\alpha}_{\ell+1}+\cdots+\ensuremath{\alpha}_n, \end{array} \end{equation} the total numbers of arcs left and right of the $\ell$-th upper rainbow family. The index $m^*$ of the middle rainbow is now given as the unique value such that \begin{equation}\label{eqMiddleRainbow} L^* = L(m^*) < \ensuremath{\alpha}/2 \le L(m^*+1). \end{equation} For later reference, we note \begin{equation}\label{eqMiddleRainbowR} R^* \;=\; R(m^*) \le \ensuremath{\alpha}/2, \qquad L^*+\alpha_{m^*}+R^* \;=\; \ensuremath{\alpha}.
\end{equation} \begin{lem}[Inner nose retraction]\label{thmInnerNoseRetraction} The number $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of connected components of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$, with $L^*, R^*, m^*$ as above, satisfies: \begin{equation}\label{eqInnerNoseRetraction} \begin{array}{l} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \; = \\ \quad = \left\{\begin{array}{lll} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{m^*-1},\ensuremath{\alpha}_{m^*+1},\ldots,\ensuremath{\alpha}_n) + \ensuremath{\alpha}_{m^*}, & |L^*\!-\!R^*| = 0, & (a) \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{m^*-1},\ensuremath{\alpha}_{m^*+1},\ldots,\ensuremath{\alpha}_n), & |L^*\!-\!R^*| = \ensuremath{\alpha}_{m^*}, & (b) \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{m^*-1},\ensuremath{\alpha}_{m^*}-|L^*\!-\!R^*|, \ensuremath{\alpha}_{m^*+1},\ldots,\ensuremath{\alpha}_n), & \mbox{otherwise}. & (c) \end{array}\right.
\end{array}\hspace*{-1em} \end{equation} \end{lem} \begin{figure} \centering \begin{tikzpicture}[scale=0.75*\hsize/23cm] \fill [background] (9,0) \meanderPath{-14,-15,20,-9}; \draw [thin] (6,0) -- (29,0); \drawMeanderPath{14}{-15,20,-9} \drawMeanderArc{24}{11} \drawMeanderArc{18}{17} \draw [line width=1mm, ->] (17.5,0) \meanderPath{-11.5}; \node [above] at (17.5,3.25) {$\ensuremath{\alpha}_{m^*}$}; \node at (17.5, 0.725) {$\vdots$\strut}; \node at (17.5, 2.25 ) {$\vdots$\strut}; \node at (14.5,-0.875) {$\vdots$\strut}; \node at (14.5,-2.125) {$\vdots$\strut}; \node at (14.5,-3.375) {\makebox(0,0){$\vdots$\strut}}; \node at ( 8 , 1.5 ) {\scalebox{2}{$\cdots$\strut}}; \node at (27 , 1.5 ) {\scalebox{2}{$\cdots$\strut}}; \end{tikzpicture} \caption{\label{figInnerRetraction} Inner nose retraction of a bi-rainbow meander.} \end{figure} \begin{proof} If $L^*=R^*$, then the middle family forms $\ensuremath{\alpha}_{m^*}$ closed cycles, as each arc of the family matches an arc of the lower family. Otherwise, we retract the inner part of the $m^*$-th upper rainbow family, such that we just hit the innermost arc of the lower rainbow family, as in figure \ref{figInnerRetraction}. To achieve this, the retracted nose must consist of $|L^*-R^*|$ arcs. Then, $\ensuremath{\alpha}_{m^*}-|L^*-R^*|$ arcs remain in the middle upper rainbow family. The lower family remains a (non-branched) rainbow family. In the special case $|L^*-R^*| = \ensuremath{\alpha}_{m^*}$, this procedure retracts the full $m^*$-th rainbow family. In fact, in this case, $R^*=\ensuremath{\alpha}/2$ and the midpoint lies between the $m^*$-th and its right neighbouring family. Either of the two families could be removed.
\end{proof} \begin{figure} \centering \begin{tikzpicture}[ scale=0.5 ]\small \begin{scope}[ shift={(-6.5,1)} ] \draw [thick] (-5,0)--(1.5,0)--(1.5,1.5)--(0,1.5)--(0,-5)--(-5,-5); \draw [thin, dotted] (-5,-5)--(1.5,1.5); \node [above right] at (0,1.5) {$\alpha_{m^*}$}; \node [above right] at (-5, 0) {$\alpha_{m^*-1}$}; \node [above left ] at ( 0,-5) {$\alpha_{m^*+1}$}; \node [below left ] at ( 0, 0) {$R-L=0$}; \path [pattern=north west lines] (0,0) rectangle ++(1.5,1.5); \end{scope} \begin{scope}[ shift={(0,0)} ] \draw [thick] (-4,0)--(0,0)--(0,3)--(2,3)--(2,-3)--(-3,-3)--(-3,-4); \draw [thin, dotted] (-4,-4)--(2,2); \draw [thick, densely dashed] (0,1)--(2,1); \node [above right] at ( 0, 3) {$\alpha_{m^*}$}; \node [above right] at (-4, 0) {$\alpha_{m^*-1}$}; \node [above left ] at ( 2,-3) {$\alpha_{m^*+1}>R-L$}; \node [below left ] at ( 2, 0) {$0<R-L<\alpha_{m^*}$}; \path [pattern=north west lines] (0,1) rectangle ++(2,2); \end{scope} \begin{scope}[ shift={(7,0)} ] \draw [thick] (-4,0)--(0,0)--(0,3)--(3,3)--(3,-2)--(-2,-2)--(-2,-4); \draw [thin, dotted] (-4,-4)--(3,3); \draw [thick, densely dashed] (0,0)--(3,0); \node [above right] at ( 0, 3) {$\alpha_{m^*}$}; \node [above right] at (-4, 0) {$\alpha_{m^*-1}$}; \node [below right] at (-2,-2) {$\alpha_{m^*+1}>R-L$}; \node [below left ] at ( 3, 0) {$R-L=\alpha_{m^*}$}; \path [pattern=north west lines] (0,0) rectangle ++(3,3); \end{scope} \begin{scope}[ shift={(14,-1)} ] \draw [thick] (-3,0)--(0,0)--(0,4)--(3,4)--(3,2)--(2,2)--(2,1)--(1,1)--(1,-3) --(-3,-3); \draw [thin, dotted] (-3,-3)--(3,3); \draw [thin, dashed] (0,1)--(3,1)--(3,2); \node [above right] at ( 0, 4) {$\alpha_{m^*}$}; \node [above right] at (-3, 0) {$\alpha_{m^*-1}$}; \node [above left ] at ( 1,-3) {$\alpha_{m^*+3}$}; \path [pattern=dots] (0,1) rectangle ++(3,3); \draw [ shift={(2.5,1.5)}, scale=0.5ex/1cm, x={(1,-0.3)}, y={(0.3,1)}, very thick ] (0,0) -- ++(0,-15) -- ++(5,8) -- ++(0,-12) ++(-1,1) -- ++(-1,1) -- ++(2,-7) -- ++(2,7) -- ++(-2,-2) -- 
++(-1,1); \end{scope} \end{tikzpicture} \caption{\label{figBilliardInnerRetraction} Cutting of Cartesian billiards to resemble inner nose retractions.} \end{figure} We can again try to rephrase the inner nose retraction in terms of Cartesian billiards, section \ref{secBilliard}. Note that the middle rainbow $m^*$ contains the upper arcs which encompass the midpoint. It is the only rainbow family which is not represented by a triangle over the diagonal. We find simple cuts of single squares, see figure \ref{figBilliardInnerRetraction}. Cases (b) and (c) require, however, that the new middle family is the old one or a direct neighbour. Otherwise, i.e.~if the neighbouring family is too small, there is no full square available, as seen in the last picture of figure \ref{figBilliardInnerRetraction}. Again, the meander viewpoint is the preferred one. Note also the inverse operation of (\ref{eqInnerNoseRetraction})(c). For arbitrary $\ell$ we find \begin{equation}\label{eqInverseInnerNoseRetraction} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{\ell-1},\ensuremath{\alpha}_\ell+|L(\ell)-R(\ell)|, \ensuremath{\alpha}_{\ell+1},\ldots,\ensuremath{\alpha}_n). \end{equation} Indeed, the $\ell$-th family of size $\ensuremath{\alpha}_\ell+|L(\ell)-R(\ell)|$ becomes the middle rainbow family $m^*$ on the right-hand side, without changing $L$ and $R$. \section{Non-/existing gcd formulae\label{secNoGcd}} In this section, we try to express the number $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}) = \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of connected components of a bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}=\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ by the greatest common divisor of expressions in $\ensuremath{\alpha}_\ell$.
Indeed, for $n\le3$: \begin{equation}\label{eqGcdOfRainbowsAdv} \begin{array}{lcl} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1) &=& \ensuremath{\alpha}_1, \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2) &=& \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1, \ensuremath{\alpha}_2), \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ensuremath{\alpha}_3) &=& \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1+\ensuremath{\alpha}_2,\ensuremath{\alpha}_2+\ensuremath{\alpha}_3), \end{array} \end{equation} see proposition \ref{thmGcdOfRainbows} below. Before we show that there do not exist similar expressions of $\ensuremath{\mathop{\mathcal{Z}\strut}}$ for $n\ge 4$ in theorem \ref{thmNoGcd}, we establish a particular family of examples and a scaling property of $\ensuremath{\mathop{\mathcal{Z}\strut}}$ in the following two preparatory lemmata. \begin{lem}[Sequence of connected meanders]\label{thmSequenceConnected} Let $\tilde{\ensuremath{\alpha}} \ge 2$, $\ensuremath{\alpha}^* \ge 1$, and $\tilde{n}\ge 0$ be arbitrary integers.
Then the bi-rainbow meanders \begin{equation}\label{eqSequenceConnected} \begin{array}{lll} \ensuremath{\mathop{\mathcal{RM}\strut}}( \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}}, \tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}}, \tilde{\ensuremath{\alpha}} ) & , & n = 2\tilde{n}+3 \mbox{ odd}, \\ \ensuremath{\mathop{\mathcal{RM}\strut}}( \tilde{\ensuremath{\alpha}}, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}}, \tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}+1}) & , & n = 2\tilde{n}+4 \mbox{ even}, \end{array} \end{equation} are connected, i.e.~$\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}) = 1$. In particular, the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}( \tilde{\ensuremath{\alpha}}, \tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, 2\tilde{\ensuremath{\alpha}} )$ with four upper families is connected. \end{lem} \begin{proof} We prove the connectedness of the meanders by induction over $n$. The bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, \tilde{\ensuremath{\alpha}})$ is connected due to the $\mathop{\gcd}\nolimits$-formula (\ref{eqGcdOfRainbowsAdv}).
The inverse outer nose retraction (\ref{eqInverseOuterNoseRetraction}) yields \[ \begin{array}{rcl} \ensuremath{\mathop{\mathcal{Z}\strut}}( \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}}, \tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}}, \tilde{\ensuremath{\alpha}} ) &=& \ensuremath{\mathop{\mathcal{Z}\strut}}( \tilde{\ensuremath{\alpha}}, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}}, \tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}+1}) \\ &=& \ensuremath{\mathop{\mathcal{Z}\strut}}( \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}+1}, \tilde{\ensuremath{\alpha}}-1, \ensuremath{\alpha}^*, \underbrace{2\tilde{\ensuremath{\alpha}},\ldots,2\tilde{\ensuremath{\alpha}}}_{\tilde{n}+1}, \tilde{\ensuremath{\alpha}} ). \end{array} \] This proves the claim with the above base case $\tilde{n}=0$.
\end{proof} \begin{figure} \centering \setlength{\unitlength}{0.025\textwidth} \begin{tikzpicture}[scale=0.99*\hsize/39cm] \begin{scope}[shift={(4,0)}] \draw [thin] (-4,0) -- (5,0); \drawMeanderArc { -2}{ -3} \drawMeanderArc { 0}{ -1} \drawMeanderRainbow{ 4}{ 1}{ 4} \drawMeanderRainbow{ -3}{ 4}{ 8} \end{scope} \begin{scope}[shift={(6,-4)}] \draw [thin] (-6,0) -- (7,0); \drawMeanderRainbow{ -2}{ -5}{ 4} \drawMeanderArc { 0}{ -1} \drawMeanderRainbow{ 4}{ 1}{ 4} \drawMeanderArc { 6}{ 5} \drawMeanderRainbow{ -5}{ 6}{12} \end{scope} \begin{scope}[shift={(8,-9)}] \draw [thin] (-8,0) -- (9,0); \drawMeanderArc { -6}{ -7} \drawMeanderRainbow{ -2}{ -5}{ 4} \drawMeanderArc { 0}{ -1} \drawMeanderRainbow{ 4}{ 1}{ 4} \drawMeanderRainbow{ 8}{ 5}{ 4} \drawMeanderRainbow{ -7}{ 8}{16} \end{scope} \begin{scope}[shift={(28,-8)}] \draw [thin] (-10,0) -- (11,0); \drawMeanderRainbow{ -6}{ -9}{ 4} \drawMeanderRainbow{ -2}{ -5}{ 4} \drawMeanderArc { 0}{ -1} \drawMeanderRainbow{ 4}{ 1}{ 4} \drawMeanderRainbow{ 8}{ 5}{ 4} \drawMeanderArc { 10}{ 9} \drawMeanderRainbow{ -9}{ 10}{20} \end{scope} \begin{scope}[shift={(26,0)}] \draw [thin] (-12,0) -- (13,0); \drawMeanderArc {-10}{-11} \drawMeanderRainbow{ -6}{ -9}{ 4} \drawMeanderRainbow{ -2}{ -5}{ 4} \drawMeanderArc { 0}{ -1} \drawMeanderRainbow{ 4}{ 1}{ 4} \drawMeanderRainbow{ 8}{ 5}{ 4} \drawMeanderRainbow{ 12}{ 9}{ 4} \drawMeanderRainbow{-11}{ 12}{24} \end{scope} \end{tikzpicture} \caption{\label{figSequenceConnected} Sequence of connected meanders generated from $\ensuremath{\mathop{\mathcal{RM}\strut}}(1,1,2)$ by iterated inverse outer nose retractions.} \end{figure} Note how $\ensuremath{\alpha}^*$ always represents the middle upper family, as introduced in (\ref{eqMiddleRainbow}), with $L^* = R^*-1$. See figure \ref{figSequenceConnected} for an illustration.
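The two families of (\ref{eqSequenceConnected}) are also easy to generate and test numerically. The sketch below uses helper names of our own choosing; \texttt{z} is a compact brute-force curve count that lays out the upper rainbow families over consecutive axis points and follows each curve through the single lower rainbow.

```python
def z(alphas):
    """Compact brute-force count of Jordan curves of RM(alpha_1,...,alpha_n)."""
    total, upper, offset = sum(alphas), {}, 0
    for a in alphas:
        for k in range(1, a + 1):
            upper[offset + k] = offset + 2 * a + 1 - k
            upper[offset + 2 * a + 1 - k] = offset + k
        offset += 2 * a
    seen, components = set(), 0
    for start in range(1, 2 * total + 1):
        if start not in seen:
            components, p = components + 1, start
            while p not in seen:
                seen.add(p)
                q = upper[p]
                seen.add(q)
                p = 2 * total + 1 - q        # lower rainbow: i <-> 2*alpha+1-i
    return components

def member(n_tilde, a_tilde, a_star, even=False):
    """Upper-family sizes of the meanders (eqSequenceConnected):
    n = 2*n_tilde + 3 families (odd case) or n = 2*n_tilde + 4 (even case)."""
    block = [2 * a_tilde] * n_tilde
    if even:
        return [a_tilde] + block + [a_tilde - 1, a_star] + block + [2 * a_tilde]
    return block + [a_tilde - 1, a_star] + block + [a_tilde]

# every member should be connected: Z = 1
assert all(z(member(nt, at, s, even=e)) == 1
           for nt in range(3) for at in (2, 3) for s in (1, 2, 5)
           for e in (False, True))
```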
\begin{lem}[Scaling]\label{thmComponentsScaling} Let $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1, \ldots, \ensuremath{\alpha}_n)$ be an arbitrary bi-rainbow meander and $\lambda>0$ a positive integer. Then, the number of connected components of the bi-rainbow meander $\lambda\ensuremath{\mathop{\mathcal{RM}\strut}}:= \ensuremath{\mathop{\mathcal{RM}\strut}}(\lambda\ensuremath{\alpha}_1,\ldots,\lambda\ensuremath{\alpha}_n)$ scales linearly: \begin{equation}\label{eqComponentsScaling} \ensuremath{\mathop{\mathcal{Z}\strut}}(\lambda\ensuremath{\mathop{\mathcal{RM}\strut}}) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\lambda\ensuremath{\alpha}_1, \ldots, \lambda\ensuremath{\alpha}_n) \;=\; \lambda \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1, \ldots, \ensuremath{\alpha}_n) \;=\; \lambda \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathcal{RM}\strut}}). \end{equation} \end{lem} \begin{proof} Let $(1,\ldots,2\ensuremath{\alpha})$ again denote the intersections of the original bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}$ with the horizontal axis. The scaled meander $\lambda\ensuremath{\mathop{\mathcal{RM}\strut}}$ replaces each arc of $\ensuremath{\mathop{\mathcal{RM}\strut}}$ by $\lambda$ arcs. If the original arc of $\ensuremath{\mathop{\mathcal{RM}\strut}}$ connects $a$ and $b$ on the axis, then the corresponding arcs of $\lambda\ensuremath{\mathop{\mathcal{RM}\strut}}$ connect $\lambda (a-1) + \ell$ with $\lambda b + 1 - \ell$, for $\ell = 1,\ldots,\lambda$. Furthermore, each arc of $\ensuremath{\mathop{\mathcal{RM}\strut}}$ connects an odd with an even point, see (\ref{eqTranspositionOddEven}). Therefore, $\lambda\ensuremath{\mathop{\mathcal{RM}\strut}}$ consists of $\lambda$ copies of $\ensuremath{\mathop{\mathcal{RM}\strut}}$. 
Indeed, each copy intersects the horizontal axis in one of the sets $\{\ell,2\lambda+1-\ell,2\lambda+\ell,4\lambda+1-\ell,4\lambda+\ell, \ldots,2\lambda\ensuremath{\alpha}+1-\ell\}$, $\ell=1,\ldots,\lambda$. This immediately yields the scaling (\ref{eqComponentsScaling}) of the number of connected components. \end{proof} We are now well prepared to prove the theorem claimed in the introduction: \begin{thm}\label{thmNoGcd} Let $n\ge4$ be given. Then there do not exist homogeneous polynomials $f_1,f_2 \in \mathds{Z}[x_1,\ldots,x_n]$ of arbitrary degree with integer coefficients such that the number of connected components $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ of every bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ is given by $\mathop{\gcd}\nolimits(f_1(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n),f_2(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n))$. In other words: to every choice of polynomials $f_1, f_2$, we find a counterexample. \end{thm} \begin{proof} We assume the contrary: let $n\ge 4$ and $f_1, f_2 \in \mathds{Z}[x_1,\ldots,x_n]$ be homogeneous polynomials with integer coefficients, such that for all bi-rainbow meanders $\ensuremath{\mathop{\mathcal{RM}\strut}} = \ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ \begin{equation}\label{eqGcdAssumption} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \mathop{\gcd}\nolimits(f_1(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n),f_2(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)). \end{equation} We shall find the contradiction in four steps: \begin{enumerate}[itemsep=0ex,topsep=0ex,parsep=0ex] \item Show that one of $f_1, f_2$ must have degree one, i.e.~is linear. \item Show that both of $f_1, f_2$ must have degree one, i.e.~are linear.
\item Find conditions on the parity of the coefficients of $f_1, f_2$. \item Show the contradiction by the pigeonhole principle. \end{enumerate} \emph{Step 1. [Show that one of $f_1, f_2$ must have degree one, i.e.~is linear.]} Let $d_j \in \mathds{N}$ denote the degree of $f_j$, $j=1,2$. Then, for all positive integers $\lambda$, \[ f_j(\lambda\ensuremath{\alpha}_1, \ldots, \lambda\ensuremath{\alpha}_n) \;=\; \lambda^{d_j} f_j(\ensuremath{\alpha}_1, \ldots, \ensuremath{\alpha}_n). \] Take an arbitrary bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\bar{\ensuremath{\alpha}}) := \ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ and $\lambda$ co-prime to $f_1(\bar\ensuremath{\alpha})$ and $f_2(\bar\ensuremath{\alpha})$. The scaling lemma \ref{thmComponentsScaling} and assumption (\ref{eqGcdAssumption}) then yield \[ \begin{array}{rcl} \lambda \ensuremath{\mathop{\mathcal{Z}\strut}}(\bar\ensuremath{\alpha}) &=& \ensuremath{\mathop{\mathcal{Z}\strut}}(\lambda\bar\ensuremath{\alpha}) \;=\; \mathop{\gcd}\nolimits(f_1(\lambda\bar\ensuremath{\alpha}),f_2(\lambda\bar\ensuremath{\alpha})) \;=\; \lambda^{\min(d_1,d_2)}\mathop{\gcd}\nolimits(f_1(\bar\ensuremath{\alpha}),f_2(\bar\ensuremath{\alpha})) \\&=& \lambda^{\min(d_1,d_2)}\ensuremath{\mathop{\mathcal{Z}\strut}}(\bar\ensuremath{\alpha}). \end{array} \] Therefore, $\min(d_1,d_2) = 1$ and one of the polynomials must indeed be linear. \emph{Step 2. [Show that both of $f_1, f_2$ must have degree one, i.e.~are linear.]} Without loss of generality, $d_1=1$ and $d_2 \ge 1$, by step 1. Let $\ensuremath{\mathop{\mathcal{RM}\strut}}(\bar{\ensuremath{\alpha}})$ be the connected bi-rainbow meander of lemma \ref{thmSequenceConnected} with $n$ upper rainbow families. Indeed, the sequence (\ref{eqSequenceConnected}) contains one element for each $n\ge 4$.
Again, we use the scaling lemma \ref{thmComponentsScaling} and assumption (\ref{eqGcdAssumption}) to obtain \[ \begin{array}{rcl} \lambda &=& \lambda \ensuremath{\mathop{\mathcal{Z}\strut}}(\bar\ensuremath{\alpha}) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\lambda\bar\ensuremath{\alpha}) \;=\; \mathop{\gcd}\nolimits(f_1(\lambda\bar\ensuremath{\alpha}),f_2(\lambda\bar\ensuremath{\alpha})) \;=\; \mathop{\gcd}\nolimits(\lambda f_1(\bar\ensuremath{\alpha}), \lambda^{d_2} f_2(\bar\ensuremath{\alpha})) \\&=& \lambda \mathop{\gcd}\nolimits(f_1(\bar\ensuremath{\alpha}), \lambda^{d_2-1} f_2(\bar\ensuremath{\alpha})). \end{array} \] Observe that $f_1$ must depend on $\ensuremath{\alpha}^*$, the size of the middle arc collection of $\bar\ensuremath{\alpha}$. Indeed, taking a different bi-rainbow meander which only keeps the family $\ensuremath{\alpha}^*$, \[ \begin{array}{rcll} \ensuremath{\mathop{\mathcal{Z}\strut}}(1,\ldots,1,\ensuremath{\alpha}^*,1,\ldots,1) &=& \ensuremath{\alpha}^*+\frac{n-1}{2}, & \mbox{for odd } n, \mbox{ and} \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(1,\ldots,1,\ensuremath{\alpha}^*,1,\ldots,1,2) &=& \ensuremath{\alpha}^*+\frac{n-2}{2}, & \mbox{for even } n, \end{array} \] become arbitrarily large for $\ensuremath{\alpha}^*\to\infty$. Here, we have chosen $L^*=R^*$, compare with lemma \ref{thmInnerNoseRetraction}. As $f_1$ is linear, this forces the coefficient of $\ensuremath{\alpha}^*$ of $f_1$ to be non-zero. Now, we choose $\alpha^*$ large enough, such that $f_1(\bar\ensuremath{\alpha}) > 1$. Then we select $\lambda = f_1(\bar\ensuremath{\alpha})$ to find \[ \begin{array}{rcl} \lambda &=& \lambda \mathop{\gcd}\nolimits(f_1(\bar\ensuremath{\alpha}), \lambda^{d_2-1} f_2(\bar\ensuremath{\alpha})) \;=\; \lambda^2, \qquad \mbox{for\ } d_2 \ge 2. \end{array} \] This is a contradiction. Therefore, $d_2=1$ as claimed. \emph{Step 3. 
[Conditions on the parity of the coefficients of $f_j$.]} Let \[ f_j(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \sum_{\ell=1}^n f_{j,\ell} \ensuremath{\alpha}_\ell, \qquad f_{j,\ell} \in \mathds{Z}, \] be the homogeneous polynomials of degree one. From corollary \ref{thmRainbowComponentsParity} we know that $\ensuremath{\mathop{\mathcal{Z}\strut}}(\bar\ensuremath{\alpha})$ is odd for arbitrary $\bar\ensuremath{\alpha} = (\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$ with exactly one or two odd components. The parity (mod 2) of assumption (\ref{eqGcdAssumption}) applied to bi-rainbow meanders with exactly one odd component $\ensuremath{\alpha}_\ell$ yields \[ 1 \;\equiv\; \mathop{\gcd}\nolimits( f_{1,\ell}, f_{2,\ell} ) \pmod 2. \] Applied to bi-rainbow meanders with exactly two odd components $\ensuremath{\alpha}_k,\ensuremath{\alpha}_\ell$, it yields \[ 1 \;\equiv\; \mathop{\gcd}\nolimits( f_{1,k}+f_{1,\ell}, f_{2,k}+f_{2,\ell} ) \pmod 2, \qquad k\neq\ell. \] Thus, for arbitrary $k\neq\ell$, the following conditions must hold: \begin{equation}\label{eqParity} \begin{array}{rcll} (f_{1,\ell}, f_{2,\ell}) &\not\equiv& (0,0) &\pmod 2, \\ (f_{1,\ell}, f_{2,\ell}) &\not\equiv& (f_{1,k}, f_{2,k}) &\pmod 2. \end{array} \end{equation} \emph{Step 4. [Contradiction by the pigeonhole principle.]} The first condition of (\ref{eqParity}) leaves only three possibilities for $(f_{1,\ell}, f_{2,\ell})$: \[ \{\; (0,1), \; (1,0), \; (1,1) \;\} \pmod 2. \] If $n \ge 4$ then one of the three choices must appear more than once, say at $k$ and $\ell$. But this violates the second condition of (\ref{eqParity}). This is the final contradiction to the initial assumption and proves the impossibility of a $\mathop{\gcd}\nolimits$-formula (\ref{eqGcdAssumption}). 
\end{proof} \section{Euclidean-like algorithms\label{secAlgorithms}} Nose retractions, as introduced in section \ref{secNoseRetractions}, have been used before to establish finite algorithms on meander curves \cite{CollMagnantWang2012-Meander}. Here, however, we will improve the nose retractions (\ref{eqOuterNoseRetraction}) and (\ref{eqInnerNoseRetraction}) to establish rigorous bounds on the complexity of the resulting algorithms. This will show a striking similarity to the calculation of the greatest common divisor by the Euclidean algorithm. \begin{prop}\label{thmGcdOfRainbows} The number of connected components of bi-rainbow meanders with less than four upper families is given by \begin{equation}\label{eqGcdOfRainbows} \begin{array}{lcl} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1) &=& \ensuremath{\alpha}_1, \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2) &=& \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1, \ensuremath{\alpha}_2), \\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ensuremath{\alpha}_3) &=& \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1+\ensuremath{\alpha}_2,\ensuremath{\alpha}_2+\ensuremath{\alpha}_3), \end{array} \end{equation} where $\mathop{\gcd}\nolimits$ denotes the greatest common divisor. \end{prop} \begin{proof} The proof is easily done by induction over $\ensuremath{\alpha}=\sum \ensuremath{\alpha}_k$ using either nose retraction (\ref{eqOuterNoseRetraction}) or (\ref{eqInnerNoseRetraction}). \end{proof} Note that the greatest common divisor is an abbreviation for the Euclidean algorithm: \begin{equation}\label{eqEuclideanAlgorithm} \mathop{\gcd}\nolimits(a_1,a_2) \;=\; \mathop{\gcd}\nolimits(a_2,a_1) \;=\; \left\{ \begin{array}{lll} a_1 &,& a_1=a_2,\\ \mathop{\gcd}\nolimits(a_1, \ensuremath{\mathop{\mathscr{R}\strut}}(a_2,a_1)) &,& a_1<a_2. \end{array}\right. 
\end{equation} Here, $\ensuremath{\mathop{\mathscr{R}\strut}}(a_2,a_1)$ denotes the remainder of the integer division $a_2/a_1$. This algorithm stops after $\ensuremath{\mathop{\mathcal{O}\strut}}(\log a_1 + \log a_2)$ steps. Indeed, $\ensuremath{\mathop{\mathscr{R}\strut}}(a_2,a_1) < a_2/2$. The number of bits needed to encode the problem is strictly decreased in each step. Here, we assume the elementary operations of (\ref{eqEuclideanAlgorithm}) to be of complexity $\ensuremath{\mathop{\mathcal{O}\strut}}(1)$. Complexity of arithmetic of large integers could be considered but is not our focus here. Turning back to the nose-retraction algorithm, denote \[ \ensuremath{b} \;=\; \sum_{\ell=1}^n \Big\lceil \log_2 (\ensuremath{\alpha}_\ell+1) \Big\rceil \;=\; \ensuremath{\mathop{\mathcal{O}\strut}}\left( \sum_{\ell=1}^n \log \ensuremath{\alpha}_\ell \right) \] the number of bits needed to encode the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$. We want to improve the nose retractions (\ref{eqOuterNoseRetraction}) and (\ref{eqInnerNoseRetraction}) to decrease $b$. Among the outer nose retractions (\ref{eqOuterNoseRetraction}), cases (b) and (d) are problematic. However, if $\ensuremath{\alpha}_n$ is large, $\ensuremath{\alpha}_n > 2\ensuremath{\alpha}/3$, then case (d) can be applied $(n-1)$ times: \[ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_n-2(\ensuremath{\alpha}-\ensuremath{\alpha}_n)), \qquad \ensuremath{\alpha}_n > 2\ensuremath{\alpha}/3. 
\] Further iteration yields \begin{equation}\label{eqIteratedOuterNoseRetractionPreD} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1}, \ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_n,2(\ensuremath{\alpha}-\ensuremath{\alpha}_n))), \end{equation} again with the remainder $\ensuremath{\mathop{\mathscr{R}\strut}}$ of the integer division. Similarly, case (b) can be iterated, as long as its condition, $\alpha_1 < \alpha_n < 2\alpha_1$, remains valid: \begin{equation}\label{eqIteratedOuterNoseRetractionPreB} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n) \;=\; \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_1, \alpha_n-\alpha_1),\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1}, \alpha_n-\alpha_1+\ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_1,\alpha_n-\alpha_1)). \end{equation} Indeed, during the iteration, the difference $\alpha_n-\alpha_1$ remains constant.
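As a small sketch (function name hypothetical), the collapsed iteration (\ref{eqIteratedOuterNoseRetractionPreD}) is a single remainder operation on the family sizes:

```python
def retract_dominant_last(alphas):
    """Collapsed outer nose retractions (eqIteratedOuterNoseRetractionPreD):
    a dominant last family, alpha_n > 2*alpha/3, is replaced by the remainder
    of the integer division alpha_n / (2*(alpha - alpha_n))."""
    *rest, last = alphas
    total = sum(alphas)
    assert 3 * last > 2 * total, "only valid for a dominant last family"
    return rest + [last % (2 * (total - last))]

print(retract_dominant_last([1, 1, 10]))   # [1, 1, 2]
```

Both $\ensuremath{\mathop{\mathcal{RM}\strut}}(1,1,10)$ and $\ensuremath{\mathop{\mathcal{RM}\strut}}(1,1,2)$ are indeed connected by (\ref{eqGcdOfRainbowsAdv}), since $\gcd(2,11)=\gcd(2,3)=1$. A zero remainder signals that the last family vanishes entirely.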
\begin{thm}[Outer nose retraction]\label{thmIteratedOuterNoseRetraction} The outer nose retractions yield the algorithm \begin{equation}\label{eqIteratedOuterNoseRetraction} \begin{array}{l} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_n) \; = \\ \quad = \left\{\begin{array}{lll} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_n,\ensuremath{\alpha}_{n-1},\ldots,\ensuremath{\alpha}_2,\ensuremath{\alpha}_1), & \ensuremath{\alpha}_1 > \ensuremath{\alpha}_n, & (a)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1}) + \ensuremath{\alpha}_1, & \ensuremath{\alpha}_1 = \ensuremath{\alpha}_n, & (b)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_1, \alpha_n\!-\!\alpha_1), \ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1}, \alpha_n\!-\!\alpha_1+\ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_1,\alpha_n\!-\!\alpha_1)), & \ensuremath{\alpha}_1 < \ensuremath{\alpha}_n < 2\ensuremath{\alpha}_1, & (c)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_1), & 2\ensuremath{\alpha}_1 = \ensuremath{\alpha}_n, & (d)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_1, \ensuremath{\alpha}_n-2\ensuremath{\alpha}_1), & 2\ensuremath{\alpha}_1 < \ensuremath{\alpha}_n < \frac{2}{3}\ensuremath{\alpha}, & (e)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1} ), & 2(\ensuremath{\alpha}-\ensuremath{\alpha}_n) \,\Big|\, \ensuremath{\alpha}_n, & (f)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1}, \ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_n, 2(\ensuremath{\alpha}-\ensuremath{\alpha}_n))), & otherwise, & (g)
\end{array}\right. \end{array}\hspace*{-1em} \end{equation} with logarithmic complexity $\ensuremath{\mathop{\mathcal{O}\strut}}(b)\ensuremath{\mathop{\mathcal{O}\strut}}(n)$ to determine the number $\ensuremath{\mathop{\mathcal{Z}\strut}}$ of connected components of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$. \end{thm} \begin{proof} The validity of the algorithm follows directly from (\ref{eqOuterNoseRetraction}) of lemma \ref{thmOuterNoseRetraction} and the observations (\ref{eqIteratedOuterNoseRetractionPreD}, \ref{eqIteratedOuterNoseRetractionPreB}) above. Note the special case (f), which is case (g) with zero remainder. Cases (b,c,d,f,g) reduce the number $b$ of bits. Case (a) cannot be applied twice in succession; in fact, it could be replaced by symmetric copies of (c--g). Finally, case (e) can be applied at most $(n-2)$ times in succession. This yields the claimed complexity of the algorithm. \end{proof} Although we have found an algorithm of complexity similar to that of the Euclidean algorithm, the number of cases is quite large. The inner nose retraction (\ref{eqInnerNoseRetraction}) turns out to be more elegant.
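Before turning to the inner retraction, note that the case distinction (\ref{eqIteratedOuterNoseRetraction}) translates directly into a recursive procedure. The following sketch uses our own conventions (families of size zero, which arise from vanishing remainders in cases (c) and (f), are simply dropped) and reproduces the gcd formulae of proposition \ref{thmGcdOfRainbows}:

```python
def z_outer(alphas):
    """Number of Jordan curves of RM(alpha_1,...,alpha_n) via the iterated
    outer nose retractions, cases (a)-(g) of (eqIteratedOuterNoseRetraction)."""
    a = [x for x in alphas if x > 0]          # drop empty families (zero remainders)
    if len(a) <= 1:
        return a[0] if a else 0
    first, last, total = a[0], a[-1], sum(a)
    if first > last:                          # (a): reverse the meander
        return z_outer(a[::-1])
    if first == last:                         # (b): alpha_1 closed cycles split off
        return z_outer(a[1:-1]) + first
    if last < 2 * first:                      # (c): remainder in the first family
        d = last - first
        return z_outer([first % d] + a[1:-1] + [d + first % d])
    if last == 2 * first:                     # (d): first family moves to the end
        return z_outer(a[1:-1] + [first])
    if 3 * last < 2 * total:                  # (e): one outer retraction step
        return z_outer(a[1:-1] + [first, last - 2 * first])
    # (f)/(g): dominant last family, replaced by a remainder (possibly zero)
    return z_outer(a[:-1] + [last % (2 * (total - last))])
```

For $n\le 3$ this agrees with proposition \ref{thmGcdOfRainbows}, e.g.\ \texttt{z\_outer([2, 3, 4]) == 1} $= \gcd(5,7)$.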
\begin{thm}[Inner nose retraction]\label{thmIteratedInnerNoseRetraction} The inner nose retraction yields the algorithm \begin{equation}\label{eqIteratedInnerNoseRetraction} \begin{array}{l} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2,\ldots,\ensuremath{\alpha}_{n-1},\ensuremath{\alpha}_n) \; = \\ \quad = \left\{\begin{array}{lll} \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{m^*-1},\ensuremath{\alpha}_{m^*+1},\ldots,\ensuremath{\alpha}_n) + \ensuremath{\alpha}_{m^*}, & L^* = R^*, & (a)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{m^*-1},\ensuremath{\alpha}_{m^*+1},\ldots,\ensuremath{\alpha}_n) , & |L^*\!-\!R^*| \,\Big|\, \ensuremath{\alpha}_{m^*}, & (b)\\ \ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_{m^*-1},\ensuremath{\mathop{\mathscr{R}\strut}}(\ensuremath{\alpha}_{m^*},|L^*\!-\!R^*|), \ensuremath{\alpha}_{m^*+1},\ldots,\ensuremath{\alpha}_n), & otherwise, & (c) \end{array}\right. \end{array}\hspace*{-1em} \end{equation} with logarithmic complexity $\ensuremath{\mathop{\mathcal{O}\strut}}(b)\ensuremath{\mathop{\mathcal{O}\strut}}(n)$ to determine the number $\ensuremath{\mathop{\mathcal{Z}\strut}}$ of connected components of the bi-rainbow meander $\ensuremath{\mathop{\mathcal{RM}\strut}}(\ensuremath{\alpha}_1,\ldots,\ensuremath{\alpha}_n)$. The values $m^*, L^*, R^*$ denote the index of the middle rainbow family and the total numbers of arcs in the left and right rainbow families, as defined in (\ref{eqMiddleRainbow}). \end{thm} \begin{proof} The validity of the algorithm follows again by iteration of (\ref{eqInnerNoseRetraction}) of lemma \ref{thmInnerNoseRetraction}. Indeed, as long as $\ensuremath{\alpha}_{m^*}$ after application of (\ref{eqInnerNoseRetraction})(c) is not smaller than $|L^*-R^*|$, the $m^*$-th family remains the middle one. 
Furthermore, the values $L^*$ and $R^*$ do not change. Iteration yields case (c) of (\ref{eqIteratedInnerNoseRetraction}) with the special case (b) of zero remainder. All cases of (\ref{eqIteratedInnerNoseRetraction}) reduce the number $b$ of bits. However, the values $m^*, L^*, R^*$ need to be computed in every step. Together, we again find the bound $\ensuremath{\mathop{\mathcal{O}\strut}}(b)\ensuremath{\mathop{\mathcal{O}\strut}}(n)$ on the complexity of the algorithm. \end{proof} For rainbow families $\ensuremath{\alpha}_k$ of similar size, the update of $m^*, L^*, R^*$ can be done starting from the old values. The old and new midpoints should then only be $\ensuremath{\mathop{\mathcal{O}\strut}}(1)$ apart. This is similar to theorem \ref{thmIteratedOuterNoseRetraction} where the factor $\ensuremath{\mathop{\mathcal{O}\strut}}(n)$ is due to the case of $\ensuremath{\alpha}_n$ very large with respect to all the other families. The resulting complexity is therefore expected to be rather close to $\ensuremath{\mathop{\mathcal{O}\strut}}(b)$ for ``typical'' bi-rainbow meanders. The inner nose-retraction algorithm (\ref{eqIteratedInnerNoseRetraction}) very closely resembles the Euclidean algorithm (\ref{eqEuclideanAlgorithm}). Indeed, the main operation of both algorithms is the remainder of an integer division. In the case of two upper families, $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2) = \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2)$, both algorithms are in fact identical. \section{Discussion \& outlook\label{secDiscussion}} We have studied the existence of closed expressions for the number of Jordan curves of a bi-rainbow meander. On the one hand, there might be no convenient formula in the greatest common divisor of the sizes of the involved rainbow families. 
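As a concrete illustration of the Euclidean connection noted at the end of the previous section, the two-family identity $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2) = \mathop{\gcd}\nolimits(\ensuremath{\alpha}_1,\ensuremath{\alpha}_2)$ can be checked mechanically. The following Python sketch runs the remainder recursion; the base case $\ensuremath{\mathop{\mathcal{Z}\strut}}(\ensuremath{\alpha}_1, 0) = \ensuremath{\alpha}_1$ (a single rainbow family over an equal lower rainbow closing into $\ensuremath{\alpha}_1$ circles) is our reading of the retraction rules, stated here as an assumption.

```python
import math

def z_two(a1, a2):
    """Number of closed curves of the bi-rainbow meander RM(a1, a2),
    computed by the remainder recursion; for two upper families this
    recursion is exactly the Euclidean algorithm.  Base case: a single
    rainbow family over an equal lower rainbow closes into a1 circles
    (an assumption of this illustration)."""
    if a2 == 0:
        return a1
    return z_two(a2, a1 % a2)

# agrees with gcd on a sample of family sizes
assert all(z_two(a, b) == math.gcd(a, b)
           for a in range(1, 30) for b in range(1, 30))
```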
Although theorem \ref{thmNoGcd} does not exclude all possible gcd formulae, its proof shows major obstacles and confirms the respective conjecture 19 of \cite{CollMagnantWang2012-Meander}. The homogeneity assumption, for example, can be weakened by a more careful scaling argument. More arguments $f_k$ of the gcd can be allowed for meanders with more than 4 upper families. Indeed, linearity of $f_k$ follows inductively as in step 2 of the proof of theorem \ref{thmNoGcd}. The pigeonhole principle, step 4, can be applied for bi-rainbows with at least $2^m$ upper families for formulae of the form $\mathop{\gcd}\nolimits(f_1,\ldots,f_m)$ with $m$ arguments. On the other hand, instead of looking for more complicated gcd formulae, we found an Euclidean-like algorithm to determine the number of Jordan curves. Just as the Euclidean algorithm computes the gcd, suitably combined nose retractions determine the number of Jordan curves in logarithmic time. Moreover, the main step computes the remainder of an integer division in close similarity to the Euclidean algorithm. The number of Jordan curves of a bi-rainbow meander thus becomes another number-theoretic quantity similar to the gcd. So far, we have dealt with the special case of bi-rainbow meanders. For most of the applications, this is only a first step. In a forthcoming paper \cite{KarnauhovaLiebscher2015-GeneralNoseRetraction}, we shall describe logarithmic algorithms by nose retractions of general meanders. Note, however, that logarithmic algorithms require a representation of the meander of logarithmic size. Indeed, the algorithm has to at least read the input. If the meander is represented as a product of transpositions or a permutation, sections \ref{secMeanderTranspositions}--\ref{secMeanderShooting}, then no algorithm can be faster than $\ensuremath{\mathop{\mathcal{O}\strut}}(\alpha)$~--- just as a direct inspection of the meander curves.
Indeed, traversing all $2\alpha$ arcs certainly provides the number of closed curves. The condensed bracket expression, section \ref{secMeanderBracket}, is therefore a prerequisite of the logarithmic algorithm. Without it, the complexity advantage over a direct inspection is lost. However, at least the structural properties of the nose retractions remain. \clearpage \bibliographystyle{alpha_abbrv} \pdfbookmark{References}{secReferences} \let\OLDthebibliography\thebibliography \renewcommand\thebibliography[1]{ \OLDthebibliography{CMW12} }
\section{Introduction} Fault tolerance is becoming a key issue that will define success or failure of future programmable quantum computers. Certain quasiparticles, called non-abelian anyons, provide a framework for coherent encoding of quantum information that will require little or no error correction. Our primary goal is to propose an algorithm for efficient circuit synthesis (compilation) in one such non-abelian framework. Braiding non-abelian objects such as anyons and zero-energy modes is the standard gate operation for topological quantum computation \cite{KitaevTop,FreedmanKitaev}. But any physically realistic quantum operation is a legitimate resource for quantum information processing. Besides braiding, measurement is a natural primitive for quantum computation. While measurements in the computational basis can always be postponed to the end in the quantum circuit model, this cannot be done in topological quantum computation. Therefore, we could gain extra computational power by supplementing braiding with measurements. One physically realistic measurement in topological quantum computation is to measure the total charge of a group of anyons, which can be done by either projective measurement or interferometric measurement. In \cite{CuiWang}, we pursue a qutrit generalization of the standard quantum circuit model. Some anyon systems are very natural for the implementation of qutrits, e.g. anyons with quantum dimension $\sqrt{3}$. One such anyon system is $SU(2)_4$---the first of the sequence of metaplectic anyons \cite{HNW}. While braiding alone is not universal for $SU(2)_4$, just as it is not for the Majorana system, the metaplectic system ceases to resemble the Majorana one once measurement is added. We proved that for $SU(2)_4$, braiding supplemented by projective measurement of the total charge of a pair of metaplectic anyons is universal for qutrit quantum computation (see \cite{CuiWang}).
Our motivation for the weakly-integral anyon framework is the potential realization of metaplectic anyons and zero modes in physical systems. Majoranas are closer to being well-controlled, but their computational power is impacted by the high complexity and cost of a universal basis \cite{SarmaFreNayak}. Metaplectic models strike the right balance between controllability and universality. There is some recent numerical evidence that $SU(2)_4$ might be realized in the $\nu=\frac{8}{3}$ fractional quantum Hall liquid (see \cite{PetersonEtAl}). There is also recent research potentially leading to practical recipes for synthesizing and braiding parafermionic zero modes in fractional quantum Hall liquids paired with $s$-wave superconductors (see \cite{ClarkeAlicea}). These are essentially recipes generalizing the synthesis of Majorana zero modes in the same general setup. In particular, it is theoretically feasible that a species of $Z_4$-parafermion zero modes exhibiting $SU(2)_4$ statistics can be realized along these lines (ibid.). Therefore, $SU(2)_4$ is a promising viable path to universal topological quantum computation. In this paper we build upon the metaplectic model definition (\cite{CuiWang}) and develop algorithms for effective synthesis of efficient $n$-qutrit circuits over the model. Given a unitary target gate $U$ and an arbitrarily small target precision $\varepsilon>0$, a circuit approximating $U$ to precision $\varepsilon$ is considered \emph{efficient} if the number of primitive gates in that circuit is asymptotically proportional to $\log{1/\varepsilon}$. An algorithm for synthesis of such an efficient circuit is considered \emph{effective} if it can be completed on a classical computer in expected runtime that is polynomial in $\log{1/\varepsilon}$. We develop two flavors of an effective general synthesis algorithm.
The first flavor makes a distinction between the parameter approximation cost and entanglement cost in an efficient circuit and produces such circuits with an upper complexity bound in $O(3^{2\,n}\,(\log_3{1/\varepsilon}+2 \, n+\log(\log(1/\varepsilon)))) +O((9\,(2+\sqrt{5}))^n)$. The second flavor makes no such distinction and produces efficient circuits with an upper complexity bound in $O(n\,3^{2\,n}\,(\log_3{1/\varepsilon}+2\, n+\log(\log(1/\varepsilon))))$. While the first flavor of our algorithm is clearly asymptotically superior when $n$ is fixed and $\varepsilon \rightarrow 0$, there is obviously a practical tradeoff threshold between the two flavors when $\varepsilon$ is fixed and $n$ is growing. Leading terms of our upper bounds for both complexities are expressed in terms of specific leading coefficients, not merely in big-$O$ terms. The technique for the algorithm is number-theoretic in nature. For any range of practically interesting precisions, the circuits produced by our algorithms are significantly more efficient (both in the asymptotic and practical sense) than any hypothetical circuits obtainable by the Dawson-Nielsen version of the Solovay-Kitaev algorithm (cf. \cite{DN}). Our algorithm designs are more broadly applicable to other classes of weakly-integral anyons involving the quantum dimension of $\sqrt{3}$. The paper is organized as follows: in section II we make a very brief introduction into fundamental properties of metaplectic anyons, basic encodings and quantum gates; in section III the core circuit synthesis tools are developed, which are meant to reduce Householder reflections to axial reflections, and axial reflections are then described as metaplectic circuits in section IV. In sections V and VI two approaches to synthesizing approximation circuits for arbitrary unitaries are introduced and compared, and a top-level overview of the synthesis flow is given in section VII.
Section VIII concludes the paper with some open problems and future work directions. \section{Fusion, Braiding, and Basic Gates} For completeness and readability we start with a very brief introduction into the concepts of braiding and fusion, focusing narrowly on the mathematical and logical side of these concepts. For a broader exposition, the reader is encouraged to look up the available tutorials on the subject such as \cite{BeverlandPreskill}, \cite{KitaevTop},\cite{Preskill}. \subsection{Background on fusion and braiding of non-Abelian anyons} \emph{Anyons} are quasiparticles described by a certain topological quantum field theory (TQFT), and, axiomatically, such a theory allows for a finite number of \emph{anyon species} that have distinct values $\{ \alpha, \, \beta, \, \gamma, \ldots \}$ of \emph{topological charge}. For example, one of the simplest theories leads to Fibonacci anyons, which allow only two values of charge, $1$ and $\tau$, where $\tau$ is the charge of a non-trivial anyon and $1$ is the charge of \lq\lq no-anyon'' or vacuum (\cite{Preskill}). Given an ensemble of anyons $(a_1,\, a_2, \ldots, a_n)$ the structure of their collective state space $H$ depends on the underlying theory. If we measured the collective topological charge of some subsequence of anyons in the ensemble, say $(a_i,\ldots, a_j), 1 \leq i < j \leq n$, the charge would probabilistically assume some value $c \in \{ \alpha, \, \beta, \, \gamma, \ldots \}$. After this is done, the state space of the ensemble is reduced to some smaller subspace $H_{i,j,c} \subset H$. This is the phenomenon known as \emph{fusion} and the resulting topological charge is often called the \emph{fusion charge}. Once we have measured out the fusion charge of several subsequences, we may end up with a one-dimensional state space, or, up to a global phase, with one specific state.
This state can be characterized by the collection of measurement outcomes, and it is an established practice to represent such a collection as a tree, called a \emph{fusion tree}. \begin{figure}[bt] \includegraphics[width=3.5in]{Fusion1a.pdf} \caption{\label{fig:fusion:tree:one} A fusion tree for an anyonic quartet. The left pair of anyons has the fusion charge $c_{12}$, the right pair has the charge $c_{34}$, and the overall charge of the quartet is $c_{14}$.} \end{figure} As a segue into the next subsection, consider the following \begin{example} The theory of \emph{metaplectic anyons} allows five values of topological charge: $\{1,Z,X,X',Y\}$. Consider a quartet of anyons of type $X$, i.e. an ensemble $(a_1,a_2,a_3,a_4)$ where each anyon $a_i$ has the charge $X$. Let us measure the charge $c_{12}$ of the pair $(a_1,a_2)$, then the charge $c_{34}$ of the pair $(a_3,a_4)$ and then the charge $c_{14}$ of the entire quartet. This sequence of measurements is represented by the tree shown in figure \ref{fig:fusion:tree:one}. \end{example} Possible outcomes of fusion charge measurement are dictated by a set of \emph{fusion rules}. A fusion rule has the following syntax: \[ a \otimes b = \sum_{c}{N_{a\,b}^c \, c} \] Here the left-hand side stands for the fusion of two systems with topological charges $a$ and $b$. The $\sum_{c}$ on the right is a disjunction indexed by all possible outcomes ($c$) of fusing the two systems. $N_{a\,b}^c$ is the multiplicity of the corresponding outcome $c$. Its meaning is as follows: if a pair of anyons of type $a$ and $b$ has fused to the charge $c$, then their collective state space has been reduced to an $N_{a\,b}^c$-dimensional Hilbert space.
\begin{example} \footnote{This incomplete set of rules is sufficient for our purposes.} \label{ex:mp:fusion:rules} The following three rules are among the fusion rules of the \emph{metaplectic anyon} theory: \begin{equation} \label{eq:mp:rule1} \forall c \in \{1,Z,X,X',Y\}, \, c \otimes 1 = 1 \otimes c = c \end{equation} \begin{equation} \label{eq:mp:rule2} X \otimes X = 1 \, + Y \end{equation} \begin{equation} \label{eq:mp:rule3} Y \otimes Y = 1 \, + Z \, + Y \end{equation} \end{example} To simplify matters, we allow only multiplicities of $1$ below. Suppose $(a_1,\, a_2, \ldots, a_n)$ is an ensemble of anyons and a sequence of topological charge measurements has been selected that defines a certain fusion tree structure. Then the number of distinct fusion trees that are allowed by the fusion rules is precisely the dimension of the Hilbert state space $H$ of the ensemble, and there exists a basis in $H$ whose elements are labeled by those distinct fusion trees. We will describe a basis like this at the beginning of the next subsection. While fusion bases are suitable for encoding quantum information, natively topologically protected gates on such encodings can be derived from \emph{braiding} of non-Abelian anyons. Quite simply, \emph{braiding} is either an exchange of two distinct anyons in an ensemble or moving a single anyon along a complete closed loop. In general, braiding causes a non-trivial unitary action on the state space. By definition of ``non-Abelian'', the actions caused by different exchanges do not have to commute, and the corresponding sets of unitary operators are not simultaneously diagonalizable. This creates the opportunity for building interesting and useful groups of unitary gates from braiding operations. Such groups are not always universal for quantum computation.
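The tree-counting principle just stated can be made concrete. The following Python sketch enumerates fusion trees under the partial rule set of example \ref{ex:mp:fusion:rules} only; fusions not listed there (e.g. those involving $Z$ or $X'$ on the left-hand side) are treated as absent, which is an assumption of this illustration rather than a statement of the full theory.

```python
# Fusion-tree counting for the partial metaplectic rules:
#   c (x) 1 = c,   X (x) X = 1 + Y,   Y (x) Y = 1 + Z + Y.
# Rules not listed in the text are simply absent here (an assumption).

FUSION = {
    ("X", "X"): ["1", "Y"],
    ("Y", "Y"): ["1", "Z", "Y"],
}

def fuse(a, b):
    """All charges c with N_ab^c = 1 under the partial rules."""
    if a == "1":
        return [b]
    if b == "1":
        return [a]
    return FUSION.get((a, b), [])

def tree_dims(leaves):
    """Map: total charge -> number of fusion trees for a balanced
    pairwise fusion of `leaves` (length a power of two)."""
    if len(leaves) == 1:
        return {leaves[0]: 1}
    half = len(leaves) // 2
    left, right = tree_dims(leaves[:half]), tree_dims(leaves[half:])
    out = {}
    for a, na in left.items():
        for b, nb in right.items():
            for c in fuse(a, b):
                out[c] = out.get(c, 0) + na * nb
    return out

quartet = tree_dims(["X"] * 4)
octet = tree_dims(["X"] * 8)
print(quartet["Y"], octet["Y"])  # 3 21
```

Run on a quartet of $X$ anyons the count of trees with total charge $Y$ comes out as $3$, and on eight $X$ anyons as $21$, in agreement with the dimensions quoted in the next subsection.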
Braiding happens to be universal in the case of Fibonacci anyons (\cite{FreedmanKitaev}), and, in the case of the metaplectic anyons below, universality can be achieved with a little help from measurement. \subsection{Metaplectic basis and metaplectic circuits} \label{subsec:metaplectic:basis} The metaplectic anyon model is defined in \cite{CuiWang} as an idealized multi-qutrit model, where each qutrit is encoded using a specific quartet of $SU(2)_4$ anyons and thus an $n$-qutrit quantum register is encoded using $4\,n$ anyons. The model allows five values of topological charge $\{1,Z,X,X',Y\}$ and the relevant subset of fusion rules has already been listed in example \ref{ex:mp:fusion:rules}. We encode a standard qutrit using a quartet of anyons of type $X$ prepared such that their joint topological charge is $Y$. The corresponding basis states can be labeled by fusion trees such as the one shown in Fig. \ref{fig:fusion:tree:one} with the $c_{14}=Y$ constraint. It follows from the fusion rules that $(c_{12},c_{34}) \in \{ (1,Y),(Y,1),(Y,Y)\}$. \begin{figure}[bt] \includegraphics[width=3.5in]{Fusion2b.pdf} \caption{\label{fig:fusion:tree:two} A fusion tree for $8$ anyons. The overall charge is assumed to be $Y$. There are six fusion charges defining a specific fused state.} \end{figure} One can do a similar analysis on the state space $H$ of $8$ anyons of type $X$ prepared such that their overall topological charge is $Y$. The possible charges that label a basis in $H$ are shown in Fig. \ref{fig:fusion:tree:two}. Under the constraint $c_{14} = c_{58} = Y$ the system is reduced to a state in a $9$-dimensional subspace $H' \subset H$ with an obvious ad hoc isomorphism between this subspace and $H_3 \otimes H_3$, where $H_3$ is the state space of the standard qutrit. We use $H'$ to encode a standard two-qutrit register and call it the \emph{computational subspace}. It is not difficult to compute the dimension of $H$.
As per fusion rules (\ref{eq:mp:rule1},\ref{eq:mp:rule2},\ref{eq:mp:rule3}) and by combinatorial enumeration, $\dim H = 21$. Thus $H'$ is a proper subspace of co-dimension $12$. This analysis generalizes in a natural way to multi-qutrit encodings with more than two qutrits. One should be cognizant that braiding of anyons from quartets encoding different qutrits (cf. Fig. \ref{fig:fusion:tree:two}) does not, in general, preserve the computational subspace; therefore, we should only be deriving the multi-qutrit gates from the subgroup of braids that do preserve $H'$. The actual derivation of primitive gates is beyond the scope of this paper. Below we summarize the designs developed in \cite{CuiWang}. Consider the one-qutrit fusion basis $\{ |1,Y\rangle,|Y,1\rangle,|Y,Y\rangle\}$ introduced at the beginning of this subsection and relabel it as $\{|0\rangle = - |Y,Y\rangle, \, |1\rangle = |1,Y\rangle, \, |2\rangle = |Y,1\rangle\}$ (the minus sign leads to nicer algebra). Introduce $\omega=e^{ 2 \pi \, i/3}$ and $\gamma=e^{\pi \, i/12}$. Braiding of the anyons constituting a qutrit amounts to a finite-image representation of the braid group $B_4$ where the generators of $B_4$ are represented by the following unitaries in the above basis: $\sigma_1 = \gamma\, diag(1,\omega,1)$, $\sigma_3 = \gamma\, diag(1,1,\omega)$, $\sigma_2 = \gamma^3 \, s_2; \, s_2 = \frac{1}{\sqrt{3}} \, \left(\begin{array}{ccc} 1 & \omega & \omega \\ \omega& 1 & \omega \\ \omega & \omega & 1 \end{array}\right)$ \begin{comment} \frac{\gamma}{3} \, \left(\begin{array}{ccc} \omega+2 & \omega-1 & \omega-1 \\ \omega-1& \omega+2 & \omega-1 \\ \omega-1 & \omega-1 & \omega+2 \end{array}\right)$ The $R_{|2\rangle}$ gate (aka $\mbox{Flip}$) is the \emph{reflection} operator w.r.t. the standard basis vector $|2\rangle$ : $R_{|2\rangle} = diag(1,1,-1)$.
It is important to note that $\sigma_2$ is equal to $s_2 = \frac{1}{\sqrt{3}} \, \left(\begin{array}{ccc} 1 & \omega & \omega \\ \omega& 1 & \omega \\ \omega & \omega & 1 \end{array}\right)$ times the global phase factor $\gamma \, (\omega+2)/\sqrt{3}$. \end{comment} We observe that, up to global phase, $\sigma_1$ is equivalent to $Q_1=diag(1,\omega,1)$, $\sigma_3$ is equivalent to $Q_2=diag(1,1,\omega)$, and $\sigma_2$ is equivalent to $s_2$. For completeness we also need classical transpositions of the qutrit basis. By direct computation, $\tau_{0,1} = i \, (\sigma_3 \, \sigma_2 \, \sigma_3)^2; \tau_{0,2} = i \, (\sigma_1 \, \sigma_2 \, \sigma_1)^2$ where $\tau_{j,k}$ is the $|j\rangle \leftrightarrow |k\rangle$ transposition. Obviously $\tau_{0,1},\,\tau_{0,2}$ generate a faithful representation of the symmetric group $S_3$ on the qutrit and, in particular, in terms of the notation of \cite{CuiWang} we have $Q_0=\tau_{0,1} \, \sigma_1 \, \tau_{0,1}^{\dagger} = \tau_{0,2} \, \sigma_3 \, \tau_{0,2}^{\dagger}; \,\mbox{INC}= \tau_{0,2} \, \tau_{0,1}; \, \mbox{INC}^{\dagger}= \tau_{0,1} \, \tau_{0,2}$. In the two-qutrit encoding explained above there is a certain braid explicitly composed of $92$ anyon exchanges that preserves the computational subspace and, in the $|j\rangle \otimes |k\rangle, \, j,k = 0,1,2$ basis, implements the following entangler: \[\mbox{SUM} |j,k\rangle = |j,(j+k) \mod 3\rangle\] \noindent which is a natural qutrit generalization of the $\mbox{CNOT}$. It turns out that the gates designed above are not sufficient for universal quantum computation, as per \cite{Gottesman}. They are known to generate a finite group that is projectively equivalent to the two-qutrit Clifford group. However, the \emph{reflection gate} \footnote{also called $\mbox{Flip}[2]$ gate elsewhere} \[R_{|2\rangle} = diag(1,1,-1)\] \noindent is outside the Clifford group and thus provides universality when added to the above gates.
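The transposition identities above are stated as the result of direct computation; they are easy to re-verify numerically. A small NumPy check (illustrative; it uses only the matrices $\sigma_1, \sigma_2, \sigma_3$ defined above):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)               # omega
g = np.exp(1j * np.pi / 12)              # gamma
s2 = np.array([[1, w, w], [w, 1, w], [w, w, 1]]) / np.sqrt(3)
sig1 = g * np.diag([1, w, 1])            # sigma_1
sig2 = g**3 * s2                         # sigma_2
sig3 = g * np.diag([1, 1, w])            # sigma_3

# tau_{0,1} = i (sigma_3 sigma_2 sigma_3)^2,
# tau_{0,2} = i (sigma_1 sigma_2 sigma_1)^2
tau01 = 1j * np.linalg.matrix_power(sig3 @ sig2 @ sig3, 2)
tau02 = 1j * np.linalg.matrix_power(sig1 @ sig2 @ sig1, 2)

# they come out exactly as the classical transpositions |0>-|1>, |0>-|2>
T01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)
T02 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=complex)
assert np.allclose(tau01, T01) and np.allclose(tau02, T02)

# INC = tau_{0,2} tau_{0,1} acts as |j> -> |j+1 mod 3> on basis states
INC = tau02 @ tau01
for j in range(3):
    assert np.allclose(INC @ np.eye(3)[:, j], np.eye(3)[:, (j + 1) % 3])
```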
The other two single-qutrit axial reflection operators are classically equivalent to $R_{|2\rangle}$: $R_{|0\rangle} = \tau_{0,2} \, R_{|2\rangle} \, \tau_{0,2}^{\dagger}; \, R_{|1\rangle}= \tau_{0,1} \, R_{|0\rangle} \, \tau_{0,1}^{\dagger}$. We collectively call these reflections the $R$-\emph{gates}. An $R$-gate is implemented exactly via a certain \emph{measurement-assisted repeat-until-success circuit with two ancillary qutrits}, as described in \cite{CuiWang}, Lemma 5. The circuit performs a probabilistic protocol that succeeds in $3$ iterations on average (with the variance of the iterations to success equal to $6$). This is the most expensive protocol in our set so far \footnote{Although not nearly as expensive as a magic state distillation}, and for the purposes of resource estimation, we take the following \emph{Assumption.} The cost of performing any braiding-only (generalized Clifford) gate, including the $\mbox{SUM}$ is trivial compared to the cost of performing an $R$-gate. Therefore we will be using the $R$-\emph{count} as the measure of the cost of a quantum circuit. \begin{definition} A circuit composed of unitary gates introduced in this section is called a \emph{metaplectic circuit}. The $R$-count of a metaplectic circuit is the minimal number of $R$-gates in all equivalent representations of the circuit. \end{definition} All the generators of metaplectic circuits are defined by matrices that are populated with algebraic numbers, and it follows from \cite{BG3} that the generator set is \emph{efficiently universal}, meaning that for any target unitary operator $G$ and small enough desired approximation precision $\varepsilon$ there exists a circuit of depth in $O(\log(1/\varepsilon))$ that approximates $G$ to precision $< \varepsilon$. The main purpose of this paper is to develop an actual, classically feasible algorithm for finding such efficient approximating circuits.
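The quoted statistics of the repeat-until-success protocol (mean $3$, variance $6$) are consistent with each round succeeding independently with probability $1/3$; this interpretation is ours, not a claim taken from \cite{CuiWang}. A one-line check of the geometric-distribution formulas:

```python
from fractions import Fraction

# For a geometric random variable with per-round success probability p,
# the expected number of rounds is 1/p and the variance is (1-p)/p^2.
p = Fraction(1, 3)
mean = 1 / p
variance = (1 - p) / p**2
print(mean, variance)  # prints: 3 6
```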
\subsection{Useful additional gates.} Here we expand the metaplectic basis defined in section \ref{subsec:metaplectic:basis} with additional useful gates. 1) $P$ \emph{gates}. $P_j = I - (\omega^2+1) |j\rangle \langle j| = R_{|j\rangle} Q_j^2, \, j=0,1,2$. By design a $P$ gate has the $R$-count of 1. Any odd power of a $P$ gate also has the $R$-count of 1, while an even power of a $P$ gate has $R$-count of 0. Here is a useful observation regarding the cost of $P$ gate sequences: \begin{observ} \label{observ:two:P} Any gate in the group generated by $\{P_0, P_1, P_2\}$ can be effectively represented as a product of the global phase in $\{\pm 1\}$ and a circuit of the $R$-count of at most $1$. \end{observ} \begin{proof} Clearly $diag(-1,-1,-1)$ is the identity up to the global phase of $(-1)$ and has the $R$-count of $0$. Similarly, each of the gates $f_{01} = diag(-1,-1,1), \, f_{02} = diag(-1,1,-1), f_{12} = diag(1,-1,-1)$ is an $R$ gate up to the global phase of $(-1)$ and has the $R$-count of $1$. Now, any gate in the group generated by $\{P_0, P_1, P_2\}$ is of the form $diag((-\omega^2)^{d_0},(-\omega^2)^{d_1},(-\omega^2)^{d_2})= diag((-1)^{d_0},(-1)^{d_1},(-1)^{d_2}) \times diag(\omega^{2\,d_0},\omega^{2\,d_1},\omega^{2\,d_2})$. The second factor in this product has the $R$-count of $0$ by convention, and the first factor is either $\pm I$ or one of the $R$ gates or one of the $f_{01}, \, f_{02},\, f_{12}$ gates and has the $R$-count of at most $1$. \end{proof} 2) $\mbox{SWAP}$ gate.
While it is intuitively clear that the two-qutrit $\mbox{SWAP}$ gate can be performed by pure braiding, a direct computation leads to the following \begin{observ} \label{observ:SWAP} $\mbox{SWAP} = $ $(\tau_{1,2} \otimes I) \mbox{SUM}_{1,2} \mbox{SUM}_{2,1} \mbox{SUM}_{2,1} \mbox{SUM}_{1,2}$ \end{observ} Here $\tau_{1,2}$ is the single-qutrit transposition $|1\rangle \leftrightarrow |2\rangle$ (that can be expressed through already available transpositions as $\tau_{1,2}= \tau_{0,2} \tau_{0,1} \tau_{0,2}$). By the usual notation convention, here and everywhere $\mbox{SUM}_{j,k}$ in a multi-qutrit context is shorthand for the two-qutrit sum gate applied to the $j$-th qutrit as the control and the $k$-th qutrit as the target (tensored with the identity gates on all other qutrits). \bigskip 3) Axial reflection. The following is key for our circuit synthesis: \begin{definition} Consider an integer $n\geq 1$ and let $|j\rangle, j = 0,\ldots,3^n-1$ be an element of the standard $n$-qutrit basis. The operator $R_{|j\rangle} = I^{\otimes n} -2 \, |j\rangle \langle j|$ is called an $n$-qutrit \emph{axial reflection} (operator). \end{definition} Clearly it is indeed a reflection w.r.t. the hyperplane orthogonal to $|j\rangle$. \section{Exact Single-Qutrit and Approximate Two-Level States.} Consider the field of \emph{Eisenstein rationals} $\mathbb Q(\omega)$, which is a quadratic extension of $\mathbb Q$. $\mathbb Z[\omega]$ is its ring of integers, called the ring of \emph{Eisenstein integers}. $\mathbb Z[\omega]$ has group of units isomorphic to $\mathbb Z_6$, generated by $-\omega^2 = 1 + \omega$. The two core tools needed for effective synthesis of metaplectic circuits are described in Lemmas \ref{lem:core:short:column} and \ref{lem:multi:two:level} below.
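Both the $P$-gate identity $P_j = R_{|j\rangle}\, Q_j^2$ and observation \ref{observ:SWAP} of the previous subsection amount to finite matrix identities, so they can be re-verified numerically. An illustrative NumPy sketch (the matrix conventions are ours; $\mbox{SUM}_{1,2}$ and $\mbox{SUM}_{2,1}$ are built directly from their action on basis states):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)

# P_2 = I - (w^2 + 1)|2><2| equals R_{|2>} Q_2^2
P2 = np.eye(3, dtype=complex)
P2[2, 2] -= w**2 + 1
R2, Q2 = np.diag([1, 1, -1]).astype(complex), np.diag([1, 1, w])
assert np.allclose(P2, R2 @ Q2 @ Q2)

def sum_gate(control, target):
    """9x9 matrix of SUM_{control,target} on |j> (x) |k>, qutrits 1, 2."""
    M = np.zeros((9, 9))
    for j in range(3):
        for k in range(3):
            if (control, target) == (1, 2):
                M[3 * j + (j + k) % 3, 3 * j + k] = 1
            else:  # control = 2, target = 1
                M[3 * ((j + k) % 3) + k, 3 * j + k] = 1
    return M

tau12 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
S12, S21 = sum_gate(1, 2), sum_gate(2, 1)
lhs = np.kron(tau12, np.eye(3)) @ S12 @ S21 @ S21 @ S12

# lhs reproduces the two-qutrit SWAP |j,k> -> |k,j>
SWAP = np.zeros((9, 9))
for j in range(3):
    for k in range(3):
        SWAP[3 * k + j, 3 * j + k] = 1
assert np.allclose(lhs, SWAP)
```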
\begin{lemma} [``Short column lemma''] \label{lem:core:short:column} Consider a unitary single-qutrit state $|\psi\rangle = (u \, |0\rangle + v \, |1\rangle + w \, |2\rangle)/\sqrt{-3}^L$ where $u,v,w \in \mathbb Z[\omega]; L \in \mathbb Z$. 1) There is an effectively synthesizable metaplectic circuit $c$ with the $R$-count at most $L+1$ such that $c \, |\psi\rangle \in \{|0\rangle, |1\rangle, |2\rangle\}$. 2) The classical cost of finding such a circuit is linear in $L$. \end{lemma} Before proving the lemma, we need to handle one special case and make one algebraic observation. \begin{lemma} [Special case.] \label{lem:zero:exponent} If $|\psi\rangle$ is a unitary state whose coefficients in the computational basis are Eisenstein integers, then 1) One and only one coefficient is non-zero; 2) This non-zero coefficient is an Eisenstein integer unit; 3) $|\psi\rangle$ can be reduced to one of the computational basis states using at most one $P$ gate. \end{lemma} \begin{proof} If $\psi_0, \ldots, \psi_N$ are the coefficients, then $\sum_{j=0}^{N} |\psi_j|^2 = 1$. Since for any $j$, $|\psi_j|^2$ is a non-negative integer, all the coefficients, except one, some $\psi_{j_*}$, must be zero, while $|\psi_{j_*}|^2=1$ and hence $\psi_{j_*}$ is a unit in $\mathbb Z[\omega]$. Therefore $\psi_{j_*} = (-\omega^2)^d$ and $(-\omega^2)^{-d \mod 6} \, \psi_{j_*} = 1$. Hence it is easy to find a $P$ gate of the form $G=I\otimes \ldots P_j^{-d \mod 6} \ldots \otimes I$ such that $G |\psi\rangle$ is a standard basis vector. \end{proof} Let us introduce the finite ring $\mathbb Z_3[\omega] = \mathbb Z[\omega]/(3\,\mathbb Z[\omega])$. This is a ring with exactly nine elements $\{0,1,2,\omega, 2\, \omega, 1+\omega, 1+ 2\, \omega, 2+\omega, 2+ 2\, \omega\}$. Let $\rho : \mathbb Z[\omega] \rightarrow \mathbb Z_3[\omega]$ be the natural epimorphism. By construction, its kernel consists of elements that are divisible by $3$.
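The arithmetic of $\mathbb Z_3[\omega]$ used in the sequel is small enough to tabulate exhaustively. The following Python sketch represents a residue $a + b\,\omega$ as a pair $(a, b)$ with $\omega^2 = -1 - \omega$, and computes the orbits of the unit action together with the reduced norm; it anticipates (and confirms) the orbit observation below.

```python
# Direct computation behind the orbit split of Z_3[w] under the
# Eisenstein unit action.  A residue a + b*w is the pair (a, b) mod 3,
# with w^2 = -1 - w used to reduce products.

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c - b * d) % 3)

def norm(x):
    """Reduced norm |a + b*w|^2 = a^2 - a*b + b^2 mod 3."""
    a, b = x
    return (a * a - a * b + b * b) % 3

u = (1, 1)  # -w^2 = 1 + w, generator of the unit group

def orbit(z):
    seen, cur = set(), z
    while cur not in seen:
        seen.add(cur)
        cur = mul(u, cur)
    return frozenset(seen)

orbits = {orbit((a, b)) for a in range(3) for b in range(3)}
sizes = sorted(len(o) for o in orbits)
print(sizes)  # [1, 2, 6]
```

The nine residues fall into one orbit of each size $1$, $2$, and $6$; the reduced norm is $0$ on the one- and two-element orbits and $1$ on the six-element orbit.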
Both the complex conjugation $*: \mathbb Z[\omega] \rightarrow \mathbb Z[\omega]$ and the norm map $|*|^2 : \mathbb Z[\omega] \rightarrow \mathbb Z$ can be consistently factored down to the morphism $\tilde{*}: \mathbb Z_3[\omega] \rightarrow \mathbb Z_3[\omega]$ and the reduced norm map $\tilde{|*|^2} : \mathbb Z_3[\omega] \rightarrow \mathbb Z_3$ (since both $\rho \, *$ and $|*|^2 \mod 3$ annihilate the kernel of $\rho$). For the benefit of several future constructions, we need to analyze the action of the group of Eisenstein units $EU=\langle -\omega^2 \rangle$ on $\mathbb Z_3[\omega]$. \begin{observ} \label{observ:orbits} $\mathbb Z_3[\omega]$ is split into three orbits under the action of the group $EU$ as follows: 0) The one-element orbit $O_0$ of $0$; Note that $|0|^2=0$. 1) The six-element orbit $O_1$ of $1$; Note that for any $z \in O_1$, $|z|^2 = 1 \mod 3$. 2) The two-element orbit $O_2$ of $1+2\,\omega$; Note that for any $z \in O_2$, $|z|^2 = 0 \mod 3$. \end{observ} This split is established by direct computations. \begin{proof} (Of the ``Short column lemma''.) We will be proving the lemma by induction on $L$. For $L=0$ the claim follows from lemma \ref{lem:zero:exponent}. Consider a state with denominator exponent $L>0$. Note that $\sqrt{-3}=1+2\, \omega$ and thus it is an Eisenstein integer. It follows, of course, that $3 = -(1+2\,\omega)^2$ and thus $3$ is divisible by both $1+2\,\omega$ and $(1+2\,\omega)^2$ in $\mathbb Z[\omega]$. The state $|\psi\rangle$ is immediately reducible to a state of the form $1/\sqrt{-3}^{L-1} (u'\, |0\rangle+ v'\, |1\rangle + w'\, |2\rangle)$ if each of $u,v,w$ is divisible by $1+2\,\omega$, and it is immediately reducible to a state of the form $1/\sqrt{-3}^{L-2} (u''\, |0\rangle+ v''\, |1\rangle + w''\, |2\rangle)$ if each of $u,v,w$ is divisible by $3$ in $\mathbb Z[\omega]$. From the unitarity condition on $|\psi\rangle$ we have $|u|^2+|v|^2+|w|^2=3^L$.
Since $L>0$, $3^L \mod 3=0$ and thus $(|u|^2 \mod 3)+(|v|^2 \mod 3)+(|w|^2 \mod 3)=0$. By direct computation we check, however, that for any $z \in \mathbb Z[\omega]$, $|z|^2 \mod 3$ is either $0$ or $1$. By a simple exclusion argument, for $(|u|^2 \mod 3)+(|v|^2 \mod 3)+(|w|^2 \mod 3)=0$ to hold, either all the summands must be $0$ or all the summands must be $1$. Let us distinguish the two cases. Case $0$: $(|u|^2 \mod 3)=(|v|^2 \mod 3)=(|w|^2 \mod 3)=0$ As per the above observation \ref{observ:orbits}, the residues $\rho(u), \rho(v), \rho(w)$ belong to the union of orbits $O_0$ and $O_2$. In the edge case when all three belong to the orbit $O_0$, each of $u,v,w$ is divisible by $3$. As per the earlier remark, $|\psi\rangle$ is reducible to the case of denominator exponent $L-2$ and we do not need to apply any gates for this reduction. More generally, within case $0$ each of the residues $\rho(u), \rho(v), \rho(w)$ is divisible by $\rho(1+2\,\omega)$. However, if $\rho(z)$ is divisible by $\rho(1+2\,\omega)$ then $z$ is divisible by $1+2\,\omega$ in $\mathbb Z[\omega]$. Indeed, the divisibility of the residue implies that $z=(1+2\,\omega)\, z' + 3\,z'', \, z', z'' \in \mathbb Z[\omega]$, but, as we noted, $3$ is divisible by $1+2\,\omega$ in $\mathbb Z[\omega]$. Thus the general subcase allows reduction to the denominator exponent $L-1$ without application of any gates. Case $1$: $(|u|^2 \mod 3)=(|v|^2 \mod 3)=(|w|^2 \mod 3)=1$. We are going to find a short circuit $c_L$ of $R$-count at most $1$ such that $c_L \, |\psi\rangle$ is reduced to a case with denominator exponent at most $L-1$. (This would complete the induction step.)
Suppose first that $\rho(v)= \rho(w) = \omega^2 \, \rho(u) \in \mathbb Z_3[\omega]$, which means that $v=\omega^2 \, u + 3 \, v', \, w = \omega^2 \, u + 3 \, w'$ for some $v',w' \in \mathbb Z[\omega]$, and it follows that $s_2 \, |\psi\rangle = (-(u + \omega \, v' + \omega \, w')\, |0\rangle - (v' + \omega \, w') \, |1\rangle - (\omega\, v'+w') \, |2\rangle)/\sqrt{-3}^{L-1}$. Thus, in this particular special case the denominator exponent is reduced to $L-1$ by application of the single $s_2$ gate, which has $R$-count $0$. In general, since $(|\omega^2 \, u|^2 \mod 3)=(|u|^2 \mod 3)=(|v|^2 \mod 3)=(|w|^2 \mod 3)=1$, the residues $\omega^2 \, \rho(u),\rho(v),\rho(w)$ must belong to the same orbit $O_1$ of the unit group $EU$. This means, in particular, that we can effectively find integers $d_v, d_w$ such that $\omega^2 \, \rho(u)=\rho((-\omega^2)^{d_v}\,v)=\rho((-\omega^2)^{d_w}\,w)=r \in \mathbb Z_3[\omega]$. Hence the short circuit $c_L = s_2\, P_1^{d_v} \, P_2^{d_w}$ reduces the state as shown. As per Observation \ref{observ:two:P}, the factor $P_1^{d_v} \, P_2^{d_w}$ in this circuit is equivalent to a circuit of $R$-count at most $1$ up to a possible global phase of $\pm 1$. This completes the induction step. \end{proof} \begin{example} Consider the unitary column $|K\rangle = ((2 + i \,\sqrt{3})\,|0\rangle + |1\rangle+ |2\rangle )/3$. $|K\rangle$ is reduced to a basis state at an $R$-count of $2$ as follows: $s_2 \, R_{|0\rangle}\, Q_1^2\, Q_2^2 \, s_2 \, R_{|0\rangle}\, |K\rangle = |0\rangle$. Note that $s_2 \, R_{|0\rangle}\, Q_1^2\, Q_2^2 \, s_2 \, R_{|0\rangle} = -\omega \, \sigma_2 \, R_{|0\rangle}\, \sigma_1^2 \, \sigma_3^2 \, \sigma_2 \, R_{|0\rangle}$.
\end{example} Below we present the method suggested by Lemma \ref{lem:core:short:column} in algorithmic format. \begin{algorithm}[H] \caption{Reduction of a short unitary column} \label{alg:short:column:reduction} \algsetup{indent=2em} \begin{algorithmic}[1] \REQUIRE{$L \in \mathbb Z$, $u,v,w \in \mathbb Z[\omega]$} \STATE { $ret \gets \langle \mbox{empty} \rangle$ } \WHILE {$L > 0$} \STATE{$\{\nu u, \nu v, \nu w \}=\{ |u|^2, |v|^2, |w|^2\} \mod{3}$} \IF {$\nu u =\nu v = \nu w = 1$} \STATE{Find $d_v, d_w \in \{-2,-1,0,1,2,3\}$ such that} \STATE{ $ \omega^2 \, u \equiv (-\omega^2)^{d_v} v \equiv (-\omega^2)^{d_w} w \mod 3 $} \STATE{$\{u,v,w\} \gets \{u,(-\omega^2)^{d_v} v,(-\omega^2)^{d_w} w\}$} \STATE {$v' \gets (v - \omega^2\, u)/3; w' \gets (w - \omega^2\, u)/3$} \STATE{$\{u,v,w\} \gets$ } \STATE{$ \{- (u+ \omega \, v'+\omega \, w'), -(v' + \omega\, w'), -(\omega\, v' + w') \}$} \STATE {$ret \gets s_2\, P_1^{d_v} \, P_2^{d_w} \, ret$} \ELSE \STATE {$\{u,v,w\} \gets \{u,v,w\}/(2\, \omega+1)$} \ENDIF \STATE{$L \gets L-1$} \ENDWHILE \STATE{ At this point $L=0$; only one of $u,v,w$ is non-zero.} \STATE{ Find a classical $g$ such that $g (u |0\rangle + v |1\rangle + w |2\rangle) = u' |0\rangle$} \STATE{ Find $d \in \{-2,-1,0,1,2,3\}$ such that $(-\omega^2)^d = u'$ } \RETURN { $P_0^{-d} \, g \, ret$ } \end{algorithmic} \end{algorithm} \begin{lemma} \label{lem:core:two:level:state} Consider a "two-level" unitary single-qutrit state $|\phi\rangle =x \, |0\rangle + y \, |1\rangle + z \, |2\rangle$ where $x \, y \, z = 0$, and let $\varepsilon$ be an arbitrarily small positive number.
1) There is a family of effectively synthesizable states of the form $|\psi_{\varepsilon}\rangle = (u_{\varepsilon} \, |0\rangle + v_{\varepsilon} \, |1\rangle + w_{\varepsilon} \, |2\rangle)/\sqrt{-3}^{L_{\varepsilon}}$; $u_{\varepsilon},v_{\varepsilon},w_{\varepsilon} \in \mathbb Z[\omega]; L_{\varepsilon} \in \mathbb Z$ such that $|\psi_{\varepsilon}\rangle$ is an $\varepsilon$-approximation of $|\phi\rangle$ and $L_{\varepsilon} \leq 4\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$. 2) The expected average classical cost of finding each $|\psi_{\varepsilon}\rangle$ is polynomial in $\log(1/\varepsilon)$. \end{lemma} A proof of this lemma is found in Appendix \ref{sec:single:qutrit:approx}. The proof is very technical. It combines elementary geometry with rather profound number theory, which is based on a mild number-theoretical hypothesis (Conjecture \ref{conj:norm:eq:solvability}). It follows from the two lemmas that a two-level unitary state can be prepared with precision $\varepsilon$ from a standard basis state using a metaplectic circuit of $R$-count at most $4\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$, and in fact this readily generalizes to multiple qutrits as follows: \begin{lemma} ["Two-level approximation lemma"] \label{lem:multi:two:level} Consider an integer $n\geq 1$ and let $|\phi\rangle$ be a unitary $n$-qutrit state that has at most two non-zero components in the standard $n$-qutrit basis. For arbitrarily small $\varepsilon>0$: 1) There is an effectively synthesizable metaplectic circuit $c$ with $R$-count at most $4\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$ such that $c \, |0\rangle$ is an $\varepsilon$-approximation of $|\phi\rangle$. 2) The expected average classical cost of finding such a circuit is polynomial in $\log(1/\varepsilon)$.
\end{lemma} Before proving the lemma we need two lesser technical facts that are useful in their own right: \begin{lem} \label{lem:transitive:on:axial} Let $|b_1\rangle$ and $|b_2\rangle$ be two standard $n$-qutrit basis states. There exists an effectively and exactly representable classical permutation $\pi$ such that $|b_2\rangle= \pi \, |b_1\rangle$. \end{lem} \begin{proof} In the case of $n=1$ the $\mathbb Z_3$ group generated by $\mbox{INC}$ acts transitively on the standard basis $\{|0\rangle,\, |1\rangle,\, |2\rangle\}$. Consider $|b_k\rangle= |(b_k)_1,\ldots, (b_k)_n\rangle, \, n\geq 1, \, k=1,2$. Let $\pi_j \in \{I,\mbox{INC}, \mbox{INC}^2\}$ be such that $\pi_j |(b_1)_j\rangle = |(b_2)_j\rangle, j = 1,\ldots, n$. Then $\pi = \otimes_{j=1}^{n} \pi_j$ is the desired permutation. \end{proof} \begin{lem} \label{lem:two:basis:vectors} 1) For any two standard $n$-qutrit basis vectors $|j\rangle$ and $|k\rangle$ there exists a classical effectively representable metaplectic gate $g$, such that for $|j'\rangle= g |j\rangle$ and $|k'\rangle= g |k\rangle$ we have $|j' - k'| < 3$. 2) Such a gate $g$ can be effectively represented with at most $(n-1)$ instances of the $\mbox{SUM}$, $\mbox{SUM}^{\dagger}$ or $\mbox{SWAP}$ gates. \end{lem} In other words, the base-$3$ representations of $j'$ and $k'$ are the same except possibly for the least significant base-$3$ digit. \begin{proof} At $n=1$ there is nothing to prove. Given Lemma \ref{lem:transitive:on:axial}, for $n=2$ the general pair of basis vectors can be reduced to the case where $|j\rangle = |00\rangle$. When $|k\rangle = |0,k_1\rangle$ no further transformations are needed; when $|k\rangle = |k_0,0\rangle$ a single $\mbox{SWAP}$ suffices. The remaining cases are covered by $\mbox{SUM}_{2,1}^{\dagger} |11\rangle = \mbox{SUM}_{2,1} |21\rangle = |01\rangle$, $\mbox{SUM}_{2,1} |12\rangle = \mbox{SUM}_{2,1}^{\dagger} |22\rangle = |02\rangle$.
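A quick check of the $n=2$ identities above, under our reading of the notation that $\mbox{SUM}_{2,1}|j,k\rangle = |(j+k) \bmod 3, k\rangle$ (control on the second qutrit), with the adjoint subtracting instead of adding:

```python
def sum21(j, k, dagger=False):
    # SUM_{2,1}|j,k> = |(j+k) mod 3, k>; the adjoint maps to |(j-k) mod 3, k>.
    s = -1 if dagger else 1
    return ((j + s * k) % 3, k)
```

Under this convention each of the four listed identities maps its input to a pair of basis states differing only in the least significant digit.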
Suppose $n>2$ and the lemma has been proven for multi-qutrit vectors in fewer than $n$ qutrits. Let $|j\rangle=|j_1\, \ldots, j_{n-1}, j_n\rangle, |k\rangle=|k_1\, \ldots, k_{n-1}, k_n\rangle$ be base-$3$ representations of the two vectors. By the induction hypothesis, one can effectively find an $(n-1)$-qutrit classical metaplectic gate $g_{n-1}$ such that the images $(g_{n-1} \otimes I) |j_1\, \ldots, j_{n-1}, j_n\rangle = |\ldots, j'_{n-1}, j'_n\rangle$ and $(g_{n-1} \otimes I) |k_1\, \ldots, k_{n-1}, k_n\rangle= |\ldots, k'_{n-1}, k'_n\rangle$ differ only at the $(n-1)$-st and $n$-th positions. Select a two-qutrit classical gate $g_2$, as shown above, such that $g_2 |j'_{n-1}, j'_n\rangle$ and $g_2 |k'_{n-1}, k'_n\rangle$ differ only in the last position. Then, by setting $g=(I^{\otimes (n-2)}\otimes g_2)(g_{n-1} \otimes I)$ we complete the induction step. \end{proof} \begin{proof} (Of the two-level state approximation lemma.) We start by reducing $|\phi\rangle$ to the form $x \, |a_1\ldots a_{n-1},d\rangle + z \, |a_1\ldots a_{n-1},f\rangle, \, a_1,\ldots,a_{n-1},d,f \in \{0,1,2\}$ using a classical circuit $b$ as described in Lemma \ref{lem:two:basis:vectors}. Let $e \in \{0,1,2\}$ be the "missing" digit such that $\{d,e,f\}$ is a permutation of $\{0,1,2\}$. Using Lemma \ref{lem:core:two:level:state} we can effectively approximate the single-qutrit state $x\,|d\rangle + z \, |f\rangle$ by an Eisenstein state of the form $|\eta\rangle = (u\,|d\rangle + v\, |e\rangle+ w \, |f\rangle)/\sqrt{-3}^k, u,v,w \in \mathbb Z[\omega], k \in \mathbb Z$ to precision $\varepsilon$ with $k \leq 4\, \log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$. Using Lemma \ref{lem:core:short:column} we can effectively synthesize a single-qutrit metaplectic circuit $c_1$ with $R$-count at most $k+1$ such that $c_1 \, |0\rangle = |\eta\rangle$. Let $c_n = (I^{\otimes(n-1)} \otimes c_1)$. Clearly $b^{\dagger} \, c_n \, |a_1\ldots a_{n-1},0\rangle$ is an $\varepsilon$-approximation of $|\phi\rangle$.
But $|a_1\ldots a_{n-1},0\rangle$ can be prepared exactly from $|0\rangle$ using at most $n-1$ local $\mbox{INC}$ gates, which finalizes the desired circuit. \end{proof} \begin{corol} \label{corol:two:level:reflections} Consider an integer $n\geq 1$ and let $|\phi\rangle$ be a unitary $n$-qutrit state that has at most two non-zero components in the standard $n$-qutrit basis, and consider the corresponding Householder reflection operator $R_{|\phi\rangle} = I^{\otimes n} -2 \, |\phi\rangle \langle \phi|$. For arbitrarily small $\varepsilon>0$: 1) There is an effectively synthesizable metaplectic circuit $c$ with $R$-count at most $4\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$ such that $ c \, R_{|\overline{0}\rangle} \, c^{\dagger}$ is an $\varepsilon$-approximation of $R_{|\phi\rangle}$ (where $|\overline{0}\rangle = |0\rangle^{\otimes n}$). 2) The expected average classical cost of finding such a circuit is polynomial in $\log(1/\varepsilon)$. \end{corol} \begin{proof} As per \cite{Kliuchnikoff}, if the distance between the states $|\phi\rangle$ and $|\psi\rangle$ is less than $\varepsilon/(2\,\sqrt{2})$, then the distance between $R_{|\phi\rangle}$ and $R_{|\psi\rangle}$ is less than $\varepsilon$. Using Lemma \ref{lem:multi:two:level} one can effectively find a metaplectic circuit $c$ with $R$-count in $4\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$ such that $c \, |\overline{0}\rangle $ approximates $|\phi\rangle$ to precision $\varepsilon/(2\,\sqrt{2})$, and the corollary follows. \end{proof} This result applies in a straightforward manner to one-parameter special diagonal unitaries: \begin{corol} \label{corol:two:level:diagonal} Consider an integer $n\geq 1$ and an $n$-qutrit diagonal operator of the form $D= I^{\otimes n} + (e^{i\, \theta}-1)\, |j\rangle \langle j| + (e^{-i\, \theta} -1)\, |k\rangle \langle k|$ where $j,k \in \{0,\ldots,3^n-1\}, j \neq k$.
For arbitrarily small $\varepsilon>0$ there is an effectively synthesizable circuit at distance $< \varepsilon$ from $D$ composed of at most two axial $n$-qutrit reflection operators and local metaplectic gates with a total $R$-count of 1) at most $8\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$ when $n=1$, and 2) at most $16\,\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon)))$ when $n>1$. \end{corol} Indeed, the diagonal unitary of this form is equal to $r_1 \, r_2$ where $r_1= I^{\otimes n} - |j\rangle \langle j| - |k\rangle \langle k| + |j\rangle \langle k| + |k\rangle \langle j|$, $r_2= I^{\otimes n} - |j\rangle \langle j| - |k\rangle \langle k| + e^{ -i \, \theta}\, |j\rangle \langle k| + e^{ i \, \theta}\,|k\rangle \langle j|$, and both $r_1$ and $r_2$ are two-level reflection operators. We note that for $n=1$, $r_1$ is a Clifford gate and has trivial cost. Since multi-qutrit axial reflections are going to grow in importance below, we offer a decomposition method for them in the next section. \section{Implementation of Axial Reflection Operators} \label{sec:axial:reflections} Let $|b\rangle$ be a standard $n$-qutrit basis state. Then an \emph{axial reflection operator} $R_{|b\rangle}$ is defined as $R_{|b\rangle}=I^{\otimes n} - 2\,|b\rangle \langle b|$. Clearly, $R_{|b\rangle}$ is represented by a diagonal matrix that has a $-1$ on the diagonal in the position corresponding to $|b\rangle$ and $+1$ in all other positions. As per Lemma \ref{lem:transitive:on:axial} any two axial reflection operators are equivalent by conjugation with an effectively and exactly representable classical permutation. Since we consider the cost of classical permutations to be negligible compared to the cost of the $R$ gates, we hold that for a fixed $n$ all the $n$-qutrit axial reflection operators have essentially the same cost. We are going to show in this section that all the $n$-qutrit axial reflection operators can be effectively and exactly represented.
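Since axial reflections are diagonal, the conjugation equivalence of Lemma \ref{lem:transitive:on:axial} is just a permutation of the diagonal. A small Python sketch for $n=1$ (assuming $\mbox{INC}|j\rangle = |j+1 \bmod 3\rangle$; the helper names are ours):

```python
def axial_diag(pos, dim):
    # Diagonal of R_{|pos>} = I - 2|pos><pos|.
    return [-1 if i == pos else 1 for i in range(dim)]

def conjugate_by_perm(diag, perm):
    # If P|j> = |perm(j)>, then P D P^{-1} carries the entry at j to perm(j).
    out = [0] * len(diag)
    for j, d in enumerate(diag):
        out[perm(j)] = d
    return out

inc = lambda j: (j + 1) % 3  # INC|j> = |j+1 mod 3>
```

Conjugating $R_{|0\rangle}$ by $\mbox{INC}$ yields $R_{|1\rangle}$, and so on cyclically, so all three single-qutrit axial reflections are indeed classically equivalent.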
In view of the above, it suffices to represent just one such operator for each $n$. We start with the somewhat special case of $n=2$. \begin{observ} \label{observ:CFlip} The circuit $(I \otimes R_{|0\rangle}) \, \mbox{SUM} (I \otimes R_{|1\rangle}) \, \mbox{SUM} (R_{|2\rangle} \otimes R_{|2\rangle}) \, \mbox{SUM}$ is an exact representation of $(-1) R_{|20\rangle}$. \end{observ} This is established by direct matrix computation. We are going to generalize this solution to arbitrary $n \geq 2$ and note that the occurrence of the global phase $(-1)$ is exceptional and happens only at $n=2$. \begin{lemma} Given $n>2$, denote by $\bar{2}$ in the context of this lemma a string of $n-2$ occurrences of $2$. Then the circuit $c_{20\bar{2}}=$ $(I \otimes R_{|0\bar{2}\rangle}) \, \mbox{SUM}_{1,2} \, (I \otimes I \otimes R_{|\bar{2}\rangle}) \, (I \otimes R_{|1\bar{2}\rangle}) $ $ \mbox{SUM}_{1,2} \, \mbox{SWAP}_{1,2} \, (I \otimes R_{|2\bar{2}\rangle}) \, \mbox{SWAP}_{1,2} \, (I \otimes R_{|2\bar{2}\rangle}) \, \mbox{SUM}_{1,2}$ is an exact representation of the operator $R_{|20\bar{2}\rangle}$. \end{lemma} \begin{proof} Let $|b\rangle$ be an element of the standard $n$-qutrit basis. The circuit consists of diagonal operators and three occurrences of $\mbox{SUM}_{1,2}$. Let $|b_1b_2\bar{b}\rangle$ be the ternary representation of $|b\rangle$ where $\bar{b}$ stands for the substring of the $n-2$ least significant ternary digits of $b$. It is almost immediate that the circuit $c_{20\bar{2}}$ represents a diagonal unitary. Indeed, when the input is $|b_1b_2\bar{b}\rangle$ we can only get $\pm |b_1b_2\bar{b}\rangle$, $\pm |b_1\,\mbox{INC}\,b_2\bar{b}\rangle$ or $\pm |b_1\,\mbox{INC}^2b_2\bar{b}\rangle$, up to swap, after applying each subsequent operator of the circuit, and clearly we can only get $\phi |b_1b_2\bar{b}\rangle, \, \phi=\pm 1$ after the entire circuit is applied. The lemma claims that $\phi=-1$ if and only if $b=20\bar{2}$. Consider the cases when $b_1=0$ or $b_1=1$.
It is easy to see that, whatever the value of $b_2$, one and only one of the operators $(I \otimes R_{|0\bar{2}\rangle}), (I \otimes R_{|1\bar{2}\rangle}), (I \otimes R_{|2\bar{2}\rangle})$ activates $R_{|\bar{2}\rangle}$ on $|\bar{b}\rangle$, and this activation always cancels out with $(I \otimes I \otimes R_{|\bar{2}\rangle})$ (since $R^2$ is the identity for any reflection $R$). So the result is the identity. If $b_1=2, b_2 \neq 0$ the five rightmost operations of the circuit produce $|2\rangle \otimes (\mbox{INC}^2 |b_2\rangle) \otimes (R_{|\bar{2}\rangle} |\bar{b}\rangle)$, an action that is subsequently canceled out by $I \otimes I \otimes R_{|\bar{2}\rangle}$. It is also easy to see that for $b_2=1$ or $b_2=2$ the remaining two reflections $R_{|0\bar{2}\rangle}$ and $R_{|1\bar{2}\rangle}$ act trivially. Therefore the net result is the identity. We are left with the important case of $b_1=2, b_2=0$. By definition, $\mbox{SUM}_{1,2} |20\bar{b}\rangle = |22\bar{b}\rangle$, and then the subsequence $\mbox{SWAP}_{1,2} \, (I \otimes R_{|2\bar{2}\rangle}) \, \mbox{SWAP}_{1,2} \, (I \otimes R_{|2\bar{2}\rangle})$ activates the operator $R_{|\bar{2}\rangle}$ on $|\bar{b}\rangle$ twice, and of course these two activations cancel each other. We proceed with $\mbox{SUM}_{1,2} |22\bar{b}\rangle = |21\bar{b}\rangle$, and $I \otimes R_{|1\bar{2}\rangle}$ activates the $R_{|\bar{2}\rangle}$ on $|\bar{b}\rangle$, which is immediately cancelled out by the $I \otimes I \otimes R_{|\bar{2}\rangle}$. Finally $\mbox{SUM}_{1,2} |21\bar{b}\rangle = |20\bar{b}\rangle$, and $I \otimes R_{|0\bar{2}\rangle}$ activates $R_{|\bar{2}\rangle}$ on $|\bar{b}\rangle$ as desired. This applies the factor of $-1$ if and only if $\bar{b}=\bar{2}$, which is exactly what is claimed. \end{proof} Using this lemma we implement the operator $R_{|20\bar{2}\rangle}$ exactly by linear recursion. As we noted earlier, all the axial reflection operators in $n$ qutrits have the same $R$-count.
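The $n=2$ base case, Observation \ref{observ:CFlip}, can be confirmed by the direct matrix computation mentioned there. A self-contained Python sketch, assuming the convention $\mbox{SUM}|a,b\rangle = |a,(a+b) \bmod 3\rangle$ (control on the first qutrit):

```python
# Check that (I x R0) SUM (I x R1) SUM (R2 x R2) SUM equals (-1)*R_{|20>},
# using exact integer matrices.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    nA, nB = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(nA) for l in range(nB)]
            for i in range(nA) for k in range(nB)]

def eye(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def refl(pos, n):  # R_{|pos>} = I - 2|pos><pos|
    m = eye(n)
    m[pos][pos] = -1
    return m

# SUM|a,b> = |a, (a+b) mod 3> as a 9x9 permutation matrix.
SUM = [[0] * 9 for _ in range(9)]
for a in range(3):
    for b in range(3):
        SUM[3 * a + (a + b) % 3][3 * a + b] = 1

I3 = eye(3)
M = SUM                                   # rightmost factor applied first
M = matmul(kron(refl(2, 3), refl(2, 3)), M)
M = matmul(SUM, M)
M = matmul(kron(I3, refl(1, 3)), M)
M = matmul(SUM, M)
M = matmul(kron(I3, refl(0, 3)), M)
```

The resulting matrix $M$ is diagonal with $+1$ at the position of $|20\rangle$ (index $6$) and $-1$ elsewhere, i.e. exactly $(-1)R_{|20\rangle}$.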
Denote this $R$-count by $\mbox{rc}(n)$. \begin{observ} \label{obs:cost:axial:reflection} $\mbox{rc}(n)=\Theta((2+\sqrt{5})^n)$ when $n\rightarrow \infty$. \end{observ} \begin{proof} We have $\mbox{rc}(1)=1, \mbox{rc}(2)=4$ (see Observation \ref{observ:CFlip}). The recurrence $\mbox{rc}(n)=4\,\mbox{rc}(n-1)+\mbox{rc}(n-2), \mbox{rc}(1)=1, \mbox{rc}(2)=4$ can be solved in closed form as $\mbox{rc}(n)=((2+\sqrt{5})^n-(2-\sqrt{5})^n)/(2 \, \sqrt{5})$. Because $|2-\sqrt{5}|<1$ the $-(2-\sqrt{5})^n$ term is asymptotically insignificant. \end{proof} Thus the cost of the above exact implementation of the $n$-qutrit axial reflection operator is exponential in $n$. This defines several tradeoffs explored in the following sections. \section{Ancilla-free reflection-based universality} Consider an integer $n\geq 1$. For the duration of this section we set $N=3^n$. \begin{lemma} \label{lem:general:diagonal:unitary} Given a diagonal unitary $D \in U(N)$ and arbitrarily small $\varepsilon>0$, there is an effectively synthesizable $\varepsilon$-approximation of $D$ composed of a global phase factor, at most $2 \, (N-1)$ axial reflection operators, and metaplectic local gates with a total $R$-count that is 1) $16 \, (\log_3(1/\varepsilon) + O(\log(\log(1/\varepsilon))))$ when $n=1$, and 2) smaller than $16 \, (N-1)(\log_3(1/\varepsilon) + n + O(\log(\log(1/\varepsilon))))$ when $n>1$. \end{lemma} Indeed, a unitary diagonal $D$ is decomposed into a product of a global phase factor and $(N-1)$ special two-level diagonals as in Corollary \ref{corol:two:level:diagonal}. Each of the latter diagonals needs to be approximated to precision $\varepsilon/(N-1)$ with $\log_3(1/(\varepsilon/(N-1))) < \log_3(1/\varepsilon) + n$. In \cite{WWJD} Jesus Urias offers an effective $U(2)$ parametrization of the $U(N)$ group, whereby any $U \in U(N)$ is factored into a product of at most $N(N-1)/2$ special Householder reflections and possibly one diagonal unitary.
All reflections in that decomposition are two-level. \begin{comment} reflections that in their two non-trivial dimensions take up the form \[ \left(\begin{array}{cc} \sin(\varphi) & e^{i\,\theta} \, \cos(\varphi) \\ e^{-i\,\theta} \, \cos(\varphi) & -\sin(\varphi) \end{array}\right) \] \end{comment} This immediately leads to the following \begin{thm}{(General unitary decomposition, reflection style.)} \label{thm:new:decomposition:reflection} Given a $U \in U(N)$ in general position and small enough $\varepsilon>0$, the $U$ can be effectively approximated up to a global phase to precision $\varepsilon$ by an ancilla-free metaplectic circuit with $R$-count of at most $4\,(N+4)(N-1)(\log_3(1/\varepsilon)+ 2\, n + O(\log(\log(1/\varepsilon))))$ and at most $(N+4)(N-1)/2$ axial reflections (in $n$ qutrits). \end{thm} \begin{proof} It follows from \cite{WWJD} that $U$ is effectively decomposed into $N\,(N-1)/2$ special Householder reflections and possibly a diagonal unitary $D \in U(N)$ that may add up to $2\,(N-1)$ such reflections (see Lemma \ref{lem:general:diagonal:unitary}) to the decomposition, for a total of $(N+4)(N-1)/2$ reflections. Each of these allows an effective $\varepsilon/((N+4)(N-1)/2)$-approximation by a metaplectic circuit with $R$-count of at most $8\, (\log_3(1/\varepsilon)+ 2\, n + O(\log(\log(1/\varepsilon))))$ plus at most $2$ axial reflections as per Corollary \ref{corol:two:level:reflections}, and the cost bound claimed in the theorem follows. \end{proof} \begin{comment} This approximation scheme further applies to two-level special unitary operator, which is the crucial building block in what follows. Let $|j\rangle$ and $|k\rangle$ be two distinct elements of the standard $n$-qutrit basis. Then a \emph{special two-level unitary} with signature $[n;j,k]$ is a unitary operator of the form $I^{\otimes n} + (u-1) |j\rangle \langle j| + v\, |j\rangle \langle k| -v^*\,|k\rangle \langle j| + (u^*-1) |k\rangle \langle k|$ where $|u|^2+|v|^2=1$.
It turns out that an $n$-qutrit axial reflection, such as used in Thm \ref{thm:new:decomposition:reflection}, can be represented \emph{exactly} by a metaplectic circuit. However our best known solution for that has a significant cost when $n$ is large. It is convenient to define the $0$-qutrit axial reflection $R_{|\rangle}$ as the global phase factor $(-1)$. With this we have the following inductive \begin{thm} \label{thm:exact:axial:reflection} Given $n \geq 2$, denote by $\bar{2}$ a string of $n-2$ occurrences of $2$. (Empty string for $n=2$.) Then the circuit $c_{20\bar{2}}=$ $(I \otimes R_{|0\bar{2}\rangle}) \, \mbox{SUM}_{1,2} \, (I \otimes I \otimes R_{|\bar{2}\rangle}) \, (I \otimes R_{|1\bar{2}\rangle}) $ $ \mbox{SUM}_{1,2} \, \mbox{SWAP}_{1,2} \, (I \otimes R_{|2\bar{2}\rangle}) \, \mbox{SWAP}_{1,2} \, (I \otimes R_{|2\bar{2}\rangle}) \, \mbox{SUM}_{1,2}$ is an exact representation of the operator $R_{|20\bar{2}\rangle}$. \end{thm} Here $\mbox{SWAP}_{j,k}$ is the qutrit swapping operation. A simple $\mbox{SUM}/\mbox{INC}$ circuit for $\mbox{SWAP}$ is given in Appendix A. \end{comment} The best known cost of exact metaplectic implementation of an $n$-qutrit axial reflection is in $\Theta((2+\sqrt{5})^n)$ as per Observation \ref{obs:cost:axial:reflection}. This may become prohibitive when $n$ is large. In the next section we show how to curb the $R$-count at the cost of roughly doubling the width of the circuits. \section{Ancilla-assisted approximation of arbitrary unitaries} An alternative way of implementing a two-level unitary operator is through a network of strongly controlled gates.
For $V \in U(3)$ introduce $C^n(V) \in U(3^{n+1})$ where $C^n(V)|j_1,\ldots,j_n,j_{n+1}\rangle = $ $ \begin{cases} |j_1,\ldots,j_n \rangle \otimes V |j_{n+1}\rangle, & j_1 = \cdots = j_n= 2 \\ |j_1,\ldots,j_n,j_{n+1}\rangle , & \mbox{otherwise.} \end{cases}$ The $C^1(\mbox{INC})$ gate, \begin{equation} \label{eq:new:CINC:gate} C^1(\mbox{INC})|j,k\rangle = |j,(k+\delta_{j,2}) \mod{3}\rangle \end{equation} is going to be of particular interest in this context. Bullock et al. \cite{BullockEtAl} offer a certain ancilla-assisted circuit that emulates $C^n(V)$ using only two-qudit gates. The circuit requires $n-1$ ancillary qutrits, $4\, (n-1)$ instances of the $C^1(\mbox{INC})$ gate (see equation (\ref{eq:new:CINC:gate})) and a single $C^1(V)$ gate. We do not believe that the classical $C^1(\mbox{INC})$ gate can be represented exactly, and we must resort to approximating $C^1(\mbox{INC})$ to the desired precision. \begin{lem} \label{lem:new:CINC:approximation} $C^1(\mbox{INC})$ (as defined by (\ref{eq:new:CINC:gate})) can be approximated to precision $\varepsilon$ by a metaplectic circuit with $R$-count at most $16 \, \log_3(1/\varepsilon) + O(\log(\log(1/\varepsilon)))$ and $2$ two-qutrit axial reflections. \end{lem} \begin{proof} $C^1(\mbox{INC})$ is the composition of two reflection operators: $C^1(\mbox{INC}) = R_{|2\rangle \otimes v_{2}} \, R_{|2\rangle \otimes v_{0}}$ where $v_{0}=(|1\rangle-|2\rangle)/\sqrt{2}, \, v_{2}=(|0\rangle-|1\rangle)/\sqrt{2}$, and the lemma follows. \end{proof} \begin{corol} \label{corol:new:CnV:ancilla:assisted} Given a $V \in U(3)$, an integer $n>0$ and a small enough $\varepsilon >0$, the $C^n(V)$ can be effectively emulated approximately to precision $\varepsilon$ by an ancilla-assisted $2\, n$-qutrit circuit with $R$-count smaller than $64\, n \, (\log_3(1/\varepsilon)+O(\log(\log(1/\varepsilon))))$.
\end{corol} It is easy to see from Lemma \ref{lem:two:basis:vectors} that any two-level $n$-qutrit unitary $W$ is effectively classically equivalent to some $C^{n-1}(\tilde{W})$ where $\tilde{W}$ is a certain (two-level) single-qutrit derivative of $W$. This applies, in particular, to the two-level Householder reflections that constitute the factors in the explicit $U(2)$ factorization of $U(3^n)$ (\cite{WWJD}). An upper bound for the cost of ancilla-assisted emulation of an arbitrary $n$-qutrit unitary is summarized in the following \begin{thm}{(General unitary decomposition, ancilla-assisted.)} \label{thm:decomposition:ancilla:assited} Given a $U \in U(N)$ in general position and small enough $\varepsilon>0$, the $U$ can be effectively emulated up to a global phase to precision $\varepsilon$ by a metaplectic circuit with $(n-2)$ ancillas and $R$-count smaller than $32\,(N+4)(N-1)(n-1)(\log_3(1/\varepsilon)+ 2\, n + O(\log(\log(1/\varepsilon))))$. \end{thm} \begin{proof} We can still exactly and effectively decompose $U$ into a global phase and at most $(N+4)(N-1)/2$ two-level Householder reflections (see the proof of Thm \ref{thm:new:decomposition:reflection}). But now we treat each two-level reflection as a classical equivalent of a $C^{n-1}(V)$ where $V$ is a single-qutrit unitary. We emulate each reflection as such using Corollary \ref{corol:new:CnV:ancilla:assisted}, and the cost bound for the overall decomposition follows. \end{proof} \begin{comment} \begin{proof} Revisiting the proof of Thm \ref{thm:new:decomposition:reflection} we recall that the method of \cite{WWJD} requires $(N+4)(N-1)/2$ two-level reflection operators, each allowing ancilla-assisted emulation to precision $\varepsilon/((N+4)(N-1)/2)$ with $R$-count smaller than $64\, (n-1) \, (\log_3(1/\varepsilon)+ 2\, n + O(\log(\log(1/\varepsilon))))$. \end{proof} \end{comment} This synthesis procedure is summarized as pseudocode in Algorithm \ref{alg:decomp:multi:qutrit:ancilla} below.
\begin{algorithm}[H] \caption{Ancilla-assisted decomposition of a general unitary.} \label{alg:decomp:multi:qutrit:ancilla} \algsetup{indent=2em} \begin{algorithmic}[1] \REQUIRE{$U \in U(3^n)$, $\varepsilon>0$} \STATE {$U = D \,\prod_{k=1}^{K} U_k$ as per \cite{WWJD}} \COMMENT{Diagonal $D$ and two-level $U_k$} \STATE{$\mbox{ret} \gets \mbox{decomposition}(D,\varepsilon)$ as per Corol. \ref{corol:new:CnV:ancilla:assisted}} \FOR{$k=1..K$} \STATE{$c \gets \mbox{decomposition}(U_k,\varepsilon)$ as per Corol. \ref{corol:new:CnV:ancilla:assisted}} \STATE{$\mbox{ret} \gets \mbox{ret} \, c$} \ENDFOR \RETURN {$ret$ } \end{algorithmic} \end{algorithm} \section{The overall synthesis algorithm flow.} Assuming ancillary qutrits are readily available, a decision point on choosing between the ancilla-free and ancilla-assisted decomposition strategies is defined by relative magnitudes of $(2+\sqrt{5})^n$ and $64 \, n \, \log_3(1/\varepsilon)$. Comparison of the upper bounds suggests that in practice the ancilla-free solution becomes prohibitively costly when $n > 7$. Otherwise the decision threshold in $\varepsilon$ is of the form $\varepsilon_n = \Omega(3^{-(2+\sqrt{5})^n/(64\,n)})$. The two strategies can be run in parallel on a classical computer with the best resulting circuit post-selected. This approach is shown schematically in Figure \ref{fig:new:parallel:flow}. \begin{figure}[bt] \includegraphics[width=3.5in]{flowChart.pdf} \caption{\label{fig:new:parallel:flow} Parallelizable control flow for the two flavors of the main algorithm.} \end{figure} \section{Simulation, theoretical lower bound and future work.} \label{sec:bound:future} The scaling of the cost of our metaplectic circuits is fully defined by the cost of approximating a two-level state. 
The $R$-count of a circuit performing an $\varepsilon$-approximation of the latter is in its turn defined by the denominator exponent $k$ of an approximating tri-level Eisenstein state $|\phi_k\rangle =(u \, |j\rangle + v \, |\ell\rangle + w \, |m\rangle)/\sqrt{-3}^k$. We currently have $k$ upper-bounded by $4 \, \log_3(1/\varepsilon) + O(\log(\log(1/\varepsilon)))$. Our numerical simulation over a large set of randomly generated two-level targets demonstrates that an approximation algorithm based solely on Lemma \ref{lem:core:two:level:state} yields $k$ extremely close to this upper bound in the overwhelming majority of cases. A certain volume argument suggests a uniform lower bound for $k$ in $5/2 \, \log_3(1/\varepsilon) + O(\log(\log(1/\varepsilon)))$. Indeed, for a given two-level target state $|\psi\rangle$ and its $\varepsilon$-approximation $|\phi_k\rangle$ the real vector $[Re(u),Im(u),Re(v),Im(v)]^T$ is found in a certain 4-dimensional meniscus of 4-volume $\Theta(\varepsilon^5\, 3^{2\,k})$. If we expect, uniformly, each of these menisci to contain $\Theta(\log(1/\varepsilon))$ such vectors, we need to have $\varepsilon^5\, 3^{2\,k}$ in $\Theta(\log(1/\varepsilon))$, and the above lower bound on $k$ follows. There is clearly a gap between our guaranteed cost leading term $4 \, \log_3(1/\varepsilon)$ and the lower bound's leading term $5/2 \, \log_3(1/\varepsilon)$, and we currently do not know whether (a) the lower bound is reachable at all using metaplectic circuits or, (b) if it is reachable, whether this can be done by a classically tractable algorithm. More theoretical (and possibly, simulation) work is needed to answer these questions. At stake here is a potential practical reduction of the metaplectic circuitry cost by $37.5\%$. Another important open question is whether there is a set of exact metaplectic circuits for $n$-qutrit axial reflections with an $R$-count that is sub-exponential (preferably, polynomial) in $n$.
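The recurrence behind Observation \ref{obs:cost:axial:reflection} and its tradeoff against the linear-in-$n$ ancilla-assisted bound are easy to tabulate. A Python sketch (the comparison uses only the leading terms of the two bounds, so it is indicative rather than exact):

```python
from math import sqrt, log

def rc(n):
    # Exact R-count of an n-qutrit axial reflection:
    # rc(1) = 1, rc(2) = 4, rc(n) = 4*rc(n-1) + rc(n-2).
    a, b = 1, 4
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 4 * b + a
    return b

def rc_closed(n):
    # Closed form ((2+sqrt(5))^n - (2-sqrt(5))^n) / (2*sqrt(5)).
    return round(((2 + sqrt(5)) ** n - (2 - sqrt(5)) ** n) / (2 * sqrt(5)))

def ancilla_assisted_leading(n, eps):
    # Leading term of the per-reflection ancilla-assisted cost, 64*n*log_3(1/eps).
    return 64 * n * log(1 / eps, 3)
```

The sequence runs $1, 4, 17, 72, 305, \ldots$, so already at moderate $n$ the exact reflection cost dwarfs the ancilla-assisted leading term for practical precisions.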
\section{Conclusion} We have addressed the problem of performing efficient quantum computations in a framework where quantum information is represented in multi-qutrit encoding by ensembles of certain weakly-integral anyons and the native quantum gates are represented by braids with a targeted use of projective measurement. We have developed two flavors of a classically feasible algorithm for the synthesis of efficient metaplectic circuits that approximate arbitrary $n$-qutrit unitaries to a desired precision $\varepsilon$. The first flavor of the algorithm produces circuits that are ancilla-free and asymptotically optimal in $\varepsilon$ (but may have additive entanglement overhead that is exponential in $n$). The second flavor produces circuits requiring roughly $n$ clean ancillas and has a depth overhead factor of approximately $n$, but may nevertheless be more efficient in practice when $n$ is large. The combined algorithm enables us to compile logical multi-qutrit circuits with scalability properties comparable to those of the recent crop of efficient logical circuits over multi-qubit bases such as Clifford+T, Clifford+V or Fibonacci. In summary, we have demonstrated that circuit synthesis for a prospective ternary topological quantum computer based on weakly-integral anyons can be done effectively and efficiently. This implicitly validates such a prospective computer for quantum algorithm development. Although we have achieved asymptotic optimality of the resulting circuits, there is some potential slack left in the practical bounds of leading coefficients for the circuit depths, as explained in Section \ref{sec:bound:future}. Investigating this presumed slack is one of our future research topics. \acknowledgements The authors wish to thank Martin Roetteler for useful discussions.
\section{Introduction} Consider a quantum field theory in the presence of quenched disorder, that is, spatially random couplings. As with simpler spatially uniform couplings, the effects of quenched disorder can be relevant or irrelevant \cite{harris, sachdev}. When relevant, the disorder can drive the quantum theory into various possible nontrivial low energy phases. Of interest to us in this paper will be disordered fixed points. At a disordered fixed point, the low energy physics exhibits an emergent scale invariance. Such fixed points have a different structure to, for example, relativistic conformal field theories because the spatial dependence of the couplings at the fixed point means that momentum is not conserved at long wavelengths. It is not easy to find controlled instances of disordered fixed points where the disordered coupling is stabilized at a finite value \cite{sachdev}. Nonetheless, such critical theories without momentum conservation are very interesting candidates to understand the universal behavior of bad metals \cite{Hartnoll:2014lpa}. It is therefore of interest to have concrete examples at hand. Evidence for a disordered fixed point was recently found in a holographic system \cite{Hartnoll:2014cua}. The disordered fixed point itself is dual to a highly inhomogeneous extremal black hole horizon. In \cite{Hartnoll:2014cua}, a CFT described by a gravity dual was perturbed by marginally relevant disorder. It was shown that the {\it disorder averaged} geometry in the far IR regime exhibited an emergent scaling invariance \begin{equation}\label{eq:IR} \overline{ds^2} \; \stackrel{r\to\infty}{=} \; L^2_\text{IR} \left(- \frac{dt^2}{r^{2z}} + \frac{dr^2 + d\vec x^{\, 2}}{r^2} \right) \,. \end{equation} Here $\overline{ds^2}$ is the disorder averaged metric. This result was obtained both by resummation of perturbation theory and by full-blown numerics. 
The averaged metric was therefore seen to be characterized by a dynamical critical exponent $z$ that determines the relative scaling of space and time \cite{Kachru:2008yh}. However, it was not completely clear what physical quantities would be determined by this disorder averaged metric. In this paper we will revisit the system considered in \cite{Hartnoll:2014cua}, but now placed at a nonzero temperature. Our main result, that we again obtain both analytically and numerically, is that the entropy density of the system scales with temperature as \begin{equation}\label{eq:avS} s \sim T^{(d-1)/z} \,, \end{equation} with $d$ the number of spacetime dimensions of the quantum field theory and with the same $z$ appearing as in the disorder averaged zero temperature IR metric (\ref{eq:IR}). Namely, in an expansion in small disorder strength $ {\bar V}$, \begin{equation} z = 1 + {\textstyle{1\over2}} \pi^{d/2-1} \Gamma\left({\textstyle \frac{d}{2} } \right) {\bar V}^2 + {\mathcal{O}} \left({\bar V}^4 \right) \,. \end{equation} Thus, we have shown that the averaged metric indeed accurately captures the scaling properties of a disordered fixed point. In fact, we will see that the temperature scaling of the entropy (\ref{eq:avS}) is equal to the temperature scaling of the entropy of the averaged metric, although the coefficients need not agree. The main technical achievements in this paper are the perturbative and numerical construction of the $T>0$ disordered black hole spacetimes. \S\ref{sec:setup} describes the general setup. \S\ref{sec:pert} and \S\ref{sec:therm} obtain the solution perturbatively in disorder and describe a resummation of logarithms. \S\ref{sec:num} constructs the solutions numerically. In \S\ref{sec:diss} we discuss open questions. \section{Setup} \label{sec:setup} In this section we review the holographic description of a CFT perturbed by marginal disorder \cite{Hartnoll:2014cua}. 
The starting point is a real scalar coupled to gravity in $d+1$ dimensions.\footnote{Throughout our discussion we will let $d$ denote the number of boundary spacetime dimensions. We will always work in signature $(-++\dotsb+)$. For index conventions, capital Latin indices will denote all bulk directions, early lowercase Latin letters ($a,b$, etc.) will denote boundary spacetime directions, while middle lowercase Latin letters ($i,j$, etc.) will denote the boundary spatial directions.} The action is: \begin{align} S ={}& \frac{1}{2 \kappa_N^2} \int {\rm d}^{d+1} x \, \sqrt{-g} \left[ R - \Lambda - 2 \nabla_A \Phi \nabla^A \Phi - 4 V(\Phi) \right]\, . \label{eq:action} \end{align} Here $\kappa_N^2 = 8 \pi G_N$ and $\Lambda = - \frac{d(d-1)}{L^2}$ is the usual AdS$_{d+1}$ cosmological constant. The resulting equations of motion are: \begin{align} 0 ={}& \Box \Phi - V'(\Phi)\, , & R_{AB} ={}& 2 \nabla_A \Phi \nabla_B \Phi + \frac{1}{d-1} g_{AB} \left[ 4V(\Phi) + \Lambda \right]\, . \label{eq:einstein} \end{align} For the scalar potential, we take a negative mass squared: \begin{align} V(\Phi) ={}& - \frac{\mu}{2 L^2} \Phi^2\, . \label{eq:vphi} \end{align} The holographic dictionary (e.g. \cite{Hartnoll:2009sz}) tells us that $\Phi$ is dual to an operator ${\cal O}$ with dimension: \begin{align} \Delta ={}& \frac{d}{2} + \sqrt{ \left(\frac{d}{2}\right)^2 - \mu}\, . \label{eq:dimension} \end{align} More explicitly, this is seen by considering the asymptotic behavior of the scalar near the AdS$_{d+1}$ boundary. In the Poincar\'e patch, where the line element approaches: \begin{align} {\rm d} s^2 ={}& \frac{L^2}{r^2} \left( \eta_{ab} {\rm d} x^a {\rm d} x^b + {\rm d} r^2 + \dotsb \right)\, , \label{eq:fgcoordinates} \end{align} as $r \to 0$, the scalar has the following form near the boundary: \begin{align} \Phi(r \to 0) ={}& r^{d-\Delta} \Phi_1(x^a) + r^{\Delta} \Phi_2(x^a) + \dotsb\, . 
\label{eq:bdyscalar} \end{align} The correspondence then tells us that $\Phi_1$ is identified with the source for ${\cal O}$ while $\Phi_2$ encodes the response \cite{Hartnoll:2009sz}. Our interest in this paper is to consider the effect of a disordered source for ${\cal O}$ at finite temperature $T$. To be explicit, we will work with a short ranged, quenched, Gaussian disorder ensemble, where the ensemble of sources is determined by: \begin{align} \Bar{\Phi_1(x^i)} ={}& 0\, , & \Bar{\Phi_1(x^i) \Phi_1(y^i)} ={}& \bar V^2 \delta^{(d-1)}(x^i - y^i)\, . \label{eq:disorder-dist} \end{align} All other moments of the distribution are then fixed by Wick contraction. Note that, as befits quenched disorder, the random sources only depend on the boundary spatial directions. Our analytic discussion later will involve a resummation of perturbation theory in $\bar V$, whereas the numerics will be exact in $\bar V$. We will focus on the case of `marginal' disorder; that is, we will take the distribution to saturate the Harris criterion, which determines when short-range disorder affects critical phenomena \cite{harris, sachdev}. A simple heuristic way to see this result is to note that since $\Phi$ is dual to an operator of dimension $\Delta$, dimensional analysis tells us that $[\Phi_1] = d - \Delta$ and therefore \eqref{eq:disorder-dist} suggests we assign $\bar V$ a dimension of: \begin{align} 2 [\Phi_1] = 2 [\bar V] + d-1 \qquad \Rightarrow \qquad [\bar V] ={}& \frac{d+1}{2} - \Delta \, . \label{eq:vbar-dim} \end{align} We expect then that the disorder is relevant if $\Delta < \frac{d+1}{2}$, irrelevant for $\Delta > \frac{d+1}{2}$ and marginal for $\Delta = \frac{d+1}{2}$. Requiring $[\bar V] = 0$ fixes the value of $\mu$ in (\ref{eq:dimension}) to be: \begin{align} \mu ={}& \frac{d^2-1}{4}\, . 
\label{eq:harris} \end{align} To realize the disorder, we will use a spectral representation \cite{Shinozuka1991}, writing the source as: \begin{align} \Phi_1(x^i) ={}& \bar V \sum_{\{n_i\}=1}^{N-1} C_{\{n_i\}} \prod_{i=1}^{d-1} \cos (k_{i,n_i} x^i + \gamma_{i,n_i})\, . \label{eq:bdy-source} \end{align} Here the $\gamma_{i,n_i}$ are random phases uniformly distributed over $(0,2\pi)$ while the specifics of the distribution are determined by the constants $C_{\{n_i\}}$ and the selection of $k_{i,n_i}$. To strictly capture the disorder in the thermodynamic limit we must take $N \to \infty$. The disorder average of a quantity $f$ is then given by: \begin{align} \Bar{f} ={}& \lim_{N \to \infty} \int \left[\prod_{i=1}^{d-1}\prod_{n_i=1}^{N-1} \frac{{\rm d} \gamma_{i,n_i}}{2\pi} \right]\, f\, . \end{align} We will consider the simplest case of a short range, Gaussian and isotropic disorder distribution, which corresponds to: \begin{align} C_{\{n_i\}} ={}& C = \left(2 \sqrt{\Delta k}\right)^{d-1}\, , & k_{i,n_i} ={}& n_i \Delta k\, , & \Delta k ={}& \frac{k_0}{N} \, . \label{eq:gauss-dist} \end{align} The wavevectors of the modes making up the disordered source (\ref{eq:bdy-source}) therefore range from $k_0/N$ to $k_0$. These are the IR and UV cutoffs on the disorder distribution, respectively. In principle we could take the spacings $\Delta k$ to depend on the direction of $k$, but for simplicity we take an isotropic distribution. Since we will be working at finite temperature, it is important to keep the various scales in mind. It is useful to consider the two dimensionless parameters: $\kappa_0 = k_0/T$ and $\kappa_{\text{IR}} = \kappa_0/N$. The spectral representation requires we take $N \to \infty$ and physically we want $k_0 \gg T$, but the order of limits is important. 
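Two elementary properties of this setup are easy to check mechanically: that $\mu = (d^2-1)/4$ in the dictionary formula indeed gives $\Delta = (d+1)/2$, i.e. $[\bar V]=0$, and that the random-phase averages underlying the Gaussian ensemble come out right. A minimal numerical sketch (the values of $k$, $x$, $y$ below are purely illustrative):

```python
import math

def Delta(d, mu):
    """Dimension of the operator dual to a scalar with V = -mu Phi^2/(2 L^2)."""
    return d / 2.0 + math.sqrt((d / 2.0)**2 - mu)

# mu = (d^2 - 1)/4 saturates the Harris criterion, Delta = (d+1)/2, in any d:
for d in (2, 3, 4, 5):
    assert abs(Delta(d, (d**2 - 1) / 4.0) - (d + 1) / 2.0) < 1e-12

def phase_average(fn, n=4096):
    """Average fn(gamma) over gamma uniform on (0, 2*pi), midpoint rule."""
    return sum(fn(2.0 * math.pi * (j + 0.5) / n) for j in range(n)) / n

k, x, y = 1.7, 0.3, -1.1   # illustrative wavenumber and spatial points

# a single random-phase mode averages to zero ...
assert abs(phase_average(lambda g: math.cos(k * x + g))) < 1e-12
# ... while its two-point function is cos(k(x-y))/2; summing such terms over
# the mode grid is what builds up the delta-correlated ensemble above
corr = phase_average(lambda g: math.cos(k * x + g) * math.cos(k * y + g))
assert abs(corr - 0.5 * math.cos(k * (x - y))) < 1e-12
```

The midpoint rule is spectrally accurate here because the integrands are smooth and periodic in the phase.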
Since our aim is to describe a disordered system at small but finite temperature, we should be taking the $N \to \infty$ limit first, and so in what follows we will work with the following hierarchy: \begin{align} \kappa_\text{IR} \ll 1 \ll \kappa_0\, , \qquad \text{i.e.} \qquad \frac{k_0}{N} \ll T \ll k_0 \,. \label{eq:hierarchy} \end{align} \section{Perturbative geometry} \label{sec:pert} In this section we perturbatively construct the spacetime deformed by the disordered boundary source (\ref{eq:gauss-dist}). This involves solving the bulk scalar field wave equation subject to the disordered boundary condition, computing the energy momentum tensor of this scalar field, and then computing the backreaction on the metric. Our analytic discussion will largely focus on the lowest order thermodynamic corrections. These will be logarithmic in temperature, suggesting a natural resummation. We will show later in section \ref{sec:config} that to obtain the entropy as a function of temperature to this lowest order, it is sufficient to obtain the metric that follows from the backreaction produced by the \emph{disorder averaged} scalar stress tensor. This amounts to finding the leading disorder averaged correction to the metric, which we will now do. In the appendix we specialize to the $d=2$ dimensional case and determine the correction to the geometry for generic scalar configurations, without averaging. It will be noted that despite the expressions being rather complicated, no essentially new physics is found from the full, configuration dependent expressions. \subsection{Geometry at ${\cal O}(\bar V^0)$} We will work throughout in the Poincar\'e patch at finite temperature. Therefore in the limit $\bar V \to 0$, the line element reduces to (from here on we set $L=1$): \begin{align} {\rm d} s^2 ={}& \frac{1}{r^2} \left[ - f(r) {\rm d} t^2 + \frac{{\rm d} r^2}{f(r)} + \sum_{i=1}^{d-1} ({\rm d} x^i)^2 \right]\, . 
\label{eq:back-geo} \end{align} Here $f(r)$ is the emblackening factor $f(r) = 1 - (\frac{r}{r_+})^d$, where $r_+$ is the horizon radius. In terms of the temperature $T$, we have $T = \frac{d}{4 \pi r_+}$. The entropy density of the thermal state in the dual field theory is then given by the familiar Bekenstein-Hawking entropy: \begin{align} s ={}& \frac{1}{4 G_N} \frac{1}{V} \int {\rm d}^{d-1} x^i \sqrt{\gamma} = \frac{1}{4 G_N} \frac{1}{r_+^{d-1}} = \frac{1}{4 G_N} (4\pi)^{d-1} \left( \frac{T}{d} \right)^{d-1} \sim T^{d-1}\, . \end{align} Here $\gamma$ is the metric induced on the horizon from \eqref{eq:back-geo}. This scaling of the entropy with temperature is an important result to keep in mind as our primary objective in this work is to determine the modification of this scaling relation due to the disorder. \subsection{Scalar solutions at ${\cal O}(\bar V)$} We now turn on the disorder with strength $\bar V$. That is, we introduce bulk scalars whose near-boundary behavior, modulo a factor of $r^{d-\Delta}$, gives the boundary source \eqref{eq:bdy-source}. This scalar solution is determined by the wave equation in the background \eqref{eq:back-geo}: \begin{align} 0 ={}& r^{d+1} \partial_r \left( r^{-(d-1)} f \partial_r \Phi^{(1)} \right) + r^2 \partial_i^2 \Phi^{(1)} + \mu \Phi^{(1)}\, . \end{align} Since we are using a spectral representation of the source on the boundary, we decompose our bulk scalar into harmonics as well, \begin{align} \Phi^{(1)}(r,x^i) ={}& C \bar V \sum_{\{n_i\}} \phi_{k}(r) \prod_i \cos (k_{i,n_i} x^i + \gamma_{i,n_i})\, , \label{eq:phi-spec} \end{align} where the $\phi_{k}$ (with $k = |\vec{k}_{n_i}|$) now solve the ODE: \begin{align} 0 ={}& r^{d+1} \partial_r \left( r^{-(d-1)} f \partial_r \phi_{k} \right) - (k^2 r^2 - \mu) \phi_{k}\, . 
\label{eq:phik-ODE} \end{align} The holographic prescription is to find the linear combination of solutions to this differential equation which behave as in \eqref{eq:bdyscalar} near the boundary and are regular at the horizon $r_+$. That is, due to the various constants that have been factored out, we are to pick the solutions of \eqref{eq:phik-ODE} that are regular at the horizon and behave at small $r$ as $\phi_k(r \to 0) = r^{d-\Delta} + \dotsb$. This differential equation does not have a closed form solution for $d>2$, as the emblackening factor introduces $d$ singular points.\footnote{In the appendix we give the closed form solution for $d=2$.} Fortunately, for reasons that will become clear below, we only need the large $k$ behavior of the scalars. These large $k$ modes will be responsible for the leading IR singular behavior after disorder averaging. In the large $k$ regime we can employ a WKB approximation to find (letting $\kappa = k r_+$ and $\rho = r/r_+$): \begin{align} \phi_\kappa(\rho) ={}& \frac{\rho^{\frac{d-1}{2}}}{f^{1/4}(\rho)} \exp \left[ - \kappa \rho \, {}_2 F_1\left(\tfrac{1}{2}, \tfrac{1}{d}, 1 + \tfrac{1}{d}, \rho^d \right) \right]\, . \label{eq:phiwkb} \end{align} The WKB limit here is $\kappa = k r_+ \to \infty$, or $k/T \to \infty$. These modes are largely insensitive to the presence of the horizon, decaying well before reaching the horizon, whereas the small $\kappa \ll 1$ modes will only weakly vary between the boundary and the horizon. \subsection{Geometry at ${\cal O}(\bar V^2)$} Once the scalars are turned on in the bulk, they source the Einstein equations at order $\bar V^2$. As mentioned above, to start with we will find the geometry induced by the averaged stress tensor. To leading order this is the disorder-averaged finite temperature metric. 
To that end, we calculate the averaged trace-reversed stress tensor: \begin{align} \kappa_N^2 \Theta_{AB} ={}& 2\Bar{\partial_A \Phi^{(1)} \partial_B \Phi^{(1)}} + \frac{4}{d-1} g_{AB} \Bar{V(\Phi)} = 2\Bar{\partial_A \Phi^{(1)} \partial_B \Phi^{(1)}} - \frac{2\mu}{d-1} g_{AB} \Bar{(\Phi^{(1)})^2}\, . \end{align} The needed averages are simple to calculate using the spectral decomposition \eqref{eq:phi-spec}, and the resulting sources are: \begin{eqnarray} \kappa_N^2 f^{-1} \Theta_{tt} & = & \frac{\mu \bar V^2 C^2}{2^{d-2} (d-1)} \sum_{\{n_i\}} \frac{\phi_k^2}{r^2}\, , \\ \kappa_N^2 f \Theta_{rr} & = & \frac{\bar V^2 C^2}{2^{d-2}} \sum_{\{n_i\}} \left[ f (\phi_k')^2 - \frac{\mu}{d-1} \frac{\phi_k^2}{r^2} \right]\, , \label{eq:theta-1} \\ \kappa_N^2 \Theta_{ii} & = & \frac{\bar V^2 C^2}{2^{d-2}} \sum_{\{n_i\}} \left(r^2 k_{i,n_i}^2 - \frac{\mu}{d-1} \right) \frac{\phi_k^2}{r^2}\, . \label{eq:theta-2} \end{eqnarray} Since we have taken an isotropic distribution, the scalar sources in the spatial direction $\Theta_{ii}$ are equal for all $i$. This will simplify the resulting geometry considerably. With these sources, we search for a perturbative solution for $\bar V \ll 1$ of the form: \begin{align} {\rm d} s^2 ={}& \frac{1}{r^2} \left[ - f(r) \left(1 + \bar V^2 A(r) \right){\rm d} t^2 + \frac{{\rm d} r^2}{f(r)} + \left(1 + \bar V^2 B(r) \right)\sum_{i=1}^{d-1} ({\rm d} x^i)^2 \right]\, . \label{eq:metric-ansatz} \end{align} Plugging this ansatz into Einstein's equations then yields the following system of coupled ODEs: \begin{align} 0 ={}& f^2 A'' + \frac{df(f-3)}{2r} A' + \frac{(d-1)f(rf'-2f)}{2r} B' - 2 \kappa_N^2 \Theta_{tt}\,, \label{eq:efe-tt}\\ 0 ={}& A'' - \left(\frac{1}{r} - \frac{3 f'}{2f}\right) A' + (d-1) \left[B'' - \left( \frac{1}{r} - \frac{f'}{2f} \right)B' \right] + 2\kappa_N^2 \Theta_{rr}\, , \label{eq:efe-rr} \\ 0 ={}& f B'' - \frac{d+ f(d-2)}{r} B' - \frac{f}{r} A' + 2 \kappa_N^2 \Theta_{ii}\, . 
\label{eq:efe-ii} \end{align} These equations can be re-expressed as a first order constraint equation plus two decoupled second order equations. The second order equations are: \begin{align} 0 ={}& \frac{2 \left[d+(d-2)f \right]^2}{r^{1-d} f^{1/2}} \partial_r \left[ \frac{r^{1-d} f^{3/2}}{d+(d-2)f} \partial_r A \right] + j_A(r)\, , \label{eq:aeq} \\ 0 ={}& 2 r^{d-1} f^{1/2} \partial_r \left[ \frac{f^{1/2}}{r^{d-1}} \partial_r B \right] + j_B(r)\, , \label{eq:beq} \end{align} where the scalar sources have been repackaged into: \begin{align} j_A(r) ={}& - 2 \kappa_N^2 \left\{ [d+ 3(d-2) f] f^{-1} \Theta_{tt} - [d-(d-2)f] \left( f \Theta_{rr} - \sum_i \Theta_{ii} \right) \right\}\, ,\label{eq:jadef} \\ j_B(r) ={}& \frac{2 \kappa_N^2}{d-1} \left[ f^{-1} \Theta_{tt} + f \Theta_{rr} + \sum_i \Theta_{ii} \right] \, .\label{eq:jbdef} \end{align} The decoupled second order equations (\ref{eq:aeq}) and (\ref{eq:beq}) can be solved exactly. It is useful to work with the rescaled coordinate \begin{equation} \rho \equiv \frac{r}{r_+} \,. \end{equation} If we further rescale the sources $r_+^2 j_{A/B} \to j_{A/B}$, then we can write the solution as: \begin{align} A(\rho) ={}& \alpha_1 + \alpha_2 f^{-1/2} [(d-2) f - d] \nonumber \\ &\quad \quad + \frac{1}{d} \int_\rho^1 \frac{{\rm d} \tilde \rho}{\tilde \rho^{d-1}} \left[ \frac{d-(d-2)f(\tilde \rho)}{f^{1/2}(\tilde \rho)} - \frac{d-(d-2) f(\rho)}{f^{1/2}(\rho)} \right] \frac{ f^{1/2}(\tilde \rho) j_A(\tilde \rho)}{[d+(d-2)f(\tilde \rho)]^2} \, ,\label{eq:asol} \\ B(\rho) ={}& \beta_1 + \beta_2 r_+^d \left[ f^{1/2}(\rho) - 1 \right] + \frac{1}{d} \int_\rho^1 \frac{{\rm d} \tilde \rho}{\tilde \rho^{d-1}} \left[ 1 - \frac{f^{1/2}(\rho)}{f^{1/2}(\tilde \rho)} \right] j_B(\tilde \rho)\, . \label{eq:bsol} \end{align} Thus we have reduced the scalar backreaction to two integrals. There are a number of integration constants in \eqref{eq:asol} and \eqref{eq:bsol}. 
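The structure of (\ref{eq:bsol}) can be sanity-checked numerically: the terms multiplying $\beta_1$ and $\beta_2$ must solve the homogeneous equation, which by (\ref{eq:beq}) means the "flux" $f^{1/2}\rho^{1-d}\partial_\rho B$ is constant for both (zero for the constant, $-d/2$ for $f^{1/2}-1$). A quick finite-difference check, taking $d=3$ for illustration:

```python
import math

d = 3
f = lambda r: 1.0 - r**d        # emblackening factor in the rescaled coordinate

def flux(B, r, h=1e-6):
    """f^{1/2} rho^{1-d} B'(rho): constant for homogeneous solutions of (beq)."""
    Bp = (B(r + h) - B(r - h)) / (2.0 * h)   # central difference
    return math.sqrt(f(r)) * r**(1 - d) * Bp

# B = const has vanishing flux; B = f^{1/2} - 1 has flux identically -d/2
for r in (0.3, 0.5, 0.8):
    assert abs(flux(lambda x: 1.0, r)) < 1e-6
    assert abs(flux(lambda x: math.sqrt(f(x)) - 1.0, r) + d / 2.0) < 1e-4
```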
Indeed, the behavior of the metric at the horizon is entirely determined by these constants. Plugging these solutions into \eqref{eq:efe-rr} (or, equivalently, into the first order equation that can be derived from the three equations above) shows that $\beta_2$ is fixed by $\alpha_2$, $\beta_2 \propto \alpha_2$ (we only need to say they are proportional, as they will both be zero shortly). In showing that the remaining equation is satisfied, it is important to verify that the disorder averaged energy momentum tensor is conserved. The constants are fixed by the boundary conditions we impose on the metric, both at the conformal boundary and at the horizon. The physical requirements for the geometry are that it is asymptotically AdS$_{d+1}$ and that it is regular at the horizon. Regularity at the horizon is easily seen to require $\alpha_2=0$ (and hence $\beta_2=0$). At the conformal boundary, we require $A(0) = B(0)$. The actual value of $A(0)$ can be scaled away by redefining coordinates so we will impose the simple condition $A(0) = B(0) = 0$, that is: \begin{align} \alpha_1 ={}& - \frac{1}{d} \int_0^1 \frac{{\rm d} \rho}{\rho^{d-1}} \left[ \frac{d-(d-2)f(\rho)}{f^{1/2}(\rho)} - 2 \right] \frac{ f^{1/2}(\rho) j_A(\rho)}{[d+(d-2)f( \rho)]^2} \, , \label{eq:alpha-sol}\\ \beta_1 ={}& - \frac{1}{d} \int_0^1 \frac{{\rm d} \rho}{\rho^{d-1}} \left[ 1 - \frac{1}{f^{1/2}(\rho)} \right] j_B(\rho)\, . \label{eq:beta-sol} \end{align} Thus we obtain an explicit expression for the metric at order $\bar V^2$. \subsection{Large momentum backreaction} In this section, we will look at the backreaction induced by the large momenta (relative to the temperature) scalar modes, where the WKB solutions \eqref{eq:phiwkb} are valid. In particular, we are interested in the behavior of the metric at the horizon, as this is what determines the entropy. 
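Since the large momentum analysis leans on the WKB form (\ref{eq:phiwkb}), it is worth noting that its exponent factor $\rho \, {}_2F_1(\tfrac12, \tfrac1d; 1+\tfrac1d; \rho^d)$ is nothing but the expected WKB integral $\int_0^\rho {\rm d}\tilde\rho \, f(\tilde\rho)^{-1/2}$. A small numerical sketch of this identity, with the hypergeometric series written out by hand:

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Gauss hypergeometric series sum_n (a)_n (b)_n / (c)_n z^n / n! (|z| < 1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
    return total

def wkb_phase(d, rho):
    """rho * 2F1(1/2, 1/d; 1 + 1/d; rho^d), the factor in the WKB exponent."""
    return rho * hyp2f1(0.5, 1.0 / d, 1.0 + 1.0 / d, rho**d)

def tortoise(d, rho, n=100000):
    """Midpoint-rule evaluation of int_0^rho dt / sqrt(1 - t^d)."""
    h = rho / n
    return h * sum(1.0 / math.sqrt(1.0 - ((j + 0.5) * h)**d) for j in range(n))

# the hypergeometric factor is exactly the integral of 1/sqrt(f)
for d in (2, 3, 4):
    assert abs(wkb_phase(d, 0.5) - tortoise(d, 0.5)) < 1e-8
```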
To get the metric at the horizon, we must determine the integration constants $\alpha_1$ and $\beta_1$ (because the integrals in (\ref{eq:asol}) and (\ref{eq:bsol}) vanish at the horizon $\rho = 1$). We will see that these large momentum modes give the leading singular contribution to $\alpha_1$ and $\beta_1$. Since the average stress tensor is a decoupled sum of contributions from each wavevector, we write $j_{A/B} = \sum_{\vec{\kappa}} j_{A/B,\kappa}$. The large momentum contribution to $\alpha_1, \beta_1$ can be readily evaluated in the limit $\kappa \to \infty$, using the WKB solutions \eqref{eq:phiwkb}: \begin{align} \alpha_{1, \kappa} ={}& - \frac{1}{d} \int_0^1 \frac{{\rm d} \rho}{\rho^{d-1}} \left[ \frac{d-(d-2)f(\rho)}{f^{1/2}(\rho)} - 2 \right] \frac{ f^{1/2}(\rho) j_{A,\kappa}(\rho) }{ [d+(d-2)f( \rho)]^2} \nonumber \\ ={}& - \frac{\kappa^{1-d} \Delta \kappa \Gamma(d+1)}{2^d d(d-1)}\, + {\cal O}(\kappa^{1-2d}) \, , \\ \beta_{1,\kappa} ={}& - \frac{1}{d} \int_0^1 \frac{{\rm d} \rho}{\rho^{d-1}} \left[ 1 - \frac{1}{f^{1/2}(\rho)} \right] j_{B,\kappa}(\rho) = {\mathcal{O}} \left(\kappa^{-d} \right)\, . \label{eq:largekbeta} \end{align} The integrals are performed by noting that at large $\kappa$, the small $\rho$ (near boundary) region dominates (specifically, $\rho \sim 1/\kappa$). This means that the WKB solution (\ref{eq:phiwkb}) goes like $e^{- \kappa \rho}$, while the remaining terms in the integrand can be expanded about $\rho = 0$. Recall that $\kappa \equiv k r_+$, so that the WKB limit is $k/T \to \infty$. We can sum up the above large-$\kappa$ contributions to find: \begin{align} \sum_{\kappa \gg 1}^{\kappa_0} \alpha_{1,\kappa} & \simeq{} - \int_{\kappa \gg 1}^{\kappa_0} {\rm d}^{d-1} \kappa \, \frac{\kappa^{1-d} \Gamma(d+1)}{2^d d(d-1)} = - \int^{\kappa_0} {\rm d} \kappa\, \frac{\pi^{\frac{d}{2}-1} \Gamma\left( \tfrac{d}{2} \right)}{2 \kappa} \nonumber \\ &\simeq - \pi^{\frac{d}{2}-1} \Gamma\left( \tfrac{d}{2} \right) \log \kappa_0\, . 
\label{eq:log} \end{align} In the last step we have picked out the singular contribution due to the upper endpoint $\kappa_0 \gg 1$ of the integral. The large $\kappa$ contributions to $\beta_1$ in (\ref{eq:largekbeta}) are smaller by a power of $\kappa$ than the large $\kappa$ contributions to $\alpha_1$. There is no singular contribution in that case. Since $\alpha_1$ and $\beta_1$ will also generically receive non-zero contributions from all momenta, we write: \begin{align} \alpha_1 ={}& \eta_1 - \pi^{\frac{d}{2}-1} \Gamma \left( \tfrac{d}{2} \right) \log \kappa_0\, , & \beta_1 ={}& \eta_2\, . \end{align} If $\eta_1$, $\eta_2$ have no singular dependence on $r_+$ in the range $\kappa_0/N \ll 1 \ll \kappa_0$, then this information is all we need to determine the low temperature scaling of the entropy density. But this last statement is indeed true. It may be verified by numerically evaluating all the necessary integrals. Alternatively, we can note physically that the only place that such dependence could arise is from the IR cutoff $\kappa_0/N$; however the modes near the IR cutoff, in the limit $N \to \infty$, are essentially constant between the boundary and the horizon and so their contribution will be that of a $\kappa=0$ mode, which will not introduce any singular $r_+$ dependence. The logarithmic divergence in (\ref{eq:log}) has essentially the same origin as the zero temperature logarithm found in \cite{Hartnoll:2014cua}, as well as the logarithms arising in earlier works \cite{Adams:2011rj, Adams:2012yi}. \section{Thermodynamics} \label{sec:therm} We can easily obtain the thermodynamic properties of the averaged geometries constructed in the previous section. 
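Before doing so, we note that the coefficient of the logarithm in (\ref{eq:log}) can be checked independently: the passage from the $(d-1)$-dimensional momentum sum to the one-dimensional integral rests on the identity $\Omega_{d-2}\,\Gamma(d+1)/[2^d d(d-1)] = \tfrac{1}{2}\pi^{d/2-1}\Gamma(d/2)$, with $\Omega_n$ the area of the unit $n$-sphere (a rewriting of the Legendre duplication formula). A quick numerical verification:

```python
import math

def sphere_area(n):
    """Area of the unit n-sphere: Omega_n = 2 pi^{(n+1)/2} / Gamma((n+1)/2)."""
    return 2.0 * math.pi**((n + 1) / 2.0) / math.gamma((n + 1) / 2.0)

# Omega_{d-2} Gamma(d+1) / (2^d d (d-1)) == pi^{d/2 - 1} Gamma(d/2) / 2
for d in (2, 3, 4, 5, 6):
    lhs = sphere_area(d - 2) * math.gamma(d + 1) / (2**d * d * (d - 1))
    rhs = math.pi**(d / 2.0 - 1.0) * math.gamma(d / 2.0) / 2.0
    assert abs(lhs - rhs) < 1e-12 * rhs
```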
First we recall that the temperature of the horizon is determined by the surface gravity, $\hat \kappa$, which now receives contributions from $\alpha_1$: \begin{align} T ={}& \frac{\hat \kappa}{2\pi} = \frac{d}{4 \pi r_+} \left[ 1 + \frac{1}{2} \bar V^2 A(r_+) \right] + {\cal O}(\bar V^4) \\ ={}& \frac{d}{4 \pi r_+} \left[ 1 + \frac{1}{2} \eta_1 \bar V^2 - \frac{1}{2} \pi^{\frac{d}{2}-1} \Gamma \left( \tfrac{d}{2} \right) \bar V^2 \log (k_0 r_+) \right] + {\cal O}(\bar V^4)\, . \end{align} If, in the spirit of \cite{Hartnoll:2014cua}, we throw caution to the wind and exponentiate the logarithm, then to this order we can write: \begin{align} T \sim r_+^{- z}\, , \end{align} where \begin{align} z ={}& 1 + \frac{1}{2} \pi^{\frac{d}{2}-1} \Gamma \left( \tfrac{d}{2} \right) \bar V^2 + {\cal O}(\bar V^4)\, .\label{eq:zfinal} \end{align} This is precisely the dynamical critical exponent identified at $T =0$ in \cite{Hartnoll:2014cua}. The constant $\eta_1$ has gone into the prefactor in this scaling relation. Now that we know how the temperature scales with the horizon, we can determine the entropy density scaling: \begin{align} s ={}& \frac{1}{4 G_N} \frac{1}{V} \int\limits_{r=r_+} {\rm d}^{d-1} x \, \sqrt{\gamma} = \frac{1}{4 G_N} \frac{r_+^{- (d -1)}}{V} \int {\rm d}^{d-1} x \left[ 1 + \frac{d-1}{2} \bar V^2 B(r_+) + {\cal O}(\bar V^4) \right] \, . \end{align} Since $B(r_+)$ is simply an $r_+$-independent constant at low temperatures, we see that this entropy scales as: \begin{align} s \sim r_+^{-(d-1)} \sim T^{\frac{d-1}{z}}\, . \label{eq:s1} \end{align} This scaling relation is the first incarnation of our primary result. In particular, we see that the exponent $z$ is a true critical exponent and that the disorder has indeed affected thermodynamic properties. The result (\ref{eq:s1}) relates the entropy and temperature of the averaged metric. In the following subsection we show that this relation also holds for the true entropy as a function of temperature. 
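As a compact numerical summary, the exponent (\ref{eq:zfinal}) and the resulting entropy exponent $(d-1)/z$ are easy to tabulate (illustrative values of $\bar V$; the ${\cal O}(\bar V^4)$ terms are dropped):

```python
import math

def z(d, Vbar):
    """Dynamical critical exponent to O(Vbar^2)."""
    return 1.0 + 0.5 * math.pi**(d / 2.0 - 1.0) * math.gamma(d / 2.0) * Vbar**2

def entropy_exponent(d, Vbar):
    """The exponent in s ~ T^{(d-1)/z}."""
    return (d - 1) / z(d, Vbar)

# the O(Vbar^2) coefficient in z is 1/2 for d=2 and pi/4 for d=3
assert abs(z(2, 1.0) - 1.5) < 1e-12
assert abs(z(3, 1.0) - (1.0 + math.pi / 4.0)) < 1e-12

# since z > 1, disorder always lowers the entropy exponent below the clean d-1
for d in (2, 3, 4):
    assert entropy_exponent(d, 0.3) < d - 1
```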
\subsection{Configuration dependence} \label{sec:config} In this section, we discuss the entropy of a typical, configuration dependent metric. Without the enhanced symmetry of the averaged configuration to simplify matters, the line element will in general look like: \begin{align} {\rm d} s^2 ={}& \frac{1}{r^2} \left[- f(r) \left( 1 + \bar V^2 A(x^i, r) \right) {\rm d} t^2 + \frac{{\rm d} r^2}{f(r)} + \sum_{ij} \left(\delta_{ij} + \bar V^2 B_{ij}(x^i, r) \right) {\rm d} x^i {\rm d} x^j \right]\, . \end{align} We cannot solve analytically for the metric functions $A, B_{ij}$ in general. However, the entropy depends only on the induced metric on the horizon. To leading order: \begin{align} s ={}& \frac{1}{4 G_N} \frac{1}{V} \int\limits_{r=r_+} {\rm d}^{d-1} x \sqrt{\gamma} = \frac{1}{4 G_N} \frac{r_+^{-(d-1)}}{V} \int {\rm d}^{d-1} x\sqrt{ \det (\delta_{ij} + \bar V^2 B_{ij}(r_+))} \nonumber \\ ={}& \frac{1}{4 G_N} \frac{r_+^{-(d-1)}}{V} \int {\rm d}^{d-1} x \left[ 1 + \frac{1}{2} \bar V^2 \sum_i B_{ii}(r_+) \right] = \frac{1}{4 G_N} \frac{1}{ r_+^{d-1}} \left[ 1 + \frac{1}{2} \bar V^2 \sum_i \overline{B_{ii}}(r_+) \right]\, . \end{align} Here we have used $\det (1 + \epsilon A) = 1 + \epsilon {\rm tr} A + {\cal O}(\epsilon^2)$ and the fact that the metric components, being given by linear combinations of sines and cosines, are self averaging. This result tells us that, to the order at which we are working, the entropy density only depends on the averaged spatial metric, and we can use the result of the previous section to conclude that $s \sim r_+^{- (d-1)}$, just as before. Therefore to determine if the entropy of the averaged metric is distinct from the entropy of a typical configuration dependent metric all we need to do is find the surface gravity. 
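The determinant expansion invoked here is easy to verify on a toy matrix (the numbers below are purely illustrative):

```python
# check det(1 + eps*B) = 1 + eps*tr(B) + O(eps^2) on a fixed 3x3 example
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

B = [[0.3, -1.2, 0.7],
     [0.5,  0.9, -0.4],
     [-0.8, 0.1, 0.6]]
tr_B = B[0][0] + B[1][1] + B[2][2]

for eps in (1e-3, 1e-4, 1e-5):
    M = [[(1.0 if i == j else 0.0) + eps * B[i][j] for j in range(3)]
         for i in range(3)]
    # the error of the linearized determinant shrinks quadratically in eps
    assert abs(det3(M) - (1.0 + eps * tr_B)) < 10 * eps**2
```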
The surface gravity is easily worked out to lowest order in full generality: \begin{align} \hat \kappa^2 = & {} \frac{[f'(r_+)]^2}{4} \left[1 + \bar V^2 A(r_+,x^i) + {\cal O}(\bar V^4) \right] \nonumber \\ & \Rightarrow \hat \kappa = \frac{\lvert f'(r_+) \rvert}{2} \left[ 1 + \frac{1}{2} \bar V^2 A(r_+,x^i) + {\cal O}( \bar V^4) \right]\, . \end{align} It is a theorem (for metrics of the form we are considering) that the surface gravity must be constant along the horizon. Therefore we can replace $A(r_+,x^i)$ in the previous equation by its average: \begin{align} \hat \kappa ={}& \frac{\lvert f'(r_+) \rvert}{2} \left[ 1 + \frac{1}{2} \bar V^2 \overline{A(r_+)} + {\cal O}(\bar V^4) \right]\, . \end{align} More explicitly, averaging $\hat \kappa$ over the horizon is trivial since it is a constant, whereas averaging $A(r_+,x^i)$ over space is the same as averaging over the disorder ensemble. Using our results for the average metric we can deduce: \begin{align} \hat \kappa \sim{}& T \sim \frac{\lvert f'(r_+) \rvert}{2} \left[ 1 - \frac{1}{2} \pi^{\frac{d}{2} - 1} \Gamma\left( \tfrac{d}{2} \right) \bar V^2 \left(\log k_0 r_+ + \text{const.} \right) \right] \sim r_+^{-z} \, , \end{align} where $z$ is again the exponent identified above and in \cite{Hartnoll:2014cua}. The temperature and entropy scalings, $r_+ \sim T^{-1/z}$ and $s \sim r_+^{- (d-1)}$, combine to give: \begin{align}\label{eq:sfinal} s \sim r_+^{-(d-1)} \sim T^{(d-1)/z}\, , \end{align} now as a result for the actual entropy as a function of temperature. In the remainder of the paper we will verify this result with full blown numerics, beyond the perturbative regime. \section{Numerics} \label{sec:num} In order to construct the fully backreacted black hole solution, at finite disorder strength $\bar V$, we use the DeTurck trick \cite{Headrick:2009pv,Figueras:2011va}. The method works in a general number of dimensions, and we shall use it for $d=2,\,3$ (boundary spacetime dimensions). 
We will detail the $d=3$ construction, since it is more involved and has not yet appeared in the literature.\footnote{The DeTurck method was used to construct disordered spacetimes in \cite{Hartnoll:2014cua} and \cite{Donos:2014yya}. Other numerical studies of strong disorder in holography have been in the probe limit \cite{Arean:2013mta, Zeng:2013yoa, Arean:2014oaa}.} The black hole solution we search for is static, which means we can introduce an adapted coordinate system in which $\partial_t$ is a Killing direction. Furthermore, the line element should be invariant under the discrete transformation $t\to-t$. The most general line element and scalar field compatible with these symmetries can be written \begin{multline} \mathrm{d}s^2 = \frac{1}{y^2}\Bigg[-(1-y^3)A\,y_+^2\,\mathrm{d}t^2+\frac{B}{1-y^3}\mathrm{d}y^2+\\ y_+^2 S_1\Big(\mathrm{d}x_1+F_1 \mathrm{d}y+F_2 \mathrm{d}x_2\Big)^2+y_+^2\,S_2\Big(\mathrm{d}x_2+F_3 \mathrm{d}y\Big)^2\Bigg]\,, \label{eq:metric3D} \end{multline} \begin{equation} \Phi =\frac{y}{y_+}\,\widetilde{\Phi}\,, \label{eq:phie} \end{equation} where $A,\,B,\,S_1,\,S_2,\,F_1,\,F_2,\,F_3$ and $\widetilde{\Phi}$ comprise a total of $8$ functions that depend on $y$, $x_1$ and $x_2$. The first step in using the DeTurck method is to choose a reference metric $\bar{g}$. This reference metric should satisfy our desired boundary conditions (\emph{i.e.} contain a regular horizon and have the correct asymptotics). For the reference metric, we choose the planar Schwarzschild black hole, which can be obtained from the line element (\ref{eq:metric3D}) by setting $A=B=S_1=S_2=1$ and $F_1=F_2=F_3=0$\,. The reference metric does not depend on the boundary spatial coordinates $x_1$ and $x_2$, and so is automatically periodic with respect to these. Finally, $y_+$ is a parameter which controls the black hole temperature: $4\pi T= 3\,y_+$. 
The second step in the DeTurck method consists of solving the following set of equations \begin{align} G_{AB}\equiv R_{AB} -\nabla_{(A}\xi_{B)}-& 2 \nabla_A \Phi \nabla_B \Phi -\frac{1}{d-1} g_{AB} \left[ 4V(\Phi)+\Lambda\right]=0\, . \nonumber \\ \Box \Phi - V'(\Phi) = 0\, , \label{eq:dennis} \end{align} where $\xi^M = g^{PQ}[\Gamma^M_{PQ}(g)-\bar{\Gamma}^M_{PQ}(\bar{g})]$ and $\bar{\Gamma}(\bar{g})$ is the Levi-Civita connection associated with the reference metric $\bar{g}$. Furthermore, $V(\Phi)$ is given by (\ref{eq:vphi}) with a mass saturating the Harris criterion (\ref{eq:harris}). The advantage of this method is that the above Eqs.~(\ref{eq:dennis}) form a set of elliptic PDEs \cite{Headrick:2009pv} for the metric ansatz (\ref{eq:metric3D}), unlike the Eqs.~(\ref{eq:einstein}). For solutions of Eq.~(\ref{eq:dennis}) to also be solutions of Eq.~(\ref{eq:einstein}), we must have $\xi^M=0$. In some cases (such as vacuum Einstein), there is a proof that all solutions to Eq.~(\ref{eq:dennis}) also have $\xi^M=0$ \cite{Figueras:2011va}. In our case, we lack such a proof, so we must verify that $\xi^M=0$ after solving the equations. The local uniqueness property of elliptic equations guarantees that solutions with $\xi^M\neq0$ cannot be arbitrarily close to those with $\xi^M=0$. In order to complete the system of partial differential equations, suitable boundary conditions must be imposed. In addition, these must be consistent with zero DeTurck vector $\xi^M$. At the boundary, located at $y=0$, we demand $A(0,x_1,x_2)=B(0,x_1,x_2)=S_1(0,x_1,x_2)=S_2(0,x_1,x_2)=1$ and $F_1(0,x_1,x_2)=F_2(0,x_1,x_2)=F_3(0,x_1,x_2)=0$. Furthermore, we demand $\tilde{\Phi}(0,x_1,x_2)=\Phi_1(x_1,x_2)$, with the scalar source $\Phi_1(x_1,x_2)$ defined in Eq.~(\ref{eq:bdy-source}). The reader might be surprised with the extra factor in Eq.~(\ref{eq:phie}) dependent on $y_+$. 
However, we note that, asymptotically, the relation between the Fefferman-Graham coordinate $r$ defined in Eq.~(\ref{eq:fgcoordinates}) and $y$ reads $y = y_+\, r+\mathcal{O}(r^2)$. At the horizon, $y=1$, the Einstein-DeTurck equations demand $A(1,x_1,x_2)=B(1,x_1,x_2)$, which is equivalent to having a well-defined bifurcating Killing horizon, given our choice of reference metric. The boundary conditions at the horizon for the remaining variables follow from expanding the equations in a power series off the horizon; they all turn out to be of the Robin type. In the $x_1$ and $x_2$ directions, we demand periodic boundary conditions. We are now in a position to solve the PDE system (\ref{eq:dennis}) subject to the above-mentioned boundary conditions. To solve the equations, we use a standard damped Newton-Raphson iteration algorithm based on pseudo-spectral collocation on a Chebyshev grid in $y$ and Fourier grids in $x_1$ and $x_2$. In $d=2$, there are additional subtleties associated with the boundary behaviour of the scalar field $\Phi$, which introduces non-analytic behaviour in the metric and scalar field functions. To deal with these, we patch a finite-difference grid onto the Chebyshev collocation grid parametrising the holographic radial direction $y$ \cite{Hartnoll:2014cua}. The space of solutions is 4-dimensional, depending on $\bar{V}$, $k_0$, $N$ and $T$. However, since our underlying UV microscopic theory is conformally invariant, we only need to worry about dimensionless ratios of these quantities. In order to access the true IR physics, we need to preserve the hierarchy presented in Eq.~(\ref{eq:hierarchy}). That is, we must make sure the temperature range we probe lies between the short and long distance cutoffs on the disorder distribution. In the Schwarzschild background with $\bar V = 0$, the temperature is given by $T = d/(4 \pi r_+)$, and it is $r_+$ rather than $1/T$ that sets the scale that should be compared to the cutoffs.
This is helpful because it pushes the IR cutoff down to lower temperatures (by a factor of $d/(4 \pi)$) than the rough window (\ref{eq:hierarchy}) would suggest.\footnote{Having the IR cutoff on the disorder be behind the horizon also resolves the following technical issue that arises at $T=0$. While the disorder is marginally relevant, the homogeneous mode of the scalar is strongly relevant. In the energy range (\ref{eq:hierarchy}) the growth of the homogeneous mode is subdominant to the disorder physics due to the presence of many higher harmonics. However, below the IR cutoff on the disorder, the homogeneous mode will eventually dominate and drive a flow away from AdS. This is an artifact of needing to work with an IR cutoff, and can complicate zero temperature numerics, but not the numerics herein.} In all computations detailed in this section we measure all quantities in units of $k_0$ (which effectively sets $k_0=1$), and we take either $N = 50$ in $d=2$ or $N=5$ in $d=3$. $T$ is then allowed to vary freely in the required range (\ref{eq:hierarchy}). Note that for these values of $N$, the Fourier grids must be very dense, having at least $500$ points in the periodic direction in $d=2$ and $50$ in $d=3$.\footnote{The choice of the number of points in each periodic direction is such that we should be able to resolve up to the fifth harmonic of the highest wave number appearing in our scalar field potential (\ref{eq:bdy-source}). Since each harmonic descendant decays exponentially \cite{Horowitz:2012ky} in multiples of the relevant wavenumber, we expect our resolution to capture all the relevant physics.} Typical profiles for the boundary source in $d=2$ and $d=3$ are depicted in Fig.~\ref{figs:0}.
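A boundary source of this kind can be generated as a sum of harmonics with random phases. The sketch below is in one boundary dimension; the mode spacing ($k_n = n\,k_0/N$, so that $k_0$ is the short-distance cutoff and $k_0/N$ the long-distance one) and the $\sqrt{\Delta k}$ normalization are assumed conventions, not necessarily the exact form of Eq.~(\ref{eq:bdy-source}):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k0, Vbar = 50, 1.0, 0.1           # number of modes, short-distance cutoff, amplitude

# harmonics from the long-distance cutoff k0/N up to the short-distance cutoff k0
n = np.arange(1, N + 1)
k = n * k0 / N
dk = k0 / N
phases = rng.uniform(0.0, 2.0 * np.pi, N)

def phi1(x):
    """Disordered source: sum of N cosines with uniform random phases.

    The sqrt(dk) amplitude keeps the disorder average of phi1^2 independent
    of N (an assumed convention)."""
    return Vbar * np.sqrt(dk) * np.cos(np.outer(k, x) + phases[:, None]).sum(axis=0)

period = 2.0 * np.pi * N / k0        # set by the lowest wavenumber k0/N
x = np.linspace(0.0, period, 4096, endpoint=False)
src = phi1(x)
```

With this normalization the spatial average of $\Phi_1^2$ over one period is $\bar{V}^2 k_0/2$, independent of $N$.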
\begin{figure}[h] \centering \subfigure{\label{fig:0a} (a) \includegraphics[height = 0.26\textheight]{phi_1_1D.pdf}} \subfigure{\label{fig:0b} (b) \includegraphics[height = 0.26\textheight]{phi_1_2D.jpg}} \caption{\label{figs:0} {\bf Disordered sources:} Plot (a) shows a scalar source $\Phi_1$ as a function of $x_1\,k_0$ at the boundary. Plot (b) is a density plot of $\Phi_1$, now in $d=3$, as a function of boundary directions $x_1\,k_0$ and $x_2\,k_0$. In both cases we have chosen $\bar{V}=0.1$. The characteristic width of the peaks in these plots is determined by the short distance cutoff, $\Delta x |_\text{peak} \sim \pi/k_0$.} \end{figure} Having these solutions at hand, there are several quantities we can monitor. We decided to focus on the entropy, since it is a direct probe of the infrared geometry. We will discuss the results for $d=2$ and $d=3$ separately, starting with $d=2$. In Fig.~\ref{fig:1a} we show the logarithmic derivative of the entropy as a function of the black hole temperature, for several values of the disorder amplitude $\bar{V}$. The data are represented by disks, and the solid lines indicate the analytic prediction of Eq.~(\ref{eq:sfinal}), namely $s \propto T^{(d-1)/z}$ with $z$ given by (\ref{eq:zfinal}). From top to bottom, we have $\bar{V}=0.1,0.2,\ldots,1.0$. The agreement between our perturbative analytic prediction and the numerics is striking. This numerical scaling result is compelling evidence for the emergence of a disordered fixed point at $T=0$, characterized by a dynamical critical exponent $z$. In $d=3$ the calculations are more involved, since we have to generate many solutions to the 3D PDE system we described above. This means we do not have as much data as for the $d=2$ case. In particular, we have focussed on a single value $\bar{V}=0.1$. We also have a narrower window of temperatures in which to access the IR scaling regime (\ref{eq:hierarchy}) because $N$ is smaller.
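The way $z$ is extracted from such data, reading $(d-1)/z$ off the logarithmic derivative $\mathrm{d}\ln s/\mathrm{d}\ln T$, can be sketched on synthetic data; the value of $z$ and the prefactor below are illustrative and are not taken from Eq.~(\ref{eq:zfinal}):

```python
import numpy as np

d, z_true = 2, 1.2                     # illustrative values, not from eq. (zfinal)
T = np.geomspace(1e-3, 1e-1, 40)
s = 2.7 * T ** ((d - 1) / z_true)      # synthetic entropy data standing in for the numerics

# logarithmic derivative: d ln s / d ln T = (d-1)/z for a scaling law
lnT, lns = np.log(T), np.log(s)
slope = np.gradient(lns, lnT)          # central differences in log-log variables

z_est = (d - 1) / np.median(slope)
```

For a pure power law the slope is constant; in practice one looks for a plateau in the slope as $T$ is lowered into the scaling window.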
In Fig.~\ref{fig:1b} we plot the logarithmic derivative of the entropy as a function of the black hole temperature for $\bar{V}=0.1$. The disks represent the data, and the solid line indicates the analytic prediction $s \propto T^{(d-1)/z}$ of (\ref{eq:sfinal}), with $z$ again given by (\ref{eq:zfinal}). Again, the agreement as the temperature is lowered is rather encouraging. \begin{figure}[h] \centering \subfigure{\label{fig:1a} (a) \includegraphics[height = 0.26\textheight]{scalings_3D.pdf}} \subfigure{\label{fig:1b} (b) \includegraphics[height = 0.26\textheight]{scalings_4D.pdf}} \caption{\label{fig:1} {\bf Emergence of an IR dynamical scaling exponent}. Plot (a) shows the logarithmic derivative of the entropy for several values of $\bar{V}$ in $d=2$. These plots have $N=50$. From top to bottom, we have $\bar{V}=0.1,0.2,\ldots,1.0$. Plot (b) shows the logarithmic derivative of the entropy for $\bar{V}=0.1$ in $d=3$. This plot has $N=5$.} \end{figure} The computations of the entropy we have discussed are for a given realization of disorder. This is the entropy we have been after. With the numerical data at hand, we can compare this (physical) entropy with the entropy of the averaged metric, as discussed in previous sections. The averaged metric is easily obtained from the numerics by integrating over $x$ and $y$ (see the more extended discussion in \cite{Hartnoll:2014cua}). To compare the averaged entropy with the entropy of the averaged metric, we computed the entropy of the averaged metric and the entropy of the full metric, subtracted one from the other, and found a maximum disagreement of around $1\%$. We then performed the same comparison for the dynamical critical exponent measured with the two entropies, and found a maximum disagreement of $10^{-4}\%$, which is well within the error of our integration scheme in $d=3$.
We take this as strong evidence that the dynamical critical exponent yields the same value whether measured with the entropy of the average or full metric, as we have argued in the previous section. This result also substantiates the claim in \cite{Hartnoll:2014cua} that the averaged metric is a useful bulk quantity for identifying the scaling properties of the IR fixed point. The reader might also be interested in the difference between the sources shown in Fig.~\ref{figs:0} and the scalar field evaluated at the horizon, $\Phi_\mathcal{H}$. For completeness, we show the latter in Fig.~\ref{figs:3}. We see that some of the structure of the UV clearly survives in the IR. At first sight, the disorder appears to have been smoothed out on the horizon relative to the sources shown in Fig.~\ref{figs:0}. However, this simply reflects the fact that, upon renormalizing down to the horizon, structure on scales smaller than the temperature scale has been integrated out. This illustrates the need to keep the long distance cutoff on the disorder distribution sufficiently large in order to access the correct disorder physics in the IR geometry. \begin{figure}[h] \centering \subfigure{\label{fig:3a} (a) \includegraphics[height = 0.26\textheight]{phi_H_1D.pdf}} \subfigure{\label{fig:3b} (b) \includegraphics[height = 0.26\textheight]{phi_H_2D.jpg}} \caption{\label{figs:3} {\bf Disordered horizons}. Plot (a) shows the scalar field $\Phi_\mathcal{H}$ evaluated at the horizon as a function of $x_1\,k_0$. Plot (b) is a density plot of $\Phi_\mathcal{H}$, now in $d=3$, as a function of boundary directions $x_1\,k_0$ and $x_2\,k_0$. The sources for these solutions are those shown in Fig.~\ref{figs:0}. Plot (a) is at temperature $T/k_0 = 0.0478$ while plot (b) has $T/k_0=0.0798$. The characteristic width of the peaks in these plots is now determined by the temperature scale, so that $\Delta x |_\text{peak} \sim \pi r_+$.
This is the expected statement that the temperature serves as the short distance cutoff on the disorder distribution at the horizon.} \end{figure} Finally, in an attempt to characterize the geometry of the horizon, we plot in Fig.~\ref{fig:4} the Ricci scalar, ${}^{(2)}R$, of the induced metric on a spatial cross section of the horizon: {\bf a disordered horizon}. \begin{figure}[h] \centering \includegraphics[height = 0.3\textheight]{ricci_2D.jpg} \caption{\label{fig:4} {\bf Disordered horizon - Ricci scalar}. Plot of the Ricci scalar of the induced metric on a spatial cross section of the horizon. The parameters used are the same as in Fig.~\ref{figs:3}. Because the metric depends on the square of the scalar field, the metric functions oscillate twice as quickly and hence the structures appear half the size of those in Fig.~\ref{figs:3}.} \end{figure} \section{Discussion} \label{sec:diss} In this paper we have presented evidence for the existence of a disordered fixed point in the far IR of a spacetime with a marginally relevant disordered boundary source. In addition to constructing numerical disordered black hole spacetimes, we showed that the dynamical critical exponent $z$ of the IR fixed point could be obtained by resumming logarithmic divergences that appear in perturbation theory in the strength of the disorder. However, this perturbation theory is an expansion about the UV spacetime. The whole point of fixed points is that they are self-contained and well defined without reference to a UV completion. Therefore, an intrinsic description of the disordered horizon (at $T=0$) as a solution to Einstein's equations should exist. Characterizing the disordered horizon in its own IR terms could potentially lead to a greatly expanded understanding of what extremal horizons can look like.
It would, presumably, explain why a na\"ive resummation of logarithmic divergences at low orders in perturbation theory appears to give the correct answer for the dynamical critical exponent. It would also clarify exactly which quantities can be accurately determined from the corresponding disorder averaged spacetime. Given the construction of the background geometries, it is very natural to study correlation functions in these backgrounds. If the full frequency and momentum dependence can be found, then this should verify the value of $z$ that we have found, giving correlators that are scaling functions of $\omega/k^z$. The study of transport, in particular, in these backgrounds was one of the motivations to construct these solutions in the first place. The disordered fixed point does not conserve momentum and will therefore have intrinsically finite transport properties \cite{Hartnoll:2014lpa}. This is a qualitatively distinct regime from the case in which disorder can be described as an irrelevant perturbation about a clean IR fixed point \cite{Hartnoll:2007ih, Hartnoll:2008hs, Hartnoll:2012rj, Lucas:2014sba, Lucas:2015vna}. Finally, having developed the numerical and analytical methods needed to understand disordered spacetimes, we may soon be in a position to tackle the more difficult case of relevant (rather than marginally relevant) disorder. \section*{Acknowledgements} We are grateful to Aristomenis Donos and Veronica Hubeny for helpful comments. SAH is partially supported by a DOE Early Career Award, a Sloan fellowship and the Templeton foundation. This work was undertaken on the COSMOS Shared Memory system at DAMTP, University of Cambridge operated on behalf of the STFC DiRAC HPC Facility. This equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1.
\section{Introduction} \label{s:intro} Over the last $\sim$\,half a dozen years, deep colour-magnitude diagrams (CMDs) from images taken with the Advanced Camera for Surveys (ACS) and the Wide Field Camera 3 (WFC3) aboard the Hubble Space Telescope (HST) revealed that several intermediate-age ($\sim$\,1\,--\,2 Gyr old) star clusters in the Magellanic Clouds host extended main sequence turn-off (MSTO) regions \citep{mack+08a,glat+08,milo+09,goud+09,goud+11b,goud+14,corr+14}, in some cases accompanied by composite red clumps \citep{gira+09,rube+11}. A popular interpretation of these extended MSTOs (hereafter eMSTOs) is that they are due to stars that formed at different times within the parent cluster, with age spreads of 150\,--\,500 Myr \citep{milo+09,gira+09,rube+10,rube+11,goud+11b,kell+12,corr+14,goud+14}. Alternative potential causes of the eMSTO phenomenon presented in the recent literature include spreads in rotation velocity among turn-off stars \citep[][but see Girardi et al. 2011]{basdem09,yang+13}, a photometric feature of interacting binary stars \citep{yang+11}, or a combination of both \citep{li+12}. An important aspect of the nature of the eMSTO phenomenon among intermediate-age star clusters is that it is not shared by all such clusters \citep[e.g][]{milo+09,goud+11a,corr+14}. In this context, \citet[][hereafter G11a]{goud+11a} suggested that the key factor in determining whether or not a cluster features an eMSTO is its ability to retain material ejected by first-generation stars that feature relatively slow stellar outflows (the so-called ``polluters''). Following the arguments presented in G11a, eMSTOs can only be hosted by clusters for which the escape velocity was higher than the wind velocity of such polluter stars at the time those stars were present in the cluster.
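For orientation, the escape-velocity criterion can be made quantitative with a simple point-mass estimate, $V_{\rm esc}=\sqrt{2GM/r}$. This is a rough sketch, not the cluster-profile-based calculation used in the works cited here, and the mass and radii below are purely illustrative:

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def v_esc(mass_msun, radius_pc):
    """Point-mass escape velocity sqrt(2 G M / r) in km/s."""
    return np.sqrt(2.0 * G * mass_msun / radius_pc)

# a 1e5 Msun cluster (illustrative) sampled at a few radii in pc
radii = np.array([1.0, 3.0, 10.0])
v = v_esc(1e5, radii)   # roughly 29, 17 and 9 km/s
```

A $10^5$ M$_{\odot}$ cluster thus straddles the $\sim$\,12\,--\,15 \kms\ regime at radii of a few pc, which is why cluster mass and compactness both enter the retention argument.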
Currently, the most popular candidates for first-generation ``polluters'' are {\it (i)\/} intermediate-mass AGB stars ($4 \la {\cal{M}}/M_{\odot} \la 8$, hereafter IM-AGB; e.g., \citealt{danven07}, and references therein), {\it (ii)\/} rapidly rotating massive stars (often referred to as ``FRMS''; e.g. \citealt{decr+07}) and {\it (iii)\/} massive binary stars \citep{demink+09}. As for the formation scenario, the two currently favored ones predict that the stars were formed from, or polluted by, gas that is a mixture of pristine material and material shed by such ``polluters''. In particular, in the ``in situ star formation'' scenario \citep[see e.g.,][]{derc+08,derc+10,conspe11}, subsequent generations of stars are formed out of gas clouds that were polluted to varying extents by winds of first-generation stars, during a period spanning up to a few hundred Myr, depending on the nature of the ``polluters''. Conversely, in the alternative ``early-disc accretion'' scenario \citep{bast+13}, the chemically enriched material, ejected from interacting high-mass binary systems or FRMS stars, is accreted onto the circumstellar discs of low-mass pre-main sequence (PMS) stars that were formed in the \emph{same generation} during the first $\simeq$ 20 Myr after the formation of the star cluster. Two recent studies provided support for the predictions of the ``in situ'' scenario. \citet[][hereafter G14]{goud+14} studied in detail HST photometry of a sample of 18 massive intermediate-age star clusters in the Magellanic Clouds that covered a variety of masses, ages, and radii. G14 found that all the star clusters in their sample host eMSTOs, featuring age spreads that correlate with the clusters' escape velocity $V_{\rm esc}$, both currently and at an age of 10 Myr.
Furthermore, \citet[][hereafter C14]{corr+14} studied 4 low-mass ($\approx 10^4$ M$_{\odot}$) intermediate-age star clusters in the Large Magellanic Cloud (LMC) and found that the two clusters that host eMSTOs have $V_{\rm esc} \ga$ 15 \kms\ out to ages of $\sim$ 100 Myr, whereas $V_{\rm esc} \la$ 12 \kms\ at all ages for the two clusters that do \emph{not} exhibit eMSTOs. These results suggest that the ``critical'' escape velocity for a star cluster to be able to retain the material ejected by the slow winds of the first-generation polluters lies in the approximate range of 12\,--\,15 \kms. Interestingly, these escape velocities are consistent with wind velocities of IM-AGB stars, which are in the range 12\,--\,18 \kms\ \citep{vaswoo93,zijl+96}, and with the low end of observed velocities of ejecta of the massive binary star RY Scuti \citep[15\,--\,50 \kms,][]{smit+02,smit+07}. However, in the context of the multiple population scenario, one piece of the puzzle that is still missing is the identification of an age spread (i.e., second-generation stars) in young massive star clusters. In fact, one of the predictions of the ``in situ'' scenario is that eMSTOs should also be observed in young star clusters if they have adequate properties in terms of mass and escape velocity. Young star clusters must be massive enough (i.e., M $\ga 10^5$ M$_{\odot}$) in order to have deep enough potentials and high enough escape velocities to retain material lost through stellar evolution of the first-generation stars and to be able to accrete new material from the pristine gas still present in the surroundings. Unfortunately, these strict constraints mean that young star clusters in which the eMSTO phenomenon is both expected and verifiable are very scarce. G11a identified the relatively young ($\sim$ 300 Myr) massive star cluster NGC~1856 in the LMC as a promising candidate.
Following this suggestion, \citet{bassil13} analyzed archival Wide Field \& Planetary Camera 2 (WFPC2) images of this cluster, finding no evidence for age spreads larger than $\sim$ 35 Myr, which the authors interpreted as a suggestion that the eMSTO feature in intermediate-age clusters cannot be due to age spreads. However, the data analyzed by \citet{bassil13} seem to have reached only $\simeq$\,2 mag beyond the MSTO, whose shape is vertical for the filters used in their study (F450W and F555W). This is not deep enough to sample a significant portion of the MS of the cluster, especially its curvature at lower stellar masses, which is needed to constrain several key parameters of the cluster such as age, metallicity, distance, and reddening. In principle, this could have prevented the identification of features that might point to the presence of second-generation stars. With this in mind, we present an analysis of new HST WFC3 photometry of NGC~1856. These data allowed us to sample the cluster population down to $\simeq$\,8 mag beyond the MSTO, obtaining deep CMDs that permit a thorough examination of the MSTO morphology and the nature of the cluster. We compare the observed CMD with Monte Carlo simulations in order to quantify whether it can be reproduced by one simple stellar population (SSP) or whether it is best reproduced by a range of ages. We also study the evolution of the cluster mass and escape velocity from an age of 10 Myr to its current age, to verify whether the cluster has the right properties to form and retain a second generation of stars. This analysis allows us to present new findings on the star formation history of the cluster NGC~1856 in the context of the multiple population scenario. The remainder of the paper is organized as follows: Section~\ref{s:obs} presents the observations and data reduction.
In Section~\ref{s:cmd} we present the observed CMD, describe the technique applied to correct the CMD for the presence of differential reddening effects, and derive the best-fit isochrones. In Section~\ref{s:mcsim} we compare the observed CMD with Monte Carlo simulations, both using one SSP and a combination of two SSPs with different ages. We derive pseudo-age distributions for the observed and simulated CMDs and compare them. In Section~\ref{s:excess} we investigate the presence of ongoing star formation in the cluster, looking for PMS stars of the second generation. In Section~\ref{s:rotation} we test the predictions of the stellar rotation scenario, while Section~\ref{s:dynamics} presents the physical and dynamical properties of the cluster, deriving the evolution of the cluster escape velocity as a function of age. Finally, in Section~\ref{s:conclusion}, we present and discuss our main results. \section{Observations and Data Reduction} \label{s:obs} \begin{figure} \includegraphics[width=1\columnwidth]{fig1.eps} \caption{Completeness fraction as a function of F438W magnitude and distance from the cluster center. The open circles and solid lines represent data within the core radius $r_c$, based on the \citet{king62} model fit derived as described in Section~\ref{s:kingmodel}; the open squares and dashed lines represent data between $r_c$ and $2\times r_c$, and the open triangles and dotted lines represent data outside $2\times r_c$.} \label{f:completeness} \end{figure} NGC~1856 was observed with HST on 2013 November 12 using the UVIS channel of the WFC3 as part of the HST program 13011 (PI: T. H.\ Puzia). The cluster was centered on one of the two CCD chips of the WFC3 UVIS camera, so that the observations cover enough radial extent to study variations within the cluster radius and to avoid the loss of the central region of the cluster due to the CCD chip gap. The cluster was observed in four filters, namely F438W, F555W, F658N, and F814W.
Two long exposures were taken in each of the four filters: their exposure times were 430 s (F438W), 350 s (F555W), 450 s (F814W) and 940 s (F658N). In addition, we took three short exposures in the F438W, F656N and F814W filters (185 s, 735 s and 51 s, respectively), to avoid saturation of the brightest RGB and AGB stars. The two long exposures in each filter were spatially offset from each other by 2\farcs401 in a direction +85\fdg76 with respect to the positive X-axis of the detector, in order to move across the gap between the two CCD chips, as well as to simplify the identification and removal of hot pixels and cosmic rays. In addition to the WFC3/UVIS observations, we used the Wide Field Camera (WFC) of ACS in parallel to obtain images $\approx 6'$ from the cluster center. These ACS images were taken with the F435W, F656N and F814W filters and provide valuable information on the stellar content and star formation history of the underlying LMC field, permitting us to establish in detail the field star contamination fraction in each region of the CMDs. To reduce the images we followed the method described in \citet{kali+12}. Briefly, we started from the \emph{flt} files provided by the HST pipeline, which constitute the bias-corrected, dark-subtracted and flat-fielded images. After correcting the \emph{flt} files for charge transfer inefficiency using the dedicated CTE correction software\footnote{http://www.stsci.edu/hst/wfc3/tools/cte\_tools}, we generated distortion-free images using MultiDrizzle \citep{fruchook02} and calculated the transformations between the individually drizzled images in each filter, linking them to a reference frame (i.e., the first exposure). Through these transformations we obtained an alignment of the individual images to better than 0.02 pixels. After flagging and rejecting bad pixels and cosmic rays from the input images, we created a final image for each filter, combining the input undistorted and aligned frames.
The final stacked images were generated at the native resolution of the WFC3/UVIS and ACS/WFC (i.e., 0\farcs040 pixel$^{-1}$ and 0\farcs049 pixel$^{-1}$, respectively). \begin{table} \begin{center} \begin{tabular}{ccccc} \hline \hline R & F438W & F555W & F658N & F814W \\ (1) & (2) & (3) & (4) & (5)\\ \hline $R < R_c$ & 22.68 & 22.50 & 21.56 & 22.06\\ $R_c < R < 2\cdot R_c$ & 24.78 & 24.52 & 22.33 & 23.88\\ $R > 2\cdot R_c$ & 26.55 & 26.48 & 22.60 & 25.76\\ \hline \end{tabular} \caption{50\% completeness limits in each band for three different radial intervals. (1): radius interval; (2)--(5): magnitude in each band corresponding to the 50\% completeness fraction.} \label{t:compl} \end{center} \end{table} To perform the stellar photometry, we used the stand-alone versions of the DAOPHOT-II and ALLSTAR point spread function (PSF) fitting programs \citep{stet87,stet94} on the stacked images. To obtain the final catalog, we first performed aperture photometry on all the sources that are at least 3$\sigma$ above the local sky, then derived a PSF from $\sim$ 1000 bright isolated stars in the field, and finally applied the retrieved PSF to all the sources detected through the aperture photometry. We retained in the final catalogs only the sources that were iteratively matched between the two images, and we cleaned them by eliminating background galaxies and spurious detections by means of $\chi^2$ and sharpness cuts from the PSF fitting. Photometric calibration has been performed using a sample of bright isolated stars to transform the instrumental PSF-fitted magnitudes to a fixed aperture of 10 pixels (0\farcs40 for WFC3/UVIS, 0\farcs49 for ACS/WFC). We then transformed the magnitudes into the VEGAMAG system by adopting the relevant synthetic zero points for the WFC3/UVIS and ACS/WFC filters. Positions and magnitudes, with the associated errors, for the first ten objects in our final catalog are reported in Table~\ref{t:phot} in Appendix~\ref{s:photometry}.
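The calibration step amounts to an aperture correction plus a zero point, $m_{\rm VEGA} = m_{\rm PSF} + {\rm apcor} + {\rm ZP}$. The sketch below uses toy magnitudes and a hypothetical zero point, not the actual WFC3/UVIS values:

```python
import numpy as np

# hypothetical VEGAMAG zero point for illustration only (not a mission value)
zp_f438w = 24.83

# toy instrumental magnitudes of bright isolated calibration stars
m_psf_bright = np.array([-9.98, -9.49, -9.00])   # PSF-fitted
m_ap_bright = np.array([-10.05, -9.56, -9.07])   # same stars, 10-pixel aperture

# aperture correction: median offset from PSF to fixed-aperture magnitudes
ap_corr = np.median(m_ap_bright - m_psf_bright)

def calibrate(m_psf):
    """Shift PSF magnitudes to the 10-pixel aperture, then to VEGAMAG."""
    return m_psf + ap_corr + zp_f438w

m_cal = calibrate(np.array([-5.0]))
```

Using the median of bright isolated stars keeps the aperture correction robust against the occasional blended or variable calibrator.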
We characterized the completeness as well as the photometric error distribution of the final photometry by performing artificial star tests, using the standard technique of adding artificial stars to the images and running them through the photometric routines that were applied to the drizzled images, using identical criteria. We added to each image a total of nearly 700000 artificial stars. To avoid inducing incompleteness due to crowding in the tests themselves, the fraction of stars injected into a given individual image was set to be $\sim$ 5\% of the total number of stars in the final catalogs. The overall distribution of the inserted artificial stars was chosen to follow that of the stars in the image. We distributed the artificial stars in magnitude in order to reproduce a luminosity function similar to the observed one, with a colour distribution that spans the full colour range found in the CMDs. After inserting the artificial stars, we applied to each image the photometry procedures described above. The stars were recovered blindly and automatically cross-matched to the input starlist containing the actual positions and fluxes; they were considered recovered if the input and output magnitudes agreed to within 0.75 mag in both filters. Finally, we assigned a completeness fraction to each individual star in a given CMD as a function of its magnitude and distance from the cluster center. The completeness fraction of stars as a function of F438W magnitude and distance from the cluster center, in three different radial intervals, is shown in Figure~\ref{f:completeness}. The magnitudes corresponding to the 50\% completeness fraction in each band, for the same intervals, are reported in Table~\ref{t:compl}.
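The completeness bookkeeping described above (a star is recovered if its input and output magnitudes agree within 0.75 mag; completeness is the recovered fraction per magnitude bin) can be sketched with synthetic artificial-star data; the detection model below is a toy assumption, not the actual behaviour of the images:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic artificial-star test: injected magnitudes
m_in = rng.uniform(18.0, 27.0, 50000)

# toy detection model: detection probability falls off beyond mag ~ 24.8
p_det = 1.0 / (1.0 + np.exp((m_in - 24.8) / 0.4))
detected = rng.random(m_in.size) < p_det
m_meas = m_in + rng.normal(0.0, 0.05, m_in.size)   # measured magnitude if detected

# a star counts as recovered if input and output magnitudes agree within 0.75 mag
recovered = detected & (np.abs(m_meas - m_in) < 0.75)

# completeness fraction per magnitude bin
bins = np.arange(18.0, 27.5, 0.5)
idx = np.digitize(m_in, bins) - 1
completeness = np.array([recovered[idx == i].mean() for i in range(len(bins) - 1)])
```

In the real test the same binning is additionally done in radial intervals, since crowding makes completeness a strong function of distance from the cluster center.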
\section{Colour-Magnitude Diagram Analysis} \label{s:cmd} \begin{figure} \includegraphics[width=1\columnwidth]{fig2.eps} \caption{Observed F438W vs.\ F438W $-$ F814W CMD for all the stars inside the core radius, based on the \citet{king62} model fit derived as described in Section~\ref{s:kingmodel}. Contamination from the underlying LMC field population has been derived from a region near the corner of the image, with the same surface area adopted for the cluster stars, and superposed on the cluster CMD (green open squares). Magnitude and colour errors, derived using the photometric distribution of our artificial stars within the core radius, are shown on the left side of the CMD. The reddening vector is also shown for $A_V = 0.4$.} \label{f:cmdobs} \end{figure} \subsection{A wide main sequence turn-off region} \label{s:eMSTO} Figure~\ref{f:cmdobs} shows the F438W $-$ F814W vs.\ F438W CMD of NGC~1856, plotting only the stars within the core radius, based on the \citet{king62} model fit derived as described in Section~\ref{s:kingmodel}. The observed CMD presents some interesting features. While the lower part of the MS (i.e., below the ``kink'' at F438W $\simeq$~21.2) is quite narrow and well defined, the MSTO region (F438W $\la$~19.5) appears relatively wide when compared with the photometric errors. To determine whether the observed broadening of the MSTO region is a ``real'' feature in the CMD, we first test whether it could be caused by the following effects: contamination by the underlying field population, poor photometry (i.e., large photometric uncertainties), the presence of differential reddening, and/or a significant binary fraction. To assess the level of contamination by the underlying LMC field population, we selected a region near the corner of the image with the same surface area adopted for the cluster stars. Stars located in this background region have been superposed on the cluster's CMD (shown as green open squares in Figure~\ref{f:cmdobs}).
The contamination is mainly confined to the lower (faint) part of the MS and does not affect the MSTO and the Red Clump (RC) in any significant fashion; thus we can conclude that underlying LMC field stars do not cause the observed shape of the MSTO or RC regions. Photometric uncertainties have been derived from the artificial star tests. Magnitude and colour errors are shown in Figure~\ref{f:cmdobs}; photometric errors at the MSTO level are between 0.02 and 0.04 mag, far too small to account for the broadening of the MSTO. \begin{figure} \includegraphics[width=1\columnwidth]{fig3.eps} \caption{CMDs for the 9 subregions in which we divided the cluster field, derived as described in Section~\ref{s:cmd}. Best-fit isochrones (black lines) from \citet{mari+08} are superposed in each CMD, along with the derived visual extinction $A_V$. The top legend shows the age, distance modulus $(m-M)_0$ and metallicity adopted for all the isochrones.} \label{f:cmdreg} \end{figure} \subsection{The impacts of differential reddening and binary fractions} \label{s:difred} \begin{figure} \hspace{-0.75truecm} \includegraphics[width=9truecm]{fig4.eps} \caption{Spatial distribution of the differential reddening in NGC~1856. The colour of each star represents the final $\Delta E(B-I)$ derived from the differential reddening correction inside the core radius region. The colour coding is shown at the top.} \label{f:map} \end{figure} To check for the presence of differential reddening in the cluster field we adopted the following approach. First of all, we selected a square region that circumscribes the area defined by the core radius (i.e., the length of the side of the square is equal to twice the core radius), we divided this region into 9 equal-sized squares, and we compared the CMDs derived in each of them.
The number of equal-sized regions was chosen arbitrarily, so as to sample the different regions well while maintaining a statistically significant number of objects in each field. This approach has been adopted only to check whether differential reddening effects are present in the cluster field and to obtain a rough estimate of the reddening variation. Figure~\ref{f:cmdreg} shows the CMDs for the 9 subregions; to yield a preliminary hint of the amount of differential reddening, we superimposed on each CMD an isochrone from \citet{mari+08} for which age, distance modulus and metallicity are fixed (see top legend of Figure~\ref{f:cmdreg}), while the visual extinction $A_V$ is left free to vary. Figure~\ref{f:cmdreg} clearly shows that the best fit is achieved in each CMD using a different value of the visual extinction $A_V$ (derived values of $A_V$ are reported in each field), confirming that differential reddening is present in the cluster field, with variations of the order of $\sim \pm$ 0.15 mag with respect to the mean reddening value. Having verified that differential reddening is truly present in the cluster field, we corrected each star in the CMD using the following approach \cite[for a more detailed description, see][]{milo+12}. Briefly, we first defined a photometric reference frame in which the X axis is parallel to the reddening line. In this reference system, it is much easier to determine reddening differences than in the original colour-magnitude plane, where the reddening vector is an oblique line. To do this, we arbitrarily defined an origin {\it O}, then translated the CMD such that the origin of the new reference frame corresponds to {\it O}, and then rotated the CMD counterclockwise by the angle defined by the equation: \begin{equation} \theta = \arctan \left( \frac{A_{F438W}}{A_{F438W} - A_{F814W}} \right) \end{equation} where $A_{F438W}$ and $A_{F814W}$ are the appropriate extinction coefficients for the UVIS WFC3 filters.
Using the \citet{card+89} and \citet{odon94} extinction curves and adopting $R_V = 3.1$, their values are $A_{F438W}$ = 1.331$\cdot A_V$ and $A_{F814W}$ = 0.610$\cdot A_V$, respectively. At this point, we generated a fiducial line using only MS stars. We divided the sample into bins of fixed ``magnitude'' and calculated the medians along the X and Y axes. The use of the median allows us to minimize the influence of outliers such as binary stars or stars with poor photometry left in the sample after the selection. Among the MS stars, we selected a subsample located in the region where the reddening line defines a wide angle with the fiducial line, so that the shift in colour can be safely interpreted as an effect of the differential reddening. For this reason, we limited our sample to the central portion of the cluster MS, excluding the upper part near the MSTO and the lower portion, fainter than the magnitude at which the MS starts to bend in a direction parallel to the reddening line. For each selected star, we calculated the distance from the fiducial line along the reddening direction (i.e. $\Delta$X) and used these stars as reference stars to estimate the reddening variations associated with each star in the CMD. Finally, to correct each star, we selected the nearest 30 reference stars and calculated the median $\Delta$X, which is adopted as the reddening correction for that particular star (to derive the differential reddening suffered by the reference stars themselves, we excluded the star in question from the computation of the median $\Delta$X). We note that the median correlation length introduced with this approach is of the order of $\simeq$\,20 px (corresponding to 0.2 pc). After the median $\Delta$X values have been subtracted from the X-axis value of each star in the rotated CMD, we obtained an improved diagram that we used to derive a more accurate selection of MS reference stars and a more precise fiducial line.
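The core of this correction can be sketched as follows (a minimal sketch using numpy; the function and array names are illustrative, not those of our actual pipeline):

```python
import numpy as np

# Extinction ratios quoted in the text (Cardelli et al. 1989 / O'Donnell 1994, R_V = 3.1)
A_F438W, A_F814W = 1.331, 0.610                       # per unit A_V
THETA = np.arctan(A_F438W / (A_F438W - A_F814W))      # counterclockwise CMD rotation angle

def rotate_cmd(colour, mag, theta=THETA):
    """Rotate (colour, magnitude) so the new X axis is parallel to the reddening line."""
    x = colour * np.cos(theta) + mag * np.sin(theta)
    y = -colour * np.sin(theta) + mag * np.cos(theta)
    return x, y

def reddening_shift(xpix, ypix, xpix_ref, ypix_ref, dx_ref, n_ref=30):
    """Median Delta-X of the n_ref spatially nearest reference stars.

    dx_ref holds, for every reference star, its displacement from the
    fiducial line measured along the reddening direction (Delta X)."""
    dist = np.hypot(xpix_ref - xpix, ypix_ref - ypix)
    nearest = np.argsort(dist)[:n_ref]
    return np.median(dx_ref[nearest])
```

Each star's shift is then subtracted from its rotated X coordinate before transforming back to F438W and F814W magnitudes; when correcting a reference star itself, that star is excluded from the median.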
We then applied the described procedure again, iterating the process until it converged (in this case, a couple of iterations were sufficient). At this point, the corrected X and Y ``magnitudes'' were converted back to the F438W and F814W magnitudes. Figure~\ref{f:map} shows the spatial distribution of the differential reddening inside the core radius of NGC~1856; each star is colour-coded according to the final $\Delta E({\rm F438W}-{\rm F814W})$ applied to correct it, derived with the method described above. In this context, we observe that the CMD in the central panel of Figure~\ref{f:cmdreg}, which represents the innermost region of the cluster, shows a quite wide MSTO compared to the other panels. Conversely, in the corresponding spatial region from which the CMD is drawn (i.e., the central part in the map of Figure~\ref{f:map}) the derived reddening is quite uniform, with $\Delta E({\rm F438W}-{\rm F814W}) \simeq$ 0.04 (corresponding to $\Delta A_V \simeq$ 0.055). Moreover, it is worth noting that the reddening correction reaches its highest accuracy in this inner region, due to the high surface number density of stars (i.e., smaller distance of the star to be corrected from the reference stars used to derive $\Delta A_V$, see also Section~\ref{s:ssp1}). \begin{figure*} \includegraphics[width=11cm]{fig5.eps} \caption{Differential-reddening-corrected F438W - F814W versus F438W CMD for NGC~1856. Best-fit isochrones from \citet{mari+08}, for the minimum (red solid line) and maximum (blue dashed line) ages (300 and 410 Myr, respectively) that can be accounted for by the data are superposed on the cluster CMD, along with the derived distance modulus $(m-M)_0$ and visual extinction $A_V$. The parallelogram box used to select MSTO stars in order to derive the pseudo-age distribution, as described in Section~\ref{s:mcsim}, is also shown.
} \label{f:cmdcorr} \end{figure*} We acknowledge that our differential reddening correction might be somewhat ineffective for a \emph{small} fraction of stars, depending on their location in the field, e.g., in the outer regions, where the distance between stars is generally larger than in the inner regions. However, the number of stars in such regions is by nature relatively low (see, e.g., the CMDs in the corner panels of Figure~\ref{f:cmdreg}). Thus, only a small fraction of MSTO stars can suffer from a less accurate correction, leaving the global morphology of the MSTO region unaltered. The differential-reddening-corrected (DRC) CMD is shown in Figure~\ref{f:cmdcorr}. The correction caused the lower part of the MS to become even narrower and better defined than it was in the observed CMD. Conversely, the MSTO region and upper MS are still fairly wide, although they have narrowed slightly. This is illustrated in more detail in Figure~\ref{f:cfr_obs}, which shows the distribution of F438W\,$-$\,F814W colours in the magnitude range $18.7 \leq {\rm F438W} \leq 19.2$\footnote{This magnitude range was selected in this context because it reflects the part of the upper MS for which the binary sequence is merged with the single star sequence (see Fig.~\ref{f:cfr1}). Hence, our result is robust against any given binary fraction.} mag for the observed CMD and that obtained for the DRC CMD (red dashed and black solid lines, respectively). Note that the global shape and width of the two distributions are very similar, except for the fact that the observed one has more extended outer wings than the one derived from the DRC CMD. These extended wings are caused by stars that are located in the region where the differential reddening effects are stronger, and which have therefore been shifted from their original position in the CMD.
The fact that the shape of the two distributions is very similar within their FWHM values suggests that the observed broadening in the MSTO region is an intrinsic feature, rather than due to differential reddening. Given the above, we judge it very unlikely that differential reddening effects or binary stars can explain the observed broadening of the MSTO region. \subsection{Isochrone fitting} \label{s:iso} Isochrone fitting was done following the methods described in detail in \citet[][hereafter G11b]{goud+11b} for the case of star clusters with ages that are too young to have a well-developed red giant branch. Briefly, age and metallicity were derived using the observed differences in magnitude and colour between the MSTO and the RC; we selected all the isochrones for which these values lie within $2\sigma$ of those derived from the CMD. For the set of isochrones that satisfied our selection criteria (5\,--\,10 isochrones), we found the best-fit values for distance modulus and reddening by means of a least-squares fit to the magnitudes and colours of the MSTO and RC. Finally, we overplotted the isochrones onto the CMDs and selected the best-fitting ones by means of a visual examination. In this context, we superposed on the cluster CMD two isochrones from \citet{mari+08} of different ages (300 Myr and 410 Myr, respectively), shown in Figure~\ref{f:cmdcorr} with different colours and line styles (red solid line for the younger isochrone and blue dashed line for the older one, respectively). The isochrone ages were chosen to match the minimum and maximum age that can be accounted for by the data, with the isochrone fitting performed as described above. Taking the results shown in Sections \ref{s:eMSTO}\,--\,\ref{s:iso} at face value, it seems that the morphology of the MSTO is not very well reproduced by a SSP, and that a spread in age of the order of $\sim$ 100 Myr may constitute a better fit to the data.
This is further addressed below (Section~\ref{s:ssp1} and Section~\ref{s:ssp2}). \section{Monte Carlo Simulations} \label{s:mcsim} \begin{figure} \includegraphics[width=1\columnwidth]{fig6.eps} \caption{Comparison between the colour distributions obtained from the differential-reddening-corrected CMD (black solid line) and the observed CMD (red dashed line).} \label{f:cfr_obs} \end{figure} To further determine whether or not a single population can reproduce the observed CMD, we conducted Monte Carlo simulations of a synthetic cluster with the properties implied by the isochrone fitting (see G11a and G14 for a detailed description of the method adopted to produce these simulations). Briefly, to simulate a SSP with a given age and chemical composition, we populated an isochrone with stars randomly drawn using a Salpeter mass function and normalized to the observed (completeness-corrected) number of stars. To a fraction of these sample stars, we added a component of unresolved binary stars derived from the same mass function, using a flat distribution of primary-to-secondary mass ratios. The binary fraction was estimated by using as a template the part of the observed MS between the MSTO region and the TO of the background population, in the magnitude range $19.5 \la F438W \la 22.5$ (see Fig.~\ref{f:cmdobs}). We found that the binary fraction that best fits the data is $\simeq$\,25\%. We estimated that the internal systematic uncertainty in the binary fraction is $\simeq$\,5\%; for the purposes of this work, the results do not change significantly within 10\% of the binary fraction. Finally, we added photometric errors that were derived using our artificial star tests. We derived three different sets of synthetic CMDs: one with a single SSP of age 300 Myr, and two combining two SSPs of different ages (one with SSPs with ages of 300 Myr and 380 Myr, and the other with ages of 300 Myr and 410 Myr, respectively).
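The sampling step common to all these simulations can be sketched as follows (a minimal sketch assuming numpy; the mass limits and helper names are illustrative only, not those of our actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def salpeter_masses(n, m_lo=0.5, m_hi=5.0, alpha=2.35):
    """Draw n masses from a Salpeter mass function dN/dm ~ m^-alpha (inverse transform)."""
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

def add_binaries(m1, f_bin=0.25):
    """Attach an unresolved companion (flat primary-to-secondary mass-ratio
    distribution) to a fraction f_bin of the stars; 0 marks single stars."""
    is_bin = rng.uniform(size=m1.size) < f_bin
    q = rng.uniform(size=m1.size)
    return np.where(is_bin, q * m1, 0.0)

def combine_mags(mag1, mag2):
    """Magnitude of an unresolved pair: the component fluxes add."""
    return -2.5 * np.log10(10**(-0.4 * mag1) + 10**(-0.4 * mag2))
```

Magnitudes for each component are interpolated from the isochrone at the drawn masses, unresolved pairs are merged with `combine_mags`, and photometric errors from the artificial-star tests are added afterwards.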
In the last two cases, we derived a set of simulations in which we changed the ratio of the number of stars in the two populations, from 10\% to 90\% of stars in the younger SSP, in steps of 5\%. To compare in detail the observed MSTO region with the simulated ones, we created ``pseudo-age'' distributions (see G11a for a detailed description). Briefly, pseudo-age distributions are derived by constructing a parallelogram across the MSTO, with one axis approximately parallel to the isochrones and the other approximately perpendicular to them (the adopted parallelogram is shown in Figure~\ref{f:cmdcorr}). The (F438W\,$-$\,F814W, F438W) coordinates of the stars within the parallelogram are then transformed into the reference coordinate frame defined by the two axes of the parallelogram, and the same procedure is applied to the isochrone tables to set the age scale along this axis. The pseudo-age distributions are calculated using the non-parametric Epanechnikov-kernel density function \citep{silv86}, in order to avoid possible biases that can arise if fixed bin widths are used. Finally, a polynomial least-squares fit between age and the coordinate in the direction perpendicular to the isochrones yields the final pseudo-age distributions. The described procedure is applied both to the observed and the simulated CMDs. In the following sections, we describe the results obtained from the comparison of the observed and simulated pseudo-age distributions, for the synthetic CMDs obtained with a single SSP and for those obtained from the combination of two SSPs of different ages. \subsection{Comparison with the synthetic CMD of a SSP} \label{s:ssp1} \begin{figure} \includegraphics[width=1\columnwidth]{fig7a.eps}\\ \includegraphics[width=1\columnwidth]{fig7b.eps} \caption{Top panel: comparison between DRC (gray dots) and simulated CMDs (red circles: single stars; magenta squares: binary stars).
The simulated CMD is obtained from a Monte Carlo simulation of a SSP with an age of 300 Myr. The parallelogram box used to select MSTO stars and the fraction of the adopted binary stars are also reported. Bottom panel: pseudo-age distribution for the MSTO region of the DRC (black solid line) and simulated (red dashed line) CMDs.} \label{f:cfr1} \end{figure} The top panel of Figure~\ref{f:cfr1} shows the comparison between the observed and simulated CMD, the latter obtained from a single SSP with an age of 300 Myr. Overall, the SSP simulation reproduces the CMD features quite well in the fainter portion of the cluster MS (i.e., for F438W $\ga$ 20.5 mag). Conversely, we note that the MSTO region\footnote{i.e., the part of the MS with F438W $\la$ 19.2 mag, where the binary sequence has joined the single star sequence in F438W\,$-$\,F814W} of the simulated SSP does not reproduce the observed MSTO region very well, in that the latter is wider than the former. Furthermore, the faint end of the RC also does not seem to be reproduced well by the simulated SSP, in that the simulated luminosity function of RC stars peaks more strongly at bright magnitudes than the observed one. Note that this is consistent with a fraction of stars in NGC~1856 having ``older'' ages than that of the best-fit SSP (see for reference the shape and location of the RC for the older isochrone in Figure~\ref{f:cmdcorr}). This difference is even more evident in a comparison of the observed and simulated pseudo-age distributions (see bottom panel of Figure~\ref{f:cfr1}). Indeed, the observed pseudo-age distribution (shown as a black solid line in Figure~\ref{f:cfr1}) reaches significantly older ages than the simulated one (red dashed line in Figure~\ref{f:cfr1}), while the two distributions are very similar to each other in the left (``young'') half of the respective profiles.
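The pseudo-age distributions compared here are non-parametric Epanechnikov-kernel density estimates (Section~\ref{s:mcsim}); a minimal sketch of such an estimator (numpy assumed, with an illustrative bandwidth $h$) is:

```python
import numpy as np

def epanechnikov_kde(grid, samples, h):
    """Kernel density estimate with the Epanechnikov kernel K(u) = 0.75(1 - u^2)
    for |u| <= 1, which avoids the binning biases of a fixed-width histogram
    (Silverman 1986)."""
    u = (grid[:, None] - samples[None, :]) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    return k.sum(axis=1) / (samples.size * h)
```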
To quantify the difference between the pseudo-age distribution of the cluster data and that of the SSP simulation in terms of the {\it intrinsic} MSTO width of the cluster, we measured the widths of the two sets of distributions at 50\% of their maximum values (hereafter called FWHM), using quadratic interpolation. The intrinsic pseudo-age range of the cluster is then estimated by subtracting the simulation width in quadrature: \begin{equation} {\it FWHM}_{\rm MSTO} = ({\it FWHM}^{2}_{\rm obs} - {\it FWHM}^{2}_{\rm SSP})^{1/2} \label{eq:fwhm} \end{equation} where the ``obs'' subscript indicates a measurement on the DRC CMD and the ``SSP'' subscript indicates a measurement on the simulated CMD for the SSP. With ${\it FWHM}_{\rm obs}$ = 269 Myr and ${\it FWHM}_{\rm SSP}$ = 198 Myr, we obtain ${\it FWHM}_{\rm MSTO} \approx$ 182 Myr, which is similar to the value of ${\it FWHM}_{\rm SSP}$. This suggests that NGC~1856 may host two SSPs separated in age by about ${\it FWHM}_{\rm MSTO}/2 \sim$\,90 Myr (see Sect.~\ref{s:ssp2} below). In this context, we also adopted equation~\ref{eq:fwhm} to derive the reddening variation that would be necessary to reproduce the observed pseudo-age distribution under the assumption that the cluster is formed by a SSP. Instead of age, we measured the FWHM of the observed and simulated pseudo-age distributions in terms of F438W $-$ F814W colour; using the appropriate extinction coefficients for the UVIS WFC3 filters, we transformed the ``$\Delta$ colour'' into $\Delta A_V$, obtaining $\Delta A_V$ = 0.27. The derived $\Delta A_V$ is significantly larger than the reddening variations observed in the spatial differential reddening map in the innermost region of the cluster. Moreover, we note that due to the uniform shape of the observed pseudo-age distribution, this large reddening would have to affect a significant number of stars, making the possibility that reddening can mimic the presence of multiple populations even less plausible.
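As a numerical check, the quadrature subtraction of equation~\ref{eq:fwhm} applied to the values quoted above reads (a trivial sketch):

```python
import numpy as np

def intrinsic_fwhm(fwhm_obs, fwhm_ssp):
    """Intrinsic MSTO width: the SSP simulation width subtracted in quadrature."""
    return np.sqrt(fwhm_obs**2 - fwhm_ssp**2)

# Values quoted in the text (Myr): sqrt(269^2 - 198^2) ~ 182 Myr
width = intrinsic_fwhm(269.0, 198.0)
```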
Therefore, taking these results at face value, they seem to suggest that a SSP is not able to reproduce the observed morphology of the MSTO region. \begin{figure} \includegraphics[width=1\columnwidth]{fig8.eps} \caption{Comparison between the observed pseudo-age distribution (from the DRC CMD, black solid line) and the ones obtained from the combination of two SSPs with different ages (300 Myr and 410 Myr, top panels; 300 Myr and 380 Myr, bottom panels), shown as red dashed lines. In each panel we report the adopted ages of the two SSPs and the number ratio between the younger SSP and the older one. Two-tailed {\it p} values obtained for the K-S test are also shown.} \label{f:cfr2} \end{figure} \subsection{Comparison with a synthetic CMD of two SSPs of different age} \label{s:ssp2} \begin{figure*} \includegraphics[width=1\columnwidth]{fig9a.eps} \includegraphics[width=1\columnwidth]{fig9b.eps} \caption{Left panels: DRC CMDs zoomed in on the upper portion of the MS with superimposed isochrones (red solid and blue dashed lines for the young and the old isochrones, respectively) from \citet{mari+08}, along with the adopted ages (left panel: 300 and 410 Myr; right panel: 300 and 380 Myr). Dotted lines represent the magnitude and colour cuts adopted to select RC stars, whereas the dot-dashed lines represent the magnitude cut used to divide the two parts of the RC. Right panels: ratios between the stars in the upper portion of the RC and their total number as a function of the fraction of the young SSP (top panel: SSPs with ages of 300 and 410 Myr; bottom panel: SSPs with ages of 300 Myr and 380 Myr). The ratio obtained in the observed DRC CMD is reported as a dashed line in both panels. The typical uncertainty ($\approx$ 10\%) in the calculated number ratio is also shown.} \label{f:countRC} \end{figure*} As stated in Section~\ref{s:mcsim}, we also simulated synthetic CMDs combining two SSPs with different ages.
One set is obtained using SSPs with ages of 300 Myr and 410 Myr and the other with ages of 300 Myr and 380 Myr. For both cases, we derived a set of simulations in which we varied the ratio of the number of stars in the young and the old SSP, from 10\% to 90\%, with a step of 5\%. For each simulated CMD we obtained the corresponding pseudo-age distribution, which we compared with the observed one. The fact that the stars from the two SSPs are mixed together in the parallelogram used to derive the pseudo-age distribution, combined with our purely statistical approach, causes an increase in the ``free parameters'' and introduces a sort of degeneracy in the derived results. This is reflected in the fact that we obtained more than one simulated pseudo-age distribution that reasonably reproduces the observed one. These ``best-fitting'' pseudo-age distributions are shown in Figure~\ref{f:cfr2}. In detail, the top panels show the best fits achieved for the SSPs with ages of 300 Myr and 410 Myr (with mass fractions of ``young'' stars of 40\% and 50\%, respectively), whereas in the bottom panels we show the ones obtained for the SSPs with ages of 300 Myr and 380 Myr (with mass fractions of young stars of 40\% and 55\%, respectively). The ages of the two SSPs and the ratio between them are reported in each panel. We note that the best-fit mass ratios of the young to the old population seem to be around 1:1, consistent with the observed pseudo-age distribution, where we see a hint of two peaks with a similar maximum value and hence a likely number ratio of $\approx$\,1:1 for young vs.\ old stars in the cluster. To constrain which combination of two SSPs provides the best solution to reproduce the observed CMD of NGC~1856, we performed two-sample Kolmogorov-Smirnov (K-S) tests. The two-tailed {\it p} values are reported in Figure~\ref{f:cfr2} for each pseudo-age distribution.
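The two-sample K-S statistic underlying these tests is simply the maximum distance between the two empirical cumulative distributions; a minimal numpy sketch is given below (the two-tailed {\it p} value itself follows from the standard asymptotic distribution of this statistic, as implemented in common statistics packages):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum absolute distance
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    xs = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, xs, side='right') / a.size
    cdf_b = np.searchsorted(b, xs, side='right') / b.size
    return np.abs(cdf_a - cdf_b).max()
```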
Taking these results at face value, it seems that the combination of SSPs with ages of 300 and 380 Myr provides a better solution than the SSPs with ages of 300 and 410 Myr. To further constrain the two-SSP fits, we turned our attention to the distribution of RC stars, adopting the following approach. The left panels of Figure~\ref{f:countRC} show the DRC CMDs zoomed in to the RC and the upper portion of the MS, with superimposed isochrones from \citet{mari+08}, with the same ages as those adopted in the SSPs (left sub-panel: 300 and 410 Myr; right sub-panel: 300 and 380 Myr, respectively). First, we selected cluster RC stars by applying colour and magnitude cuts (shown as dotted lines in the left panels of Figure~\ref{f:countRC}). Then, we divided the RC stars into two parts, applying a magnitude threshold (shown as dot-dashed lines in Figure~\ref{f:countRC}) coinciding with the brightest point of the RC of the ``old'' isochrone (i.e., F438W = 19.3 mag for the 410 Myr isochrone and F438W = 19.15 mag for the 380 Myr isochrone). Finally, we counted the stars in the upper and lower portions of the RC, both in the observed data and in each synthetic CMD obtained from the associated Monte Carlo simulations. The ratios between the stars in the upper portion of the RC and their total number, as a function of the mass fraction of the young SSP in each simulation, are shown in the right panels of Figure~\ref{f:countRC} (for SSPs with ages of 300 Myr and 410 Myr in the top panel and for SSPs with ages of 300 Myr and 380 Myr in the bottom panel, respectively). The ratios from the DRC CMD, obtained with the different magnitude cuts, are shown as dashed lines. The typical uncertainty on these ratios has been estimated to be of order 10\%, shown as an error bar in both panels.
Taking these results at face value, it seems that the best match is obtained from the synthetic CMD obtained from the SSPs with ages of 300 Myr and 380 Myr and with a fraction of young stars of 0.55\,$\pm$\,0.10 (cf.\ bottom right panel in Figure~\ref{f:cfr2}). Globally, the results presented in Sections~\ref{s:ssp1} and \ref{s:ssp2} show that a single SSP fails to reproduce the observed pseudo-age distribution, while a combination of two SSPs with different ages can provide a very good fit. This seems to indicate that a spread in age may be present in the cluster and cannot be categorically excluded, as argued by \citet{bassil13}. \section{Constraints on ongoing star formation activity} \label{s:excess} \begin{figure} \includegraphics[width=1\columnwidth]{fig10.eps} \caption{Top panels: F555W - F814W vs.\ F555W - F658N colour-colour diagram for the stars inside the core radius (left panel) and in the control field (right panel), representing the contamination from the background LMC stars (same field adopted in Figure~\ref{f:cmdobs}). The dashed line represents the median F555W - F658N colour, representative of stars with no $H{\alpha}$ excess. Stars with a F555W - F658N colour at least $5 \sigma$ above that reference line, where $\sigma$ is the uncertainty on the F555W - F658N colour of the star, are reported as open red squares. Those among them with $EW_{H{\alpha}} > 10$~\AA, where the equivalent width of the $H{\alpha}$ emission line ($EW_{H{\alpha}}$) is calculated using Eq.~4 of \citet{dema+10}, are reported as solid blue circles. Bottom panels: CMDs of the cluster and of the control field with overplotted stars selected from the F555W - F814W vs.\ F555W - F658N colour-colour diagram (solid blue circles).} \label{f:excess_ha} \end{figure} To study whether NGC~1856 \emph{presently} hosts ongoing star formation, we performed a search for PMS stars by means of narrow-band imaging of the H$\alpha$ line.
H$\alpha$ emission is a good indicator of the PMS stage: the presence of strong H${\alpha}$ emission ($EW_{H{\alpha}} > 10$~\AA) in young stellar objects is normally interpreted as a signature of the mass accretion process onto the surface of the object, which requires the presence of an inner disk \citep[see][and references therein]{feimon99,whibas03}. To do this, we used the method described in \citet{dema+10}, which combines \emph{V} and \emph{I} broad-band photometry with narrow-band $H{\alpha}$ imaging to identify all the stars with excess $H{\alpha}$ emission and to measure their $H{\alpha}$ luminosity. In the top panels of Figure~\ref{f:excess_ha} we show the F555W\,$-$\,F814W vs.\ F555W\,$-$\,F658N colour-colour diagram for the stars inside the cluster core radius (top left panel) and for the stars in the ``control field'', the same field we used to derive the contamination by the background LMC population in Figure~\ref{f:cmdobs} (top right panel). We used the median F555W\,$-$\,F658N colour of stars with small ($<$ 0.05 mag) photometric uncertainties in each of the three F555W, F814W, and F658N bands, as a function of F555W\,$-$\,F814W, to define the reference template with respect to which excess $H{\alpha}$ emission is identified (shown by the dashed line in Figure~\ref{f:excess_ha}). We selected a first candidate sample of stars with $H{\alpha}$ excess emission by considering all those with a F555W\,$-$\,F658N colour at least $5 \sigma$ above that reference line, where $\sigma$ is the uncertainty on the F555W\,$-$\,F658N colour of the star in question (shown as open red squares in Figure~\ref{f:excess_ha}).
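The selection just described, together with the equivalent-width conversion of \citet{dema+10} applied next, can be sketched as follows (a minimal sketch with numpy; variable names are illustrative, and `delta_mag` is the $H{\alpha}$ colour excess in the De Marchi et al.\ sign convention):

```python
import numpy as np

RW = 17.68  # rectangular width of the F658N filter (De Marchi et al. 2010, their Table 4)

def halpha_excess(c_658, c_658_ref, sigma_c, nsigma=5.0):
    """Flag stars whose F555W - F658N colour lies at least nsigma above the
    no-excess reference line (the median colour of well-measured stars)."""
    return (c_658 - c_658_ref) >= nsigma * sigma_c

def ew_halpha(delta_mag):
    """Equivalent width of the H-alpha emission line from the measured colour excess."""
    return RW * (1.0 - 10.0**(-0.4 * delta_mag))
```

Bona-fide PMS candidates are then the flagged stars whose resulting equivalent width exceeds 10~\AA.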
Then we calculated the equivalent width of the $H{\alpha}$ emission line ($EW_{H{\alpha}}$) from the measured colour excess, using the following equation from \citet{dema+10}: \begin{equation} W_{eq}(H{\alpha}) = RW \times [1 - 10^{-0.4 \times (H{\alpha}-H{\alpha}^c)}] \label{eq:ha_ex} \end{equation} where RW is the rectangular width of the filter (similar in definition to the equivalent width of the line), which depends on the characteristics of the filter \citep[for the adopted F658N filter, RW = 17.68; see Table~4 in][]{dema+10}. Finally, we considered as bona-fide candidate PMS stars those objects with $EW_{H{\alpha}} > 10$~\AA\ \citep{whibas03}, shown as solid blue circles in Figure~\ref{f:excess_ha}. The number of objects that show excess $H{\alpha}$ emission is very low and is roughly the same in both fields. In the bottom panels of Figure~\ref{f:excess_ha}, we overplotted these objects (shown as solid blue circles) on the F438W\,$-$\,F814W vs.\ F438W CMDs of the cluster and the control field; the majority of them are located in the fainter part of the CMD, where the photometric errors are larger and the contamination by background LMC stars is higher. In fact, the number of objects that show an $H{\alpha}$ excess is comparable within the errors in the two fields below the TO of the background stellar population (i.e. F438W $\ga$ 23 mag). Hence, the number and the location of these objects in the CMDs suggest that they can be considered spurious detections. As for the handful of objects at brighter magnitudes (i.e. F438W $\la$ 22 mag), where the photometric errors are smaller and the contamination is lower, we hypothesize that these can be stars in binary systems in which mass transfer between the primary and the secondary star is occurring. We exclude the possibility that these objects are true PMS stars because, in that case, we would have observed a significant number of such objects at fainter magnitudes (i.e.
lower masses and hence longer PMS lifetimes). In this context, it is worth noting that our magnitude detection limit for stars showing $H{\alpha}$ excess is around F438W $\simeq$ 26.5 mag, which, at the cluster age, corresponds to stars with masses of $\simeq$ 0.8 M$_{\odot}$. The PMS lifetime for such stars is of the order of $\sim$ 120 Myr, indicating that if star formation were occurring within the cluster, we should be able to observe a significant number of PMS objects with our method. Hence, these results indicate that the level of ongoing star formation activity is negligible in NGC~1856. In fact, the lack of low-mass stars showing an $H{\alpha}$ excess indicates that star formation in the cluster must have stopped at least $\sim$ 120 Myr ago, in agreement with the conclusions derived from the pseudo-age distribution analysis. \section{Constraints on the stellar rotation scenario for producing eMSTOs} \label{s:rotation} \begin{figure} \includegraphics[width=1\columnwidth]{fig11.eps} \caption{Comparison between the non-rotating isochrone (blue solid line), from \citet{mari+08}, and its rotating counterpart (red dashed line), derived as described in Section~\ref{s:rotation}. Isochrones are superposed on the cluster DRC CMD, along with the adopted age, distance modulus $(m-M)_0$ and visual extinction $A_V$.} \label{f:cmdrot} \end{figure} In the literature, some alternative explanations that do not invoke an extended star formation episode have been proposed to explain the eMSTO phenomenon in intermediate-age star clusters. In particular, the so-called ``stellar rotation'' scenario \citep{basdem09,yang+13} suggests that eMSTOs can be explained by a spread in rotation velocity among turn-off stars.
\citet{yang+13} calculated evolutionary tracks of non-rotating and rotating stars for three different initial stellar rotation periods (approximately 0.2, 0.3 and 0.4 times the Keplerian rotation of ZAMS stars), and for two different rotational mixing efficiencies (``normal'', $f_c = 0.03$, and ``enhanced'', $f_c = 0.20$). From the isochrones built from these tracks, they calculated the widths of the MSTO region caused by stellar rotation as a function of cluster age and translated them into age spreads. In particular, Figure~7 in \citet{yang+13} shows the equivalent width of the MSTO of star clusters caused by rotation as a function of the cluster age, for different initial stellar rotation periods and rotational mixing efficiencies. At the age of NGC~1856, all the Yang et al.\ models with ``normal'' mixing efficiency (i.e., $f_c = 0.03$) indicate that rotation does not cause any appreciable spread. On the other hand, for their models with ``enhanced'' mixing efficiency (i.e., $f_c = 0.20$), the predictions vary from no spread for the model with the longest initial stellar rotation period to a maximum of $\sim$ 100 Myr for the model with the shortest period (i.e., the model with the highest rotation velocity). In the latter case, their model predicts a ``negative'' spread, in the sense that the rotating model exhibits a bluer MSTO with respect to its non-rotating counterpart, thus mimicking the presence of a younger population \citep[see top left panel of Figure~6 in][]{yang+13}. To test the prediction of this rotating model in the specific case of NGC~1856, we used the 410 Myr isochrone from \citet{mari+08} as a template for non-rotating stars (i.e., the same isochrone as that plotted in Figure~\ref{f:cmdcorr}), and we derived its rotating counterpart.
To do so, we calculated the shifts in colour and magnitude between the rotating and non-rotating isochrones in Figure~6 of \citet{yang+13}; we then transformed these shifts into our photometric system and applied them to the \citet{mari+08} isochrone. Figure~\ref{f:cmdrot} shows the DRC CMD of NGC~1856 with superimposed non-rotating and rotating isochrones (blue solid and red dashed line, respectively). For the rotating isochrone, we plotted only the MS part, since the post-MS phases of stellar evolution are not addressed by the \citet{yang+13} models. Overall, the rotating model plotted in Figure~\ref{f:cmdrot} seems to reproduce the observed CMD reasonably well in the upper portion of the MS, whereas the bottom portion of the MS is not as well reproduced, the rotating model being somewhat too red with respect to the observed MS. The latter is due to the fact that the rotating model bends toward red colors (lower temperatures) for stellar masses ${\cal{M}} \la$ 2.0 M$_{\odot}$. In conclusion, the rotating model of \citet{yang+13} that involves ``enhanced'' rotational mixing efficiency seems to provide a satisfactory fit to the observed MS of NGC~1856, indicating that a spread in rotation velocity, besides an age spread, could be a possible cause of the observed broadening of the MSTO region. However, we note that the predictions of the \citet{yang+13} models are \emph{not} consistent with the observations for intermediate-age ($\sim$ \,1--\,2 Gyr) star clusters featuring eMSTOs. This is \emph{especially the case for the rotating models involving enhanced rotational mixing efficiency} (see discussion in G14, in particular their Figure~7). It should, however, also be recognized that the construction of theoretical stellar tracks and isochrones for rotating stars at various stages of stellar evolution, rotation rates, and ages is still at a relatively early stage.
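The shift-transfer step described above amounts to a simple interpolation of the colour and magnitude offsets at each isochrone point. A minimal sketch follows; the arrays are illustrative placeholders, not the actual \citet{yang+13} shifts or the \citet{mari+08} isochrone.

```python
import numpy as np

def apply_rotation_shifts(iso_mag, iso_col, shift_mag_grid, dmag, dcol):
    """Turn a non-rotating isochrone (iso_mag, iso_col) into its rotating
    counterpart by interpolating magnitude-dependent shifts (dmag, dcol),
    tabulated on shift_mag_grid, at each isochrone point.  All quantities
    are assumed to already be in the target photometric system."""
    dm = np.interp(iso_mag, shift_mag_grid, dmag)
    dc = np.interp(iso_mag, shift_mag_grid, dcol)
    return iso_mag + dm, iso_col + dc
```

In practice the shifts would first be transformed from the Yang et al.\ bands into the WFC3 system before being applied.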
To date, no stellar rotation velocity measurements have yet been undertaken in young and intermediate-age star clusters in the Magellanic Clouds. We strongly encourage such studies in the future in order to provide fundamental improvements of our understanding of the possible relation between the eMSTO phenomenon and the effects of stellar rotation. \section{Insights from Dynamical Analysis} \label{s:dynamics} As mentioned in the Introduction, a peculiar characteristic of eMSTOs in intermediate-age star clusters in the Magellanic Clouds is that they are not hosted by all such star clusters. To explain this phenomenon in the context of multiple stellar populations, G11a proposed a scenario in which eMSTOs can only be hosted by clusters for which the escape velocity of the cluster is higher than the wind velocities of ``polluter'' stars thought to provide the material out of which the second stellar generation is formed, at the time such stars were present in the cluster (we refer to this as ``the escape velocity threshold'' scenario). This scenario was developed further by G14 who studied HST photometry of a sample of 18 intermediate-age (1\,--\,2 Gyr) star clusters in the Magellanic Clouds that covered a variety of masses and core radii. They found that all the clusters showing an eMSTO feature escape velocities $V_{\rm esc} \ga$ 15 \kms\ out to ages of at least 100 Myr. This age is equivalent to the lifetime of stars of $\approx 5\;M_{\odot}$ \citep[e.g.,][]{mari+08}, and hence old enough for the slow winds of massive binary stars and IM-AGB stars of the first generation to produce significant amounts of ``polluted'' material out of which second-generation stars may be formed. 
Furthermore, C14 showed that this threshold on $V_{\rm esc}$ is consistent with HST observations of four low-mass intermediate-age star clusters ($\approx 10^4\; M_{\odot}$): the two clusters that host eMSTOs have $V_{\rm esc} \ga$ 15 \kms\ out to ages of $\sim$\,100 Myr, whereas $V_{\rm esc} \la$ 12 \kms\ at all ages for the two clusters \emph{not} exhibiting eMSTOs. Hence, the critical escape velocity for a star cluster to be able to retain the material ejected by first-generation polluter stars seems to be in the range of 12\,--\,15 \kms. This threshold is consistent with wind velocities of IM-AGB stars and massive binary stars (see G14 for a detailed discussion). Briefly, IM-AGB stars show wind velocities in the range 12\,--\,18 \kms \citep{vaswoo93,zijl+96}, whereas observed velocities of ejecta of massive binary stars are in the range 15\,--\,50\footnote{We note that this measurement was derived from one system, RY Scuti, which represents a specific case of stable mass transfer, whereas most of the mass is ejected during the unstable phase. Lower velocities are expected during unstable mass transfer or the ejection of a common envelope.} \kms \citep{smit+02,smit+07}. With this in mind, following the results presented in the previous sections, where the analysis of pseudo-age distributions seems to suggest the presence of multiple stellar populations in NGC~1856, we determined the structural parameters and the dynamical properties of the cluster from our new HST/WFC3 data. \subsection{Structural parameters} \label{s:kingmodel} \begin{figure} \hspace*{-0.5cm} \includegraphics[width=0.7\columnwidth,angle=270]{fig12.ps} \caption{Radial surface number density profile of NGC~1856. Black points represent observed values. The dashed line represents the best-fit King model (cf. equation~\ref{eq:King}) whose parameters are shown in the legend. Ellipticity and effective radius $r_e$ are also shown in the legend.
The radius values have been converted to parsec by adopting the appropriate distance modulus.} \label{f:king} \end{figure} We determined the radial surface number density distribution of stars, following the procedure described in \citet{goud+09}. Briefly, we first determined the cluster centre to be at reference coordinate ($x_c$, $y_c$) = (2986, 1767) with an uncertainty of $\pm$ 5 pixels in either axis. To derive it, we first created a two-dimensional histogram of the pixel coordinates adopting a bin size of 50 $\times$ 50 pixels and then calculated the centre using a two-dimensional Gaussian fit to an image constructed from the surface number density values in the aforementioned two-dimensional histogram. This method avoids possible biases related to the presence of bright stars near the centre. We derived the cluster ellipticity $\epsilon$ by running the task {\it ellipse} within IRAF/STSDAS\footnote{STSDAS is a product of the Space Telescope Science Institute, which is operated by AURA for NASA.} on the surface number density images. Finally, we derived radial surface number densities by dividing the area sampled by the images into a series of elliptical annuli, centered on the cluster, and accounting for the spatial and photometric completeness in each annulus. \begin{figure} \hspace*{+0.5cm} \includegraphics[width=0.75\columnwidth]{fig13.ps} \caption{Cumulative completeness-corrected radial distribution of bright versus faint stars. Red solid circles and solid line represent stars with F438W $<$ 20.5 mag while black open circles and dotted line represent stars with 21.0 $<$ F438W $<$ 23.5 mag.} \label{f:mass_seg} \end{figure} We only considered stars brighter than the 30\% completeness limit in the core region, corresponding to F438W $\leq$ 23.5 mag. The outermost data point is derived from the ACS parallel observations in a field located $\simeq$\,5\farcm5 from the cluster centre.
The radial surface number density profile was fitted using a \citet{king62} model combined with a constant background level, described by the following equation: \begin{equation} n(r) = n_0 \: \left( \frac{1}{\sqrt{1 + (r/r_c)^2}} - \frac{1}{\sqrt{1+c^2}} \right)^2 \; + \; {\rm bkg} \label{eq:King} \end{equation} where $n_0$ is the central surface number density, $r_c$ is the core radius, $c \equiv r_t/r_c$ is the King concentration index ($r_t$ being the tidal radius), and $r$ is the geometric mean radius of the ellipse ($r = a\,\sqrt{1-\epsilon}$, where $a$ is the semi-major axis of the ellipse). In Figure~\ref{f:king}, we show the best-fit King model, obtained using a $\chi^2$ minimization routine. We also report the derived number density values along with other relevant parameters. Our derived core radius of $r_c$ = 3.18 ($\pm$ 0.12) pc is significantly larger than the literature value for NGC~1856 \citep[$r_c$ = 1.14 pc,][]{mclvan05}. To reconcile this difference, we note that our King-model fit was done using completeness-corrected surface number densities, whereas \citet{mclvan05} used surface brightness data to derive the structural parameters of the cluster. The latter method is sensitive to the presence of mass segregation in the sense that mass-segregated clusters will appear to have smaller radii when using surface brightness data than when using plain surface number densities. To check whether NGC~1856 is actually mass segregated, we derived the cumulative completeness-corrected radial distribution of bright and faint stars in the WFC3 image. ``Bright'' stars are selected by the magnitude cut F438W $<$ 20.5 mag, whereas ``faint'' stars are selected in the magnitude range 21.0 $<$ F438W $<$ 23.5 mag. The obtained cumulative radial distributions are shown in Figure~\ref{f:mass_seg}. Note that the bright stars are clearly more centrally concentrated than the faint ones, confirming that a significant degree of mass segregation is present in the cluster.
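As a concrete illustration of the $\chi^2$ fit of equation~\ref{eq:King}, the sketch below fits synthetic, noise-free number densities with a standard least-squares routine; the parameter values are made up for the example and are not the NGC~1856 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, n0, rc, c, bkg):
    """King (1962) surface number density plus a constant background
    (equation eq:King in the text); c = r_t / r_c is the concentration index."""
    return n0 * (1.0 / np.sqrt(1.0 + (r / rc) ** 2)
                 - 1.0 / np.sqrt(1.0 + c ** 2)) ** 2 + bkg

# Synthetic profile with illustrative parameters (n0, rc, c, bkg);
# radii stay inside the tidal radius r_t = c * rc.
r = np.linspace(0.1, 30.0, 60)
truth = (100.0, 3.18, 10.0, 1.0)
n_obs = king_profile(r, *truth)

# chi^2 minimization via Levenberg-Marquardt least squares.
popt, pcov = curve_fit(king_profile, r, n_obs, p0=(50.0, 2.0, 5.0, 0.5))
```

In the actual fit one would pass the measurement uncertainties through the `sigma` argument of `curve_fit` so that the minimized quantity is the proper $\chi^2$.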
\subsection{Dynamical evolution and cluster escape velocity} \label{s:dynevol} \begin{table*} \begin{center} \begin{tabular}{cccccccc} \hline \hline V & Aper. & Aper. corr. & [Z/H] & $A_V$ & Age & $r_c$ & $r_e$ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline 10.06 $\pm$ 0.15 & 31 & 0.38 & $-$0.30 & 0.47 & 300 & 3.18 $\pm$ 0.12 & 8.00 $\pm$ 0.90 \\ \hline \end{tabular} \caption{Physical properties of the star cluster. Columns (1): Integrated-light $V$ magnitude from \citet{bica+96}. (2): Aperture radius in arcsec used for the integrated-light measurement. (3): Aperture correction in mag. (4): Metallicity in dex. (5): Visual extinction in magnitude. (6): Mean age in Myr. (7): Core radius in pc. (8): Effective radius in pc.} \label{t:parameters} \end{center} \end{table*} \begin{table*} \begin{center} \begin{tabular}{|ccc|cccc} \hline \hline \multicolumn{3}{|c|}{log (${\cal{M}}_{\rm cl}/M_{\odot}$)} & \multicolumn{4}{|c}{$V_{\rm esc}$ (\kms)}\\ Current & 10$^7$ yr w/o m.s. & 10$^7$ yr with m.s. & Current & 10$^7$ yr w/o m.s. & 10$^7$ yr with m.s. & 10$^7$ yr ``plausible''\\ (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ \hline 5.07 $\pm$ 0.07 & 5.15 $\pm$ 0.10 & 5.37 $\pm$ 0.10 & 11.4 $\pm$ 0.7 & 13.7 $\pm$ 0.7 & 20.0 $\pm$ 0.7 & 17.1 $\pm$ 0.8 \\ \hline \end{tabular} \caption{Dynamical properties of the star cluster. Columns (1): Logarithm of the adopted current cluster mass. (2-3): Logarithm of the adopted cluster mass at an age of 10$^7$ yr without(with) the inclusion of initial mass segregation. (4): Current cluster escape velocity at the core radius. (5-6): Cluster escape velocity at the core radius at an age of 10$^7$ yr without(with) the inclusion of initial mass segregation. 
(7): ``Plausible'' cluster escape velocity at an age of 10$^7$ yr.} \label{t:dynamics} \end{center} \end{table*} We estimated the cluster mass and escape velocity as a function of time, going back to an age of 10 Myr, after the cluster has survived the era of violent relaxation and when the most massive stars of the first generation, proposed to be candidate polluters in the literature (i.e., FRMS and massive binary stars), are expected to start losing significant amounts of mass through slow winds. The current mass of NGC~1856 was determined from its integrated-light \emph{V}-band magnitude listed in Table~\ref{t:parameters}. We determined the aperture correction for this magnitude from the best-fit King model derived in Sect.\ \ref{s:kingmodel} by calculating the fraction of total cluster light encompassed by the measurement aperture. After correcting the integrated-light \emph{V} magnitude for the missing cluster light beyond the measurement aperture, we calculated the total cluster mass adopting the values of $A_V, \mbox{[Z/H]}$, and age listed in Table~\ref{t:parameters}. This was done by interpolation between the ${\cal{M}}/L_V$ values in the SSP model tables of \citet{bc03}, assuming a \citet{salp55} initial mass function. The latter models were recently found to provide the best fit (among popular SSP models) to observed integrated-light photometry of LMC clusters with ages and metallicities measured from CMDs and spectroscopy of individual stars in the 0.2\,--\,1 Gyr age range \citep{pess+08}. We calculated the dynamical evolution of the star cluster following the prescriptions of G14. Briefly, the evolution of cluster mass and radius was evaluated with and without initial mass segregation, given the fundamental role that the latter property plays in terms of the early evolution of the cluster's expansion and mass loss rate \citep[e.g.,][]{mack+08b,vesp+09}.
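The chain from integrated light to mass (aperture correction, extinction, distance modulus, then an ${\cal{M}}/L_V$ scaling) can be sketched as follows. The distance modulus and ${\cal{M}}/L_V$ value used in the usage note are illustrative assumptions, not the interpolated \citet{bc03} numbers.

```python
import math

def cluster_mass(v_app, aper_corr, a_v, dist_mod, ml_v, mv_sun=4.83):
    """Total cluster mass (in M_sun) from its integrated-light V magnitude:
    correct for light missed by the aperture and for extinction, convert to
    an absolute magnitude and a luminosity in solar units, then scale by the
    adopted mass-to-light ratio M/L_V."""
    v0 = v_app - aper_corr - a_v      # corrected apparent V magnitude
    m_abs = v0 - dist_mod             # absolute V magnitude
    l_v = 10.0 ** (-0.4 * (m_abs - mv_sun))  # luminosity in L_sun,V
    return ml_v * l_v
```

With the Table~\ref{t:parameters} photometry ($V = 10.06$, aperture correction 0.38 mag, $A_V = 0.47$), an assumed LMC distance modulus of 18.3, and an illustrative ${\cal{M}}/L_V \approx 0.3$, this yields $\log({\cal{M}}/M_{\odot}) \approx 5.1$, of the order of the value in Table~\ref{t:dynamics}.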
For the case of a model cluster with initial mass segregation, we adopted the results of the simulation called SG-R1 in \citet{derc+08}, which involves a tidally limited model cluster that features a level of initial mass segregation of $r_e/r_{e,>1}$ = 1.5, where $r_{e,>1}$ is the effective radius of the cluster for stars with ${\cal{M}} >$ 1 M$_{\odot}$ (see G14 for a detailed description of the reasons for this choice). Table~\ref{t:dynamics} lists the derived masses and escape velocities at an age of 10 Myr, obtained with and without initial mass segregation. Escape velocities are calculated from the reduced gravitational potential $V_{\rm esc} (r,t) = (2\Phi_{\rm tid} (t) - 2\Phi (r,t))^{1/2}$, at the core radius. Here $\Phi_{\rm tid}$ is the potential at the tidal (truncation) radius of the cluster. The choice to calculate the escape velocity at the cluster core radius is related to the prediction of the ``in situ'' scenario, where the second-generation stars are formed in the innermost region of the cluster \citep{derc+08}. For convenience, we define $V_{\rm esc, 7} (r) \equiv V_{\rm esc}\,(r, t = 10^7 {\rm yr})$, and refer to it as ``early escape velocity''. To estimate ``plausible'' values for $V_{\rm esc,\,7}$ we used a procedure that involves various results from the compilation of Magellanic Cloud star cluster properties and N-body simulations by \citet{mack+08b}. Briefly, they showed that the maximum core radius seen among a large sample of Magellanic Cloud star clusters increases approximately linearly with log(age) up to an age of $\sim$ 1.5 Gyr, namely from $\simeq$ 2.0 pc at $\simeq$ 10 Myr to $\simeq$ 5.5 pc at $\simeq$ 1.5 Gyr. Conversely, the {\it minimum} core radius is $\sim$ 1.5 pc throughout the age range 10 Myr\,--\,2 Gyr. 
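The escape-velocity definition $V_{\rm esc}(r) = (2\Phi_{\rm tid} - 2\Phi(r))^{1/2}$ used above can be illustrated with a toy potential. Here a Plummer sphere stands in for the actual cluster model, and the scale and truncation radii are illustrative assumptions, not fitted values.

```python
import math

G = 0.004301  # gravitational constant in pc (km/s)^2 / M_sun

def plummer_potential(r, mass, a):
    """Plummer potential in (km/s)^2 (an illustrative stand-in for the
    cluster's actual potential); a is the scale radius in pc."""
    return -G * mass / math.sqrt(r * r + a * a)

def v_esc(r, mass, a, r_tid):
    """Escape velocity sqrt(2*Phi_tid - 2*Phi(r)) in km/s, i.e. the speed
    needed at radius r to reach the tidal (truncation) radius r_tid."""
    phi_tid = plummer_potential(r_tid, mass, a)
    phi_r = plummer_potential(r, mass, a)
    return math.sqrt(2.0 * (phi_tid - phi_r))
```

For instance, with a mass of $1.2 \times 10^5\;M_{\odot}$, an assumed 6 pc scale radius and a 50 pc truncation radius, the escape velocity at the core radius (3.18 pc) comes out near 11 \kms, of the same order as the current value in Table~\ref{t:dynamics}.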
Using N-body modeling, \citet{mack+08b} showed that this behavior is consistent with adiabatic expansion of the cluster core in clusters with different levels of initial mass segregation, in that clusters with the highest level of mass segregation experience the strongest core expansion (see G14 for a more detailed discussion). Under this assumption, we derive that the ``plausible'' early escape velocity for NGC~1856 is $V_{\rm esc} = 17.1 \pm 0.8$ \kms, high enough to retain material shed by the slow winds of the polluter stars. Figure~\ref{f:escvel} shows the escape velocity of NGC~1856 as a function of age, derived based on the assumed level of mass segregation presented above. The critical escape velocity range of 12\,--\,15 \kms\ derived by C14 and G14 is depicted as the light grey region in Figure~\ref{f:escvel}. The region below 12 \kms, representing, as stated above, the velocity range in which eMSTOs are not observed in intermediate-age star clusters (C14), is shown in dark grey. We note that $V_{\rm esc}$ for NGC~1856 is $\ga 14-15$ \kms\ out to an age of $\approx$\,100 Myr. We recall that this is equivalent to the lifetime of stars of $\approx 5\;M_{\odot}$ (i.e., IM-AGB stars, see \citealt{vendan09}) so that the slow winds of massive binary stars and IM-AGB stars of the first generation should have produced significant amounts of ``polluted'' material within that time. As mentioned above, this situation is similar to that shared among eMSTO clusters of intermediate age in the Magellanic Clouds (G14). It thus seems plausible that a significant fraction of that material may have been retained within the potential well of NGC~1856, allowing second-generation stars to be formed. Finally, we note that the mass and escape velocities of NGC~1856 were likely high enough for the cluster to be able to accrete a significant amount of ``pristine'' gas from its surroundings in the first several tens of Myr after its birth.
This would have constituted an additional source of gas to form second-generation stars (\citealt{conspe11}; G14; but see also \citealt{basstra14}). \begin{figure} \includegraphics[width=0.8\columnwidth]{fig14.eps} \caption{Escape velocity $V_{\rm esc}$ as a function of time. The light grey region represents the critical range of $V_{\rm esc}$ mentioned in Sect.\ \ref{s:dynamics}, i.e., 12\,--\,15 \kms. The region below 12 \kms, in which $V_{\rm esc}$ is thought to be too low to permit retention of material shed by the first stellar generation, is shown in dark grey. $\pm 1 \sigma$ errors of $V_{\rm esc}$ are shown by dashed lines.} \label{f:escvel} \end{figure} \section{Summary and Conclusion} \label{s:conclusion} We presented results obtained from a study of new, deep HST/WFC3 images of the young (age $\simeq$ 300 Myr) massive star cluster NGC~1856 in the LMC. After correction for differential reddening, we compared the CMD of the cluster with Monte Carlo simulations, both of one ``best-fit'' SSP and of two SSPs of different ages, in order to investigate the MSTO morphology and to quantify the intrinsic width of the MSTO region of the cluster. We studied the physical and dynamical properties of the cluster, deriving its radial surface number density distribution and determining the evolution of cluster mass and escape velocity from an age of 10 Myr to 1 Gyr, considering the possible effects of initial mass segregation. The main results of the paper can be summarized as follows: \begin{itemize} \item NGC~1856 shows a broad MSTO region whose width cannot be explained by photometric uncertainties, LMC field star contamination, or differential reddening effects. Comparison with Monte Carlo simulations of a SSP shows that the observed pseudo-age distribution is significantly wider than that derived from a single-age simulation. Conversely, combining two SSPs with different ages, we obtain a set of pseudo-age distributions that reproduce the observed one quite well.
By considering the luminosity function of the RC feature, we conclude that a best fit is achieved with a combination of SSPs with ages of 300 Myr and 380 Myr and a mass fraction for the younger component of $\approx$\,55\%. The observed pseudo-age distribution shows two distinct peaks with a similar maximum value and a uniform decline towards younger and older ages with respect to the peaks. However, the small separation in age between the two peaks prevents us from concluding whether the morphology of the MSTO can be better explained with a smooth spread in age or by two discrete bursts of star formation. These results do not agree with those of \citet{bassil13}, who conclude that the CMD of NGC~1856 is consistent with a single age to within 35 Myr. We expect that this difference is due to the fact that our data are significantly deeper, namely by $\sim$ 6 mag. Consequently, we suggest that the arguments of \citet{bassil13} against the ``age spread'' scenario should be considered with caution. \item We use $V$, $I$, and H$\alpha$ images to select and study candidate (``putative'') pre-MS stars in NGC~1856. The numbers of ``putative'' pre-MS stars in the cluster field and the control (background) field are found to be similar, suggesting that the detections can be considered spurious and/or associated with residual noise. This indicates that star formation is not \emph{currently} ongoing in the cluster. \item The ``stellar rotation'' scenario for the nature of the eMSTO phenomenon has been tested by comparing rotating and non-rotating isochrones of the same age with the DRC CMD. Overall, a reasonable range of rotation velocities seems to be able to reproduce the MSTO properties quite well, \emph{albeit only if the rotational mixing efficiency is significantly higher than typically assumed values}. However, several properties of MSTOs and RCs among eMSTO star clusters in the age range of 1\,--\,2 Gyr are inconsistent with such high rotational mixing efficiencies.
This seems to indicate that a range of stellar ages provides a more likely explanation of the eMSTO phenomenon (including the wide MSTO of NGC~1856) than does a range of stellar rotation velocities, although, with the current data alone, the latter hypothesis cannot formally be discarded. In this context, new stellar rotation measurements in young and intermediate-age star clusters, combined with new models of theoretical tracks and isochrones for rotating stars, will be of fundamental importance to address the role of stellar rotation in the eMSTO phenomenon. \item The dynamical properties of NGC 1856 derived from our data suggest that the cluster has an early escape velocity of $\approx$\,17 \kms, high enough to permit the retention of material shed by the slow winds of polluter stars (IM-AGB stars and massive binary stars). The cluster escape velocity remains above the threshold value of 14\,--\,15 \kms\ for $\approx$ 80\,--\,100 Myr, long enough for the slow winds of IM-AGB stars to have ejected their envelopes. This material would likely have been available for second-generation star formation, which could have caused the wide MSTO. Furthermore, these early escape velocities are consistent with observed wind velocities of ejecta of IM-AGB stars and with those seen in massive binary star systems. \end{itemize} \section*{Acknowledgments} Support for this project was provided by NASA through grant HST-GO-13011 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5--26555. We made significant use of the SAO/NASA Astrophysics Data System during this project. THP acknowledges support through FONDECYT Regular Project Grant No. 1121005 and BASAL Center for Astrophysics and Associated Technologies (PFB-06).
\section{Introduction} Recent years have seen progress in the understanding of the fundamental notions of computation. Classical algorithms were axiomatized by Gurevich~\cite{ASM-Theorem-Gurevich}, who also showed that a simple, generic model of computation, called \emph{abstract state machines (ASMs)}, suffices to emulate state-for-state and step-for-step any ordinary (non-interactive, sequential) algorithm. In~\cite{Exact}, it was shown that the emulation can be made precise in that it does not access locations in states that the original algorithm does not. In \cite{CT_ASM,BSL}, it was shown that any algorithm that satisfies an additional effectiveness axiom---regardless of its program constructs and data structures---can be simulated by what we call an \emph{effective ASM (EASM)}, which is an ASM whose atomic actions are effective constructor and destructor operations. Moreover, such effective algorithms over arbitrary domains can be efficiently simulated by a random access machine (RAM), as shown in \cite{ECTT,invariance}. In this way, the gap between the informal and formal notions of computation has been reduced, and the classical Church-Turing thesis---that Turing machines entail all manner of effective computation---and its extended version---claiming that ``reasonable'' effective models have comparable computational complexity---both sit on firmer foundations. At the same time, von Neumann's cellular model~\cite{cell} has been enhanced to encompass more flexible forms of computation than were covered by the original model. In particular, the topology of cells can be allowed to change during the evolution of an interconnected device, in what has been called ``causal graph dynamics''~\cite{causal}. Cellular automata have the advantage of better reflecting the laws of physics that a real computing machine must comply with. 
They respect the ``homogeneity'' of space-time in that processor cells and memory cells are uniform in nature, in contradistinction with Turing machines, RAMs, or ASMs, whose controls are centralized. This cellular approach can help us better understand under what conditions the physical Church-Turing thesis~\cite{Gandy}, stating that no physically plausible device can compute more functions than a Turing machine can, might hold~\cite{PCTT}. In what follows, we show that any algorithm can be simulated by a dynamic cellular automaton, thus showing that a homogeneous, physically plausible model can implement all algorithmic computations. We begin, in the next section, with basic information about cellular automata and abstract state machines. It is followed by a description of the simulation and a brief discussion. \section{Background} \subsection{Cellular Automata} Classical cellular automata are defined as a static tessellate of cells. Initially, each cell is in one of a set of predefined internal states, conventionally identified with colors, {of which we will have only finitely many.} Sitting somewhere to the side is a clock, and every time it ticks, the colors of the cells change. Each cell looks at the colors of its nearby cells and at its own color and then applies a \emph{transition} rule, specified in advance, to determine the new color it takes on for the next clock tick. Transitions are simple finite-state automata rules. In this model, all cells change at the same time and their transition rules are all the same. The underlying topology may take different shapes and have different dimensions. The definition of neighborhood may vary from one automaton to another. On a two-dimensional grid, the neighbors may be the four cells in the cardinal directions (called the ``von Neumann neighborhood''), or they may also include the four corner cells (the ``Moore neighborhood''), or perhaps a block or diamond of larger size.
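The synchronous update just described can be sketched for a two-dimensional grid with the von Neumann neighborhood; periodic (wrap-around) boundaries are an assumption made here for brevity.

```python
import numpy as np

def ca_step(grid, rule):
    """One synchronous step of a 2-D cellular automaton: at a clock tick,
    every cell applies rule(centre, neighbours) simultaneously, where the
    neighbours are the four von Neumann cells (wrap-around boundaries)."""
    n = np.roll(grid, 1, axis=0)   # cell above
    s = np.roll(grid, -1, axis=0)  # cell below
    w = np.roll(grid, 1, axis=1)   # cell to the left
    e = np.roll(grid, -1, axis=1)  # cell to the right
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = rule(grid[i, j], (n[i, j], s[i, j], w[i, j], e[i, j]))
    return out
```

For example, the parity rule `lambda c, nb: sum(nb) % 2` applied to a single seed cell spreads it to its four neighbours in one tick.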
In principle, any fixed group of cells of any arbitrary shape can be examined to determine which transition applies. A \emph{sequential} automaton is the special case when one cell is active and only that cell can perform a transition step. In addition, the transition marks one of the active cell's neighbors (or itself) to be active for the following step. To model reality better, one should consider the possibility that the connections between cells also evolve over time. For \emph{dynamic cellular automata}~\cite{causal}, cells are organized in a directed graph. Similar to the above classical case, each cell is colored in one of a palette of predefined colors. Edges also have colors, to indicate the type of connection between cells, adding flexibility. Transitions are governed by global clock ticks. In the sequential case, one cell is marked active. This cell inspects its neighborhood and applies a transition rule. The difference between the static and dynamic cases is that in the static case, the transition is governed by different colorings of the cells in a fixed neighborhood, while in the dynamic case, it is governed by a set of different neighborhood patterns, each with various colored cells connected by colored edges. In both cases, a transition rule defines a transformation of the cells in a detected pattern: in the static case colors change, while in the dynamic case, connections may also change and new cells may be added. With each clock tick, the active cell inspects its neighborhood to detect one of those predefined patterns. Then the transition rule is applied according to the detected pattern. (Cells never die in this model, but they may become disconnected from every other cell.) Examples of such transitions are shown in Figure~\ref{examples}. \begin{figure} \[\includegraphics[scale = 0.4]{pics/transition_example.pdf}\] \caption{Examples of transition rules.}\label{examples} \end{figure} \begin{ignore} (We only require that cells not be removed.
This restriction is redundant for the sequential case we describe here, but is important for future generalizations to the parallel, distributed, and continuous cases. We introduce it in the sequential part as well, to allow the sequential case to be a special case of the extended model. A node may actually be removed by making it inaccessible from other nodes, by removing incoming edges.) \end{ignore} Note that there might be several transition patterns in the neighborhood of an active cell. For example, given that an active cell detects a pattern of the second type in the example in the figure, it might choose to act according to the first rule instead. If a neighborhood of the active cell contains pattern $p$, while some subset of its cells also constitutes a transition pattern $p'$, we can demand that no transition be applied using $p'$. We call this restriction \emph{maximality}. (Intuition may be purchased from the following scenario. Assume that your neighbors make a lot of noise from time to time. If at a given time point you have only one noisy neighbor, you might decide to stay put in peace. But if there are two of them, you would want to call the police. What's worse, if you have three or more rowdy neighbors, you might also need an ambulance. If there is some noise around, a transition might be applied erroneously, as if there were only one noisy neighbor, which is not the natural intent.) So, we want the more specific rules to take precedence over the less constrained ones.% \footnote{{An alternative would be to supply a (partial) order according to which transition rules are tried.}} In our example, if the second pattern is applicable, then the first one is not applied. All the same, patterns may overlap, so transitions remain non-deterministic.
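The maximality restriction can be made concrete with a small sketch in which neighborhoods and patterns are reduced to multisets of colors; this deliberately ignores the edge colors and connectivity of the graph patterns above, keeping only the containment relation that maximality is about.

```python
from collections import Counter

def contains(big, small):
    """True if multiset `small` is contained in multiset `big`."""
    return all(big[k] >= v for k, v in small.items())

def applicable_rules(neighborhood, rules):
    """Return the actions of rules whose pattern occurs in the neighborhood,
    keeping only *maximal* ones: a rule is dropped whenever another
    applicable rule's pattern strictly contains its own (so, as in the
    noisy-neighbours example, two noisy neighbours trigger the two-neighbour
    rule, not the one-neighbour rule)."""
    nb = Counter(neighborhood)
    hits = [(Counter(p), act) for p, act in rules if contains(nb, Counter(p))]
    return [act for pat, act in hits
            if not any(pat != q and contains(q, pat) for q, _ in hits)]
```

Overlapping patterns of incomparable specificity can still both survive, which is why transitions remain non-deterministic.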
For example, consider the following neighborhood: \[\includegraphics[scale = 0.4]{pics/non-deterministic_transition_example.pdf}\] In general, these choices can affect the final result, but the simulation we describe has the same outcome regardless. \subsection{Classical Algorithms} \label{sec:alg} Gurevich~\cite{ASM-Theorem-Gurevich} has axiomatized generic algorithms as follows (see also the exposition in~\cite{Generic}): \begin{postulate}[Axiom of Algorithmicity~\cite{ASM-Theorem-Gurevich}]\ \begin{itemize} \item[\rm (I)] An \emph{algorithm} is posited to be a state-transition system comprising a set (or class) of \emph{states} and a partial \emph{transition} function from state to \emph{next} state. \item[\rm (II)] States may be seen as (first-order) logical structures over some (finite) vocabulary, closed under isomorphism (of structures). Transitions preserve the domain (universe) of states, and, furthermore, isomorphic states are either both terminal (have no transition) or else their next states are isomorphic (via the same isomorphism). \item[\rm (III)] Transitions are governed by a finite, input-independent set of \emph{critical} (ground) terms over the vocabulary such that, whenever two states assign the same values to those terms, either both are terminal or else whatever changes (in the interpretations of operators) a transition makes to one, it also makes to the other. \end{itemize} \end{postulate} States being structures, they include, not only assignments for programming ``variables'', but also the ``graph'' of the operations that the algorithm can apply. We may view a state over ${F}$ with domain (universe) $D$ as storing a (presumably infinite) set of \emph{location-value} pairs $f(a_{1},\ldots,a_{n})\mapsto b$, for all $f\in{F}$ and $a_{1},\ldots,a_{n}\in D$ and some $b\in D$. So, by ``changes'', we mean $\tau(x)\setminus x$, where $\tau$ is the transition function, which gives the set of changed location-value pairs. 
We treat relations as truth-valued functions and consider all states to be potential input states. Since in this study we are only interested in classical deterministic algorithms, transitions are functional. We use the adjective {}``classical'' to clarify that, in the current study, we are leaving aside new-fangled forms of algorithm, such as probabilistic, parallel, or interactive algorithms. For detailed support of this axiomatic characterization of algorithms and relevant citations from the founders of computability theory, see~\cite{ASM-Theorem-Gurevich,BSL}. Item I is meant to exclude ``hypercomputational'' formalisms, such as~\cite{Gold,Putnam}, in which the result of a computation---or the continuation of a computation---may depend on (the limit of) an infinite sequence of preceding (finite or infinitesimal) steps. Likewise, processes in which states evolve continuously (as in analog processes, like the position of a bouncing ball), rather than discretely, are eschewed. It is this restriction to deterministic algorithms that makes transitions a partial function on states. States as structures make it possible to consider all data structures sans encodings. In this sense, algorithms are generic. Item II precludes states with infinitary operations, like the supremum of infinitely many objects, which would not make sense from an algorithmic point of view. The structures are ``first-order'' in syntax, though domains may include sequences, or sets, or other higher-order objects, in which case, the state would provide operations for dealing with those objects. The identification of states with structures is justified by the vast experience of mathematicians and scientists who have faithfully and transparently presented every kind of static mathematical or scientific reality as a logical structure. In restricting structures to be ``first-order'', we are limiting the \emph{syntax} to be first-order.
Closure under isomorphism ensures that the algorithm can operate on the chosen level of abstraction and that states' internal representation of data is invisible and immaterial to the algorithm. The same algorithm will work equally with different representations, as for example, testing primality of numbers whether given as decimal digits, as Roman numerals, or in Neolithic tally notation. This means that the behavior of an \textit{algorithm}---in contradistinction to its ``implementation'' as a C program---cannot, for example, depend on the memory address of some variable. If an algorithm does depend on such matters, then its full description must also include specifics of memory allocation. The intuition behind the third item in the postulate is that it must be possible to describe the effect of transitions in terms of the information in the current state. Unless all states undergo the same updates unconditionally, an algorithm must explore one or more values at some accessible locations in the current state before determining how to proceed. The only means that an algorithm has with which to reference locations is via terms, since the values themselves are abstract entities. If every referenced location has the same value in two states, then the behavior of the algorithm must be the same for both of those states. This postulate---with its fixed, finite set of critical terms---precludes programs of infinite size (like an infinite table lookup) or programs that are input-dependent. \subsection{Generic Programs} It has been shown in~\cite{ASM-Theorem-Gurevich} that every algorithm, in the sense formalized above, can be emulated step-by-step, state-by-state by a particular form of algorithm: \begin{definition}[ASM Program \cite{Lipari}]\label{def:asm} \emph{ASM programs}, over some vocabulary $F$, are composed of assignments and conditionals.
\begin{itemize} \item A generalized \emph{assignment} statement $$f(s^{1},\ldots,s^\ell):=u$$ involves terms $u,s^{1},\ldots,s^\ell$ over ${F}$. Applying it to a state $X$ changes the interpretation that the state gives to $f$ at the point $(s_{X}^{1},\ldots,s_{X}^\ell)$ to be $u_{X}$, where $t_X$ denotes the value that state $X$ gives to term $t$. \begin{ignore} The result is an algebra $X'$ such that $t_{X'}=t[f(s^{1},\ldots,s^\ell)\mapsto u]_{X}$ for any term $t$ over ${F}$, where $t[s\mapsto u]$ denotes the term obtained from $t$ by simultaneous replacement of all occurrences of the subterm $s$ in $t$ by $u$. \end{ignore} \item Program statements may be prefaced by a \emph{conditional} test, $$\mbox{\bf if}\; c\;\mbox{\bf then}\; p \mbox{\rm ~~~~or~~~~} \mbox{\bf if}\; c\;\mbox{\bf then}\; p\;\mbox{\bf else}\; q$$ where $c$ is a Boolean combination of equalities between terms. Only those branches of conditional statements whose condition evaluates to \textsc{true}{} are executed. \item {As a matter of convenience, a program statement may also be prefaced by $\mbox{\bf let}\; x=t \mbox{~\bf in~}\dots$, which has the same effect as if all occurrences of $x$ in the statement were replaced by $t$.} \item Furthermore, statements may be composed in \emph{parallel:} $A~\|~B$. \item The program, as such, defines a single transition, which is executed repeatedly, as a unit, until no assignments are enabled by the conditions preceding them. When no assignments are enabled, then there is no next state. \end{itemize} \end{definition} By~\cite{ASM-Theorem-Gurevich}, each transition of a classical algorithm can be described by a bounded number of comparisons and assignments. All models of effective, sequential computation (including Turing machines, counter machines, pointer machines, etc.) satisfy the above algorithmicity postulate, and can therefore be programmed as ASMs.
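The operational reading of the definition above can be sketched as a toy interpreter. This is our own illustrative encoding, not the cited formalism: a state is a dictionary from locations (function name, argument tuple) to values, and a program is a list of guarded assignments given as (guard, location, value) functions; all enabled updates are applied in parallel, repeatedly, until none is enabled:

```python
# Toy interpreter for ASM-style programs (our own illustrative encoding).
# A state maps locations (function name, argument tuple) to values; a
# program is a list of (guard, location, value) functions.  One transition
# collects all enabled updates and applies them simultaneously.

def step(state, program):
    updates = {}
    for guard, loc, val in program:
        if guard(state):
            updates[loc(state)] = val(state)
    if not updates:
        return None                 # no assignment enabled: terminal state
    new_state = dict(state)
    new_state.update(updates)       # parallel application of all updates
    return new_state

def run(state, program):
    while True:
        nxt = step(state, program)
        if nxt is None:
            return state
        state = nxt

# Subtraction-based gcd as two guarded assignments over nullary symbols:
prog = [
    (lambda s: s[("a", ())] > s[("b", ())] > 0,
     lambda s: ("a", ()), lambda s: s[("a", ())] - s[("b", ())]),
    (lambda s: s[("b", ())] > s[("a", ())] > 0,
     lambda s: ("b", ()), lambda s: s[("b", ())] - s[("a", ())]),
]
final = run({("a", ()): 12, ("b", ()): 18}, prog)   # ends with a = b = gcd
```

The repeated-until-no-assignment-is-enabled loop corresponds to the last bullet of the definition; of course, the guards here are Python predicates rather than Boolean combinations of equalities between terms.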
By the same token, idealized algorithms for computing with real numbers, or for geometric constructions with compass and straightedge (see~\cite{Reisig04} for examples of the latter), can also be precisely described by ASM programs. See~\cite{Generic}. ASMs work over arbitrary domains, but in this work, we use sets of atoms. A \emph{(fair) unordered domain} consists of a finite set $A$ of atoms $a_1,\ldots, a_n$, including the empty set $\varnothing$, and any set obtained from atoms by a finite number of applications of the following set-theoretical operations: \begin{itemize} \item $\{\}$, the \emph{singleton} former; \item $\cup$, the \emph{union} of two sets. \end{itemize} An algorithm is also provided with oracle access to \begin{itemize} \item a binary Boolean \emph{membership} predicate $\in$ with its usual set-theoretic interpretation; \item a unary \emph{choice} operation $\epsilon$, which returns an (arbitrary) element from a given non-empty set, which is then used in a program statement. {The $\mbox{\bf let}\;$ construct may be used to ensure that the same choice is made in more than one place.} \end{itemize} Even though computational paths might differ from run to run, an algorithm must commit to the same output despite the choices it makes. This class of choice-based algorithms over unordered domains was introduced in \cite{BGS-CPT}, where it was proved that a matching problem for graphs can be computed over these unordered structures, but not if choice is replaced by unbounded parallelism. Later, in \cite{BGS}, it was shown that supporting structures with counting resolves this issue. \section{Simulating Algorithms with Cellular Automata} We allow only finitely-describable topologies for cellular automata, and we bound their dynamics, requiring that the transition relation also be describable by a finite number of patterns.
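Before turning to the simulation, the unordered-domain operations of the previous section can be modeled concretely. In this sketch (our own illustration, with atoms as Python strings and sets as frozensets), $\{\}$, $\cup$, $\in$, and the choice operation $\epsilon$ become:

```python
# Our own concrete model of a (fair) unordered domain: atoms are strings,
# sets are Python frozensets.  Frozensets are automatically deduplicated,
# so each domain element has exactly one representation.

EMPTY = frozenset()

def singleton(x):
    return frozenset([x])        # the singleton former {x}

def union(s, t):
    return s | t                 # union of two sets

def member(x, s):
    return x in s                # the membership predicate

def choose(s):
    """The epsilon operation: an arbitrary element of a non-empty set."""
    assert s, "choice is defined only on non-empty sets"
    return next(iter(s))

a, b = "a", "b"
ab = union(singleton(a), singleton(b))                      # {a, b}
t = union(union(singleton(ab),
                singleton(singleton(singleton(a)))),        # {{{a}}}
          singleton(singleton(a)))                          # {{a,b},{{a}},{a}}
```

The element `t` built here is the sample element $\{\{\textsf{a,b}\},\{\{\textsf{a}\}\},\{\textsf{a}\}\}$ used in the next subsection; the automatic deduplication of frozensets anticipates the one-node-per-element property of tangles.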
Our main result is that cellular automata with bounded dynamics can simulate the behavior of any classical algorithm over any unordered domain. We first show how the graph structures of cellular automata can represent the unordered domains of algorithms. Then we show how a transition may simulate manipulations of domain elements. \subsection{Bounded Dynamics} Suppose some domain is constructed over two atoms \textsf{a} and \textsf{b}. The classical tree representation of an element $\{\{\textsf{a,b}\},\{\{\textsf{a}\}\},\{\textsf{a}\}\}$ looks like this: \[\includegraphics[scale = 0.4]{pics/unordered-tree.pdf}\] To avoid obvious duplication of data, we should use edges pointing to shared locations. This representation is called a \emph{term-graph}~\cite{TermGraph}, and our sample element will look like this: \[\includegraphics[scale = 0.4]{pics/unordered-term-graph-eps-converted-to.pdf}\] Now, assume that we want to represent two distinct elements $\{\{\textsf{a,b}\},\{\{\textsf{a}\}\},\{\textsf{a}\}\}$ and $\{\{\textsf{a,b}\},\{\textsf{b}\}\}$. To avoid duplication here, we again use pointers to locations shared by both and call the resulting structure a \emph{tangle}~\cite{ECTT}. In our example, the tangle will look as follows: \[\includegraphics[scale = 0.4]{pics/unordered-tangle-eps-converted-to.pdf}\] Next, we need to represent the values of functions. We use a slight modification: For each $k$ such that an ASM has a non-constructor function of arity $k$, we append to the tangle an ordered $k$-tuple. Assume that our vocabulary has a binary function $g(\cdot,\cdot)$, and assume our ASM has critical terms $t$ and $p$. Suppose we need to represent state $X$ with values $t=\{\{\textsf{a,b}\},\{\{\textsf{a}\}\},\{\textsf{a}\}\}$, $p=\{\{\textsf{a,b}\},\{\textsf{b}\}\}$, and $g(t,p)=\{\textsf{a,b}\}$. For convenience, we add a focus node called \emph{Criticals}. Edges outgoing from this node point to the values of critical terms and are labeled appropriately.
Our modified tangle will look as follows: \[\includegraphics[scale = 0.4]{pics/unordered-tangle-pairs-eps-converted-to.pdf}\] {With tangles, we do not have duplicate nodes, that is, no two distinct nodes have the same subtrees, since every domain element is represented by at most one node.} As the last step, we reverse all tangle edges, except for those representing the values of critical terms, to allow directed access from nodes to parents: \[\includegraphics[scale = 0.4]{pics/unordered-tangle-reversed-eps-converted-to.pdf}\] {(This step is not necessary, but will have the arrows going in the direction of most of the movements.)} Note that both in-degree and out-degree are unbounded. {The node labeled \textsf{Criticals} will serve as the active one in the following sequential simulation.} \subsection{The Simulation} We base the proof of our main result on the fact that the evolution of any algorithm may be captured by an ASM program. We show that, given a domain simulation as above, for each mechanical rule in a program, there is a set of transition rules of a cellular automaton that emulates it. And since each algorithmic transition is described by a finite rule, we will only need finitely many automaton rules to simulate it. \begin{lemma} Cellular automata simulate the application of pairing $\langle\cdot,\cdot\rangle$ in constant time. \end{lemma} \begin{proof} Assume we want to apply a rule $p:=\langle t,t' \rangle$, where $t$, $t'$, and $p$ are critical terms. The transition rule for the cellular automaton would be as follows: \[\includegraphics[scale = 0.4]{pics/unordered-transition-pair.pdf}\] {We need the second rule to cover the case when the pair already exists; the first rule is more general and will only fire if the second one is inapplicable.} {(The annotations \textsf{X} and \textsf{Y} are not labels; they are used to indicate which nodes on the right of a pattern correspond to which nodes on the left.
For convenience, colorless cells like these match a node of any color; skirting formality, this way we need not unnecessarily multiply patterns to cover every possible color combination.)} \end{proof} \begin{lemma} Cellular automata simulate the application of choice $\epsilon$ in constant time.\end{lemma} \begin{proof} This operation is used in statements of the form $\mbox{\bf let}\; x=\epsilon(t) \mbox{~\bf in~} A$. A straightforward definition of the appropriate transition for a cellular automaton will of necessity be nondeterministic, like the $\epsilon$ operation itself. The pattern chooses the element of $t$ for each of its uses in statement $A$, like this: \[\includegraphics[scale = 0.4]{pics/unordered-transition-choose.pdf}\vspace*{-5mm}\] \end{proof} \begin{lemma} Cellular automata simulate the application of conditional tests in constant time. \end{lemma} \begin{proof} Each transition of an ASM performs a bounded number of actions of two types: Boolean statements and assignments. Since their number is bounded by the algorithm, it is enough for us to describe the simulation of one operation of each type. We have two types of Boolean conditions, inclusions and comparisons: \begin{itemize} \item Boolean membership $\in$ is used only as a condition. A statement \[\mbox{\bf if}\; t\in p\;\mbox{\bf then}\; t:=p\] for example, is expressed as follows: \[\includegraphics[scale = 0.4]{pics/unordered-transition-in.pdf}\] \item Boolean comparison is used as a condition.
For example, an ASM described by a rule \[\mbox{\bf if}\; t\neq p \;\mbox{\bf then}\; t:=f(t,p)\] would be simulated by a cellular automaton with the following transitions {to cover all cases (there is a node for $f(t,p)$; there is a node for the pair $\langle t, p\rangle$ but not the value; neither)}: \[\includegraphics[scale = 0.4]{pics/unordered-transition-functions.pdf}\] \end{itemize} \end{proof} \begin{lemma} Cellular automata simulate the application of singleton formation $\{\cdot \}$ in a linear number of steps. \end{lemma} \begin{proof} Assume that an algorithm applies a rule $p:=\{t\}$, where $t$ and $p$ are critical terms. We simulate the singleton operation in three steps. First we create a node for the singleton and mark it \textsf{singleton suggestion}. We also choose another node, if there is one, and mark it \textsf{singleton candidate}: \[\includegraphics[scale = 0.4]{pics/unordered-transition-singleton-1.pdf}\] Then we check whether there already exists a node for that singleton, and if so, we discard the new singleton node created in the previous step. To check, we go over all neighbors of \textsf{X} and check each of them in turn. If the requisite singleton node is found, we point to it as the singleton (with a $p$-marked arrow) and disconnect the newly created node from the tangle. If the tested node is not the desired one, we mark it with a label stating that the node was tested and move on to the next candidate. When there are no candidates left, we mark the newly created node as the desired singleton. \[\includegraphics[scale = 0.4]{pics/unordered-transition-singleton-2.pdf}\] As the last step, we remove the marks from the nodes and then remove the edge \textsf{singleton found}: \[\includegraphics[scale = 0.4]{pics/unordered-transition-singleton-3.pdf}\] As always, we use the rule which forces the most constrained pattern to be applied.
{The cost is linear, since we need to check every set of which $t$ is a member to see if it is a singleton.} \end{proof} \begin{lemma}\label{th:union_ops} Cellular automata can simulate applications of the union of two sets with a quadratic number of operations (relative to the number of elements in the sets). \end{lemma} \begin{proof} Suppose we want to simulate the operation $t:=s\cup p$, where $t,p,s$ are critical terms, with $s$ pointing to a node indicated by $X$ and $p$ pointing to $Y$. The simulation will proceed in several stages; the correct order of these steps will be assured by the maximality restriction on transitions. Similar rules should be added to the transition for each possible node coloring. Recall that we want only one instance of each value. In the beginning, we have to find out whether we already have a node representing the union of $s$ and $p$. For this we will go over all nodes accessible from (any) one of the elements that belong in the union. We will show that verifying one node can be done in linear time, so the overall procedure runs in quadratic time. \begin{enumerate} \item Assume we want to check whether the element whose root is pointed to by $u$ is the union of $s$ and $p$. We start by creating a special edge to this element. This edge, labeled \textsc{check}, will serve as a lock, indicating that we are in the midst of the verification process and not allowing other transitions to interfere in the meantime. \[\includegraphics[scale = 0.4]{pics/unordered-transition-check-union1.pdf}\] \item Next, we detect the elements appearing in all of $u$, $s$, and $p$. Edges from those elements are colored with a special color: \[\includegraphics[scale = 0.4]{pics/unordered-transition-check-union-v2-2.pdf}\] The same is done (in parallel) with $t$ and $p$. \item Next, we detect common elements of $u$ and $s$ but not in $p$.
Pointers from detected elements are again marked with the special color: \[\includegraphics[scale = 0.4]{pics/unordered-transition-check-union-v2-3.pdf}\] The same is done with $u$ and $p$. \item If no unmarked elements remain in $s$, $p$, or $u$, then $u$ is indeed the union of $s$ and $p$. Mark it as such. Otherwise, $u$ is not the union node: \[\includegraphics[scale = 0.4]{pics/unordered-transition-check-union-v2-4.pdf}\] \item Once the status of $u$ is clear, we remove the marks from edges: \[\includegraphics[scale = 0.4]{pics/unordered-transition-check-union-v2-5.pdf}\] {Each element identified as not being the union is marked with a special color for the duration of the search, so as not to re-check it, similar to the singleton case.} \end{enumerate} {Once all the possible elements have been checked, and no union has been found, we are ready to create the union.} \begin{enumerate} \item {First, we create a new node which will eventually hold the union:} \[\includegraphics[scale = 0.4]{pics/unordered-transition-check-union-v2-6.pdf}\] {A special marked edge tells the automaton that it is in the process of creating a union.} \item We start by copying to $u$ the elements that are common to $s$ and $p$, {and mark the edges}: \[\includegraphics[scale = 0.4]{pics/unordered-transition-union-v2-2.pdf}\] \item In a similar manner, we copy elements that are present in one set only: \[\includegraphics[scale = 0.4]{pics/unordered-transition-union-v2-3.pdf}\] \item Once all edges are marked, we know that the desired node has been created, and we mark it appropriately: \[\includegraphics[scale = 0.4]{pics/unordered-transition-union-v2-4.pdf}\] This rule applies only when no element transfers remain.
\item {Once the \textsc{is union} mark appears, all that remains is to clean the marks left en route:} \[\includegraphics[scale = 0.4]{pics/unordered-transition-union-v2-5.pdf}\] \item As soon as the union is ready and its neighborhood is clean, we may remove the lock: \[\includegraphics[scale = 0.4]{pics/unordered-transition-union-v2-6.pdf}\] \end{enumerate} Note again that the maximality restriction on transitions ensures that all the above steps are applied in the prescribed order. \begin{ignore} To prevent this, we again take advantage of the maximality restriction: we combine each inclusion rule with each foreign rule of the transition, forcing the automaton to keep on going with the inclusion before returning to the other transitions. We then replace the inclusion rules with the obtained set of rules. For example, assume we have the following rule: \[\includegraphics[scale = 0.4]{pics/unordered-transition-example_union.pdf}\] Take as an example one of the union rules: \[\includegraphics[scale = 0.4]{pics/unordered-transition-union2.pdf}\] We want to ensure that this rule is applied before the other rule. So we combine them together in one transition rule: \[\includegraphics[scale = 0.4]{pics/unordered-transition-union5.pdf}\] We do so for all rules in the transition, and replace the inclusion rules with the outcome. \end{ignore} \end{proof} Every classical algorithm is emulated step-by-step, state-by-state by an ASM consisting of a fixed number of comparisons and assignments~\cite{ASM-Theorem-Gurevich}. That fact, along with the previous lemmata, is what is needed to achieve our goal: \begin{theorem}[Main]\label{th:main} Cellular automata with {bounded dynamics (i.e.\@ all the nodes in a pattern are within a bounded distance of the focus) and without loops (there are no directed cycles within patterns)} can simulate the performance of any classical algorithm over an unordered domain with {quadratic} multiplicative overhead.
\end{theorem} \begin{proof} {We have to ensure that, once the automaton starts to simulate the singleton or union operation, it cannot be interrupted by the application of other transition rules. Otherwise, foreign steps could affect the elements of the sets involved in these set operations. This problem can be precluded, for instance, by changing the color of the \textsf{Criticals} node during the simulation of those operations.} {Each step of the original algorithm can only create a bounded number of new sets. Hence the size of the sets involved in any union operation is bounded by the size of the sets in the initial state plus some multiple of the algorithm's steps so far. So the overall overhead caused by unions is quadratic.} \end{proof} \section{Discussion} We have outlined the basic features of dynamic cellular automata and proved that this model is flexible enough to simulate arbitrary computations over unordered domains. It may be possible to reduce the cost of the simulation. {In particular, allowing negative edges in patterns, for labeled edges that must not appear, could reduce the complexity of the union operation. One may also consider allowing for duplication, which would increase the cost of comparisons but reduce the cost of union.} {Another question is at what added expense one could bound the degree of nodes.} One task facing us now is to describe a natural extension to the parallel case. In that case, at each step, all cells are active and can all affect their neighborhoods. \subsection*{Acknowledgement} This work benefited greatly from long discussions with Gilles Dowek. {We thank the reviewer for a careful reading.} \bibliographystyle{eptcs}
\section{Introduction} The method of least squares linear regression (straight-line fitting) has a very long history: it was invented in its simplest form by C.F.~Gau\ss, but is still one of the most widespread and powerful approaches in data analysis. It may be used as a stand-alone tool to detect linear trends, or be incorporated into more complex analysis procedures, like the Detrended Fluctuation Analysis proposed in \cite{Peng1994}, whose first step requires subtraction of linear trends from subpartitions of data. The standard variant of the method assumes a linear relation between the dependent variable $y$ and the independent one $x$, and the existence of random impacts on the outcomes of single measurements, represented by the noise $\xi$, so that \begin{equation} y_i = ax_i + b + \xi_i, \label{LiRe1} \end{equation} and is aimed at extracting information about $a$ and $b$ from such noisy data. The standard method works well if the data are ``compact'', i.e. when the corresponding interval on the abscissa is homogeneously sampled and no large ordinate outliers are present. The method is essentially a parametric one and can be regarded as the maximum likelihood approach assuming a Gaussian distribution of independent errors. The challenges of more complicated samples originating from modern problems of experimental and computational physics and related fields have motivated works aimed at improving the accuracy of fits to extremely irregular data, i.e. data having outliers on the ordinate and on the abscissa (leverage points), or large errors in locating $x_i$; see \cite{Macdonald1992,Cantrell2008} for a list of modern modifications.
For this reason, a number of works discuss criteria for the detection of such outliers, with their subsequent elimination with respect to a prescribed cut-off level and the regression of the resulting ``cleared'' samples \cite{Rousseeuw2005}, or a choice of subintervals where the influence of outliers could be negligible \cite{Grech2013,Gulich2014}. Another problem arises for non-independent noises, which can themselves show trends \cite{Hu2001}. Even in the case of independent errors, problems arise if the noise possesses a heavy-tailed distribution, i.e. generates large outliers. These are quite characteristic of a large variety of processes in small nonequilibrium systems, network dynamics, econophysics, etc. \cite{Clauset2009}. Since these distributions may lack even the first moment, their processing, if keeping the principles of least-squares regression untouched, requires very specific methods \cite{Adler,Gather2006}, including repeated median regression, the consideration of a nested hierarchy of block subdivisions of the analyzed sample, etc. For such cases non-parametric regression methods may be superior to the standard one. In the present work we discuss two such approaches: quantile regression, as pioneered by Koenker and Bassett \cite{Koenker1978}, and scale parameter regression, based on the properties of characteristic functions. The methods are non-parametric (i.e. they do not assume a specific form of the distribution) and robust (i.e. they do not rely on the existence of its moments). Our numerical examples consider a linear trend in the presence of independent errors distributed according to L\'evy stable laws. As a practical example, we consider geophysical data, namely the eastward component of the geomagnetic field measured on a moving Antarctic ice shelf, showing a linear trend from the motion and a combination of small and large scale fluctuations. Here the results of robust scale parameter regression are compared to conventional methods.
\section{Linear regression} Before discussing the specific methods, let us briefly review the general idea (or, better, general ideas) of linear regression. Posing the regression problem starts from the assumption that the values of the dependent variable (observable) $y_i$ depend linearly on $x_i$, but are subject to additive noise $\xi_i$, Eq.~(\ref{LiRe1}). We are looking for a way of inferring the parameters $a$ and $b$ that delivers the best possible estimates $\hat{a}$ and $\hat{b}$ for these parameters. In the ideal situation (at least in the asymptotic setting when the total number of measurement points $N$ gets large, $N \to \infty$) the method should give $\hat{a}=a$, $\hat{b}=b$. In practice, this is usually done by the application of the least squares fit. There are different ways to think about the least squares method. First, we can follow the standard line of argumentation pertinent to statistical inference and make a maximum likelihood estimate for the parameters $a$ and $b$, assuming the distribution of $\xi_i$ is Gaussian with zero mean and unknown dispersion, \[ p(\xi_i) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left( - \frac{\xi_i^2}{2 \sigma^2} \right). \] In this case the probability density of a given realization of $\xi_i$ is given by the product of such single-point distributions: \[ p(\xi_1,...,\xi_N) = \prod_{i=1}^N p(\xi_i) = \left(\sqrt{2 \pi} \sigma \right)^{-N} \exp \left( - \frac{\sum_{i=1}^N \xi_i^2}{2 \sigma^2} \right). \] Changing from $\xi_i$ to $y_i$ we get the corresponding density of the experimental outcomes $\{y_i\}$, \[ p(y_1,...,y_N | a,b) = \left(\sqrt{2 \pi} \sigma \right)^{-N} \exp \left( - \frac{\sum_{i=1}^N (y_i - a x_i -b)^2}{2 \sigma^2} \right).
\] Considering the log-likelihood of $a$ and $b$ given the data, \[ L(a,b | \{y_i\}) = \ln p(\{y_i\} | a,b) = \mathrm{const} - \frac{\sum_{i=1}^N (y_i - a x_i -b)^2}{2 \sigma^2}, \] and maximizing it with respect to $a$ and $b$, we get the least squares prescription for finding $a$ and $b$ by minimizing the sum of squared residues, \[ R^2=\sum_i \left[y_i-(a x_i+ b)\right]^2 \to \min. \] Note that this criterion, which essentially assumes a Gaussian prior, is of course a parametric one, and therefore not robust. Assuming another distribution, say the Laplace one with \[ p(\xi_i) = \frac{1}{\sigma} \exp \left( - \frac{|\xi_i|}{\sigma} \right), \] will lead to a different criterion, in this case to the minimization of \[ R = \sum_i \left|y_i-(a x_i+b)\right|. \] Another approach to linear regression is a geometric one. As above, the variables $\xi_i$ are assumed to be i.i.d. random variables drawn from a distribution $p_\xi(\xi)$, which will be assumed continuous, symmetric, and unimodal. The coordinates of the points $(x_i,\xi_i)$ are mutually independent. The points $(x_i,\xi_i)$ are considered as realizations of points in a two-dimensional \textit{cloud} characterized by the density (joint probability density) $p(x,\xi) = p(x) p(\xi)$. This cloud is mirror-symmetric with respect to the $x$ axis. The pairs $(x_i,y_i)$ with $y_i$ depending on $x_i$ are realizations of points of another two-dimensional cloud, which is obtained from the first one by a shift and an affine transformation. The regression aims at restoring the transformation parameters $a$ and $b$, so that the cloud with the density $p(x, \xi)$ with $\xi = y -(ax +b)$ indeed has the properties discussed above. One looks for the empirical estimators $\hat{a}$ and $\hat{b}$ of these parameters.
If we say that this symmetry presumes that the center of mass of the cloud lies on the $x$ axis, and that one of its main axes of inertia coincides with it, we get from the first requirement \[ \sum_i \left[y_i-(ax_i+b)\right] = 0, \] so that $b = N^{-1} \sum_i (y_i-ax_i) = \langle y \rangle - a \langle x \rangle$. Then one notes that the main axes of inertia of a two-dimensional body are such that the moments of inertia with respect to them are extremal, and requires the extremality of \[ I = \sum_i [y_i-(ax_i+b)]^2 = \sum_i [y_i - \langle y \rangle - a (x_i - \langle x \rangle)]^2 \] (with $I$ being the moment of inertia with respect to the $x$-axis) with respect to $a$, with $b$ defined as before. This gives the equations which define $a$ and $b$ in the least squares method. However, the mirror-symmetry of the $(x, \xi)$-cloud with respect to the $x$-axis can be cast into other extremality prescriptions, or into the statement that half of its mass has to lie above, and half below, the axis, which gives (provided $b$ is defined) the robust median regression for $a$. The method should work in this form provided $\langle y \rangle$ and $\langle x \rangle$ do exist. If they do not (i.e. when the distribution of $y$ is broad (outliers) or the distribution of $x$ is broad (leverage points)), the standard problems arise. Note that the median method is sensitive to the centering of the cloud: it will break down if the center of the cloud is at the origin, since then any line through the center leaves half of the mass on either side, so that the median condition no longer constrains the slope. Another variant of the geometric approach, discussed below, is based on a different consideration. It aims at finding the estimate for $a$ prior to connecting it to $b$, and is robust both with respect to outliers and to leverage points (the latter question is not a topic of the present work). Let us define the residues \[ \Delta y_i = (\hat{a}-a)x_i + (\hat{b}-b) + \xi_i, \] and concentrate first on obtaining the best estimate $\hat{a}$ for the slope parameter $a$.
We note that the parameter $b$ only shifts the distribution of $\Delta y_i$, influencing its position, while the parameter $a$ influences the width of $p(\Delta y_i)$. In the case of exact tuning $\hat{a} = a$ this width is given by that of the distribution of $\xi$; for $\hat{a} \neq a$ the distribution of $\Delta y_i$ (centered at $\hat{b}-b$) is a \textit{convolution} of the distribution of $\xi$ with that of $(\hat{a}-a)x_i$, which now has a nonzero width. Since the convolution of two distributions is always ``broader'' than each of them, the minimal width coincides with that of the distribution of $\xi$ and is achieved for $\hat{a} = a$. In a setting where the width of the distribution is given by its variance, the method again reduces to the least squares approximation: the empirical width is defined as \[ W^2 = \frac{1}{N} \sum_i \Delta y_i^2 \] and is minimized with respect to the two free parameters $\hat{a}$ and $\hat{b}$. \section{Width regression} In our approach we use the fact that while the parameter $b$ only shifts the distribution of $\Delta y_i$, its width is influenced only by the parameter $a$. In the case of exact tuning $\hat{a} = a$ this width is given by that of the distribution of $\xi$. Our two regression approaches differ in how this ``width'' of the distribution is defined. As we have already seen, defining the width by the variance of the corresponding distribution (provided it exists) leads to the standard least squares prescription; its additional advantage is that the minimization procedure reduces to the solution of a system of linear algebraic equations. Other definitions of width (for example, estimating the first absolute moment of the distribution) lead to nonlinear equations which have to be solved numerically. Both methods estimate the width via some absolute moments of the distribution.
Neither method works for distributions with power-law tails; the first fails for distributions with diverging second moment, the second for distributions with diverging first moment (such as the Cauchy distribution). Moments do not provide robust statistics, since they do not exist for all distributions. Robust statistics are provided by measures of width which exist for all distributions of $y$ and of $x$. There are several classes of such robust measures, pertinent either to the distribution itself, say its quantiles, or to its characteristic function, say its scale parameter. These two possibilities are discussed in the forthcoming sections. In all our discussions we concentrate only on outliers, and both in our numerical examples and in the practical one the $x_i$ are homogeneously distributed within a finite interval. \subsection{The interquantile distance regression} \subsubsection{Description of the method} One robust estimate of the width is given by the interquantile distance of the corresponding distribution (since the cumulative distribution function ({\it c.d.f.}) and therefore the quantiles exist for any proper PDF). The practical realization for a given data set $\{x_j\}$, $j=1,\dots,N$, is subdivided into two steps. Since the width of the {\it c.d.f.} is invariant with respect to shifts, at the first step we consider the series $$ y^{i}_j=y_j-a_ix_j $$ and its {\it c.d.f.}'s $C(a_i)$ for trial slopes $a_i$ equidistantly sampled with step $h_a$ within some interval. We moreover fix some quantiles $q$ and $1-q$ defining the width to be minimized (in the following examples we set $q=1/4$). As discussed above, the minimal half-width of $C(\hat{a})$ corresponds to the best fit $a_i=\hat{a}$. For each $a_i$, the obtained set of values $y^{i}_j$ is sorted in ascending order to $\tilde{y}^{i}_j$, whence the desired {\it c.d.f.} half-width is simply \begin{equation} HW(C(a_i))=\tilde{y}^{i}_{[3N/4]}-\tilde{y}^{i}_{[N/4]}. 
\label{hwk} \end{equation} A search for the minimum of the series (\ref{hwk}) provides the index of the desired value $a_i=\hat{a}$. Here the square brackets denote the integer part of the fractions. Having obtained $\hat{a}$, one can obtain the shift parameter $\hat{b}$ as the median of the distribution of $y_j-\hat{a} x_j$. However, it might be preferable to obtain the value of $\hat{b}$ via equidistant trials $b_i$, for which the {\it c.d.f.} of the series $$ y^{i}_j=y_j-\hat{a}x_j-b_i $$ has a median equal to zero, instead of a single-run median search. This is the case for non-equispaced samples, since the algorithms for identifying the zero crossing provide better accuracy due to the possibility of interpolation. Practically, due to the sample's discreteness, we use the criterion of minimum for $|\tilde{y}^{i}_{[N/2]}|$, where $\tilde{y}^{i}_j$ is again the series ${y}^{i}_j$ sorted in ascending order. Thus both fitted parameters, $a$ and $b$, are determined. Although in our simple realization of the method we mostly obtain the quantiles by simply counting the points, we note that this can be done in a more elegant way using the quantile regression methods pioneered in \cite{Koenker1978} (see \cite{Koenker} for a state-of-the-art discussion). This general approach can be cast into a minimization problem, namely, solving \begin{equation} \hat{a}=\mathrm{arg\,min}_{a\in\Re}\sum_{i:\; y_i\geq ax_i}q|y_i-ax_i|+\sum_{i:\; y_i< ax_i}(1-q)|y_i-ax_i|, \label{minsolv} \end{equation} where $0<q<1$ is the regression quantile sought for. Formally, the method requires the existence of the first moment of the $y$-distribution, and may lead to instabilities when applied to situations with large outliers, although we never encountered them in our test runs. 
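The two-step procedure above (trial-slope grid, interquartile half-width, then the median for the shift) can be sketched as follows; this is an illustrative Python version, not the paper's MATLAB routine, and the function and variable names are our own:

```python
import numpy as np

def interquantile_regression(x, y, a_grid, q=0.25):
    """Fit y ~ a*x + b by minimizing the interquantile distance of the
    detrended series y - a*x over a grid of trial slopes a_i.

    q=0.25 corresponds to the interquartile half-width HW used in the text."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Half-width HW(C(a_i)) = Q_{1-q} - Q_q of the series y_j - a_i*x_j.
    widths = [np.quantile(y - a * x, 1.0 - q) - np.quantile(y - a * x, q)
              for a in a_grid]
    a_hat = a_grid[int(np.argmin(widths))]
    # The shift parameter is the median of the detrended data.
    b_hat = np.median(y - a_hat * x)
    return a_hat, b_hat

# Example mirroring the setup of the text: y = 0.5*x + 0.2 with Cauchy
# noise of scale gamma = 5, unit sampling step on [0, 100).
rng = np.random.default_rng(0)
x = np.arange(0.0, 100.0)
y = 0.5 * x + 0.2 + 5.0 * rng.standard_cauchy(x.size)
a_grid = np.arange(0.0, 1.0, 0.001)
a_hat, b_hat = interquantile_regression(x, y, a_grid)
```

Sorting-based quantile extraction (here via `np.quantile`) is all that is needed; no moments of the noise enter the procedure.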
\subsubsection{Maximizing sensitivity} Although the approach works for an arbitrary part of the {\it c.d.f.}'s width, an important question is which quantile has to be chosen to provide the largest local sensitivity of the method. Let us begin by considering a centered distribution and take $\hat{b}-b = 0$. Let us denote $\Delta a =\hat{a}-a$. The distribution of centered $y$ is a convolution of the distributions of $(\hat{a}-a)x$ and of $\xi$, since the $\xi_i$ are independent of $x$. For $x$ homogeneously distributed on the interval $[-W/2, W/2]$ the convolution $\tilde{p}(y)$ of the corresponding distributions can be expressed via the cumulative distribution function $C(x)= \int_{-\infty}^x p_\xi(\xi)d\xi$, namely \begin{equation} \tilde{p}(y) = \frac{1}{W\Delta a} \left[C \left(y+\frac{W \Delta a}{2} \right) - C \left(y - \frac{W \Delta a}{2} \right) \right]. \label{eqpfull} \end{equation} For $\Delta a$ very large the distribution tends to a rectangular one of width $W\Delta a$, so that its interquantile distance (for given quantiles of index $q$ and $1-q$) is linear in $\Delta a$. For $\Delta a$ small the dependence of the interquantile distance on $\Delta a$ becomes quadratic. Let us discuss this situation by expanding the cumulative functions $C$ in Eq.~(\ref{eqpfull}) in a Taylor series around $y$. Since all even terms vanish, only the terms linear and cubic in $W \Delta a/2$ survive in the lowest orders, so that \begin{equation} \tilde{p}(y) = \frac{1}{W\Delta a} \left[C'(y) W \Delta a + C'''(y) \frac{W^3 \Delta a^3}{3} +... \right] = p(y) + \frac{p''(y)}{3}W^2 \Delta a^2. \label{taylor} \end{equation} The position $Q_q$ of the $q$-th quantile is given by \begin{equation} \int_{-\infty }^{Q_q} \tilde{p}(y) dy = q. \label{quant} \end{equation} Inserting the expression Eq.~(\ref{taylor}) into Eq.~(\ref{quant}) and performing the integration, we get $Q_q$ as the solution of the equation \[ C(Q_q)+ \frac{W^2 \Delta a^2}{3} p'(Q_q) = q. 
\] We note that for $\Delta a = 0$ the solution of $C(Q_q) = q$ gives exactly the quantile of the noise distribution, so that \[ C(Q_q) - q = p(Q_q) \Delta Q_q \] is proportional to the shift of this quantile under detuning of $a$. The highest sensitivity is attained when the largest absolute shift $|\Delta Q_q|$ for given $\Delta a$ is observed. Since \[ \Delta Q_q = - \frac{W^2}{3} \frac{p'(Q_q)}{p(Q_q)} \Delta a^2, \] this takes place when $q$ is chosen such that the maximal absolute value of the logarithmic derivative, \[ \left| \frac{p'(Q_q)}{p(Q_q)} \right| = \mathrm{max}, \] is attained at the point $Q_q$ of the error distribution. For example, for the Cauchy distribution these are \textit{exactly} the lower and the upper \textit{quartiles} of the distribution. For a Gaussian distribution of unit variance, for which the absolute value of the logarithmic derivative equals $|Q_q|$, the absolute relative change in the quantile, \[ \left| \frac{\Delta Q_q}{Q_q} \right| = \frac{W^2}{3} \Delta a^2, \] does not depend on the quantile index. However, it should be kept in mind in practical applications that the chosen quantile must contain a sufficient number of points. \subsubsection{Numerical example} Let us consider the signal $y=ax+b+\xi$, where $\xi$ is a random variable with the symmetric, null-centered $\alpha$-stable density with the characteristic function \begin{equation} \phi(\omega)=\exp\left(-\gamma^{\alpha}|\omega|^{\alpha}\right), \label{pdf} \end{equation} where $\alpha\in(0,\,2]$ is the characteristic exponent and $\gamma>0$ is the scale parameter. Note that for $\alpha<2$ the second moment is absent, so that dispersion-based methods are inapplicable, and for $\alpha\in(0,\,1]$ even the mean value diverges, so that one cannot apply approaches based on absolute values of deviations. Fig.~\ref{example} demonstrates an example of the fitting for the function $y=0.5x+0.2$ corrupted by white L\'evy noise with $\alpha=1$ (Cauchy distribution) and scale parameter $\gamma=5$, i.e. 
with quite large outliers, over the time interval $t\in[0,\,100]$ sampled with unit step. The random numbers are generated by the routine {\tt stblrnd} \cite{routine} based on the methods presented in \cite{Chambers1976,Weron1995}. The sample is processed by a purpose-written MATLAB routine with the step size $0.001$, for both $a$ taken from the interval $[0,\,1]$ and $b$ taken from $[-10,\,10]$. The obtained pair is $(a,b)=(0.515,\,0.149)$, while the conventional least-squares linear fit provides considerably worse values, $(0.664,\,-10.807)$. \begin{figure} \includegraphics[width=\textwidth]{examplelin} \caption{The initial deterministic process (thin solid line, almost invisible because it is overlapped by the fit line), its sample with added L\'evy noise (circles), and the results of fitting by the proposed method (thick solid line) and by the conventional least-squares method (dashed line).} \label{example} \end{figure} Fig.~\ref{halfwidth} demonstrates the behavior of the basic statistic of the method, the half-width of the cumulative distribution function. It is naturally irregular, since a single random realization is processed. However, the global minimum is clearly visible even against the background of multiple small local ones. Note that the presence of local minima might be a problem if the global one is shallow, as happens in the example of Sec.~\ref{Pe}. Therefore it is always advisable to plot a curve such as that in Fig.~\ref{halfwidth} to be able to estimate the possible uncertainties caused by this effect. In the case when such uncertainties are large, it is better to resort to the method described in the next section. 
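As a quick numeric check of the sensitivity argument above (our own sketch, not part of the paper), for the Cauchy density $p(x)=[\pi(1+x^2)]^{-1}$ the absolute logarithmic derivative $|p'(x)/p(x)| = 2|x|/(1+x^2)$ is maximal at $|x|=1$, which is precisely the position of the quartiles, since $C(\pm 1)=1/2\pm 1/4$:

```python
import numpy as np

# Cauchy density p(x) = 1/(pi*(1+x^2))  =>  p'(x)/p(x) = -2x/(1+x^2).
xs = np.linspace(-5.0, 5.0, 100001)
log_deriv = 2.0 * np.abs(xs) / (1.0 + xs**2)

# The maximum of |p'/p| sits at |x| = 1 ...
x_max = abs(xs[np.argmax(log_deriv)])

# ... and x = 1 is the upper quartile: C(1) = 1/2 + arctan(1)/pi = 3/4.
upper_quartile_cdf = 0.5 + np.arctan(1.0) / np.pi
```

This confirms the statement in the text that for Cauchy errors the lower and upper quartiles are the most sensitive quantiles, motivating the default choice $q=1/4$.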
\begin{figure} \includegraphics[width=\textwidth]{HW} \caption{The half-width of the cumulative distribution function for the samples $y_j-a_ix_j$ as a function of the trial slopes $a_i$ for the data shown in Fig.~\ref{example}.} \label{halfwidth} \end{figure} Fig.~\ref{ensemble} shows the behavior of the scaled half-width of the \textit{c.d.f.} for different characteristic exponents and scale factors, i.e. the $\alpha$-dependence of $HW/\gamma$. The curves are the results of ensemble averaging over 10000 realizations. One can see that they all decrease monotonically as the distribution of errors tends to the normal distribution, and have a universal shape (the deviations are within the error of averaging). This fact follows from the self-similarity of the distributions (\ref{pdf}), since their arguments depend on the combination $x/\gamma$ if the scale parameter is defined as in Eq.~(\ref{pdf}). Additionally, this picture shows that although for different $\alpha$ different quantiles are the most sensitive to the deviation of $a$ from its best value $\hat{a}$, fixing the interquantile distance (the half-width of the {\it c.d.f.} in our case) as a test statistic provides a practically uniform quality of the slope's determination. 
\begin{figure} \includegraphics[width=\textwidth]{halfwidths} \caption{The dependence of the minimal width of the cumulative distribution functions, normalized by the scale coefficients, on the characteristic exponent $\alpha$, for scales $\gamma=0.5$ (diamonds connected by the solid line (blue in color online)), $\gamma=1$ (circles connected by the dashed line (black in color online)) and $\gamma=5$ (asterisks connected by the dash-dotted line (red in color online)).} \label{ensemble} \end{figure} \subsection{Scale parameter regression} Another method is based on estimating the width of the distribution via its characteristic function $f(k)=\langle \exp(iky) \rangle$, which is also an object that exists for any proper distribution. Since the distribution of centered $y$ is a convolution of the distributions of $(\hat{a}-a)x$ and of $\xi$, its characteristic function $f_y(k)$ is the product of the characteristic function of the distribution of $\xi$, $f_\xi(k)$, and that of $\Delta a\, x$, namely $f_{\Delta a}(k) = \int \exp(ik\Delta a x) p(x) dx = f_x(k \Delta a)$: \[ f_y(k) = f_\xi(k) f_x(k \Delta a). \] For example, for symmetric L\'evy noise with scale parameter $\gamma$ and a homogeneous distribution of $x$ on $(-W/2,W/2)$ we get \begin{eqnarray} f_y(k) &=& \exp(-\gamma^{\alpha} |k|^\alpha) \frac{\sin(W \Delta a k /2)}{W \Delta a k /2} \nonumber \\ &\simeq& 1-\gamma^\alpha |k|^\alpha -\frac{W^2 \Delta a^2}{3} k^2 + ... \label{fsink} \end{eqnarray} (where the prefactor of $k^2$ is simply the dispersion of the distribution of $x$). Thus, fixing some $k$ (small enough that the asymptotic expansion close to $k=0$ still works for both the distribution of $x$ and that of $\xi$), we can look for the maximum in $\hat{a}$ of $f_y(k)$, which is attained exactly at $\Delta a = 0$. 
Note that for a Gaussian distribution of $\xi$, Eq.~(\ref{fsink}) for small $k$ reduces to \[ f_y(k) \simeq 1- (\gamma^2 + \gamma_x^2 \Delta a^2) k^2, \] describing a centered distribution with the total dispersion \[ \gamma_{tot}^2=\gamma^2 + \gamma_x^2 \Delta a^2, \] so that minimizing the total width using the small-$k$ approach reduces to minimizing the dispersion of the $y_i$; its approximation by an empirical estimator leads to the least-squares method. The local sensitivity of the method is always given by $\gamma_x^2 k^2$, so that it can be influenced by a judicious choice of $k$, which has to be small enough to allow using the quadratic approximation (this depends e.g. on the higher moments of the $x$-distribution) but not so small as to make the sensitivity too low. This appropriate value of $k$ for arbitrary $\alpha$ can be determined by the following reasoning. The function $\sin(W \Delta a k /2)/(W \Delta a k /2)$ in the first line of Eq.~(\ref{fsink}) is an oscillating function whose two roots closest to the global maximum at $k=0$ are located at $$ a=\hat{a}\pm\frac{\pi}{kW}. $$ Therefore, if the value of $\hat{a}$ can be restricted to $\hat{a}\in[-a_{max},\,a_{max}]$ by inspection, the frequency parameter can be taken as $k=\pi(a_{max}W)^{-1}$. This results in the location of the main maximum within the prescribed interval only. Therefore, the operational idea of the method is to calculate the empirical characteristic function \begin{equation} \hat{f}(k | a) = \frac{1}{N} \sum_{j=1}^N \exp\left[ik(y_j-ax_j)\right] \label{hatf} \end{equation} as an approximation to $f_y(k)$ for given $a$, and to consider its dependence on $a$ for a fixed $k$ within the range described above. The shift parameter $b$ is omitted in Eq.~(\ref{hatf}), since it only introduces the phase multiplier \[ \hat{f}(k | a) \to e^{ikb} \hat{f}(k | a), \] which can be eliminated by considering \[ \phi(k, a) = |\hat{f}(k | a)| \] (or alternatively by centering in real space). 
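A minimal sketch of this scale parameter regression follows; it is an illustrative Python version (names, seed, noise scale and the particular value of $k$ are our own choices, not the paper's MATLAB setup):

```python
import numpy as np

def scale_parameter_regression(x, y, a_grid, k):
    """Fit the slope by maximizing phi(k, a) = |f^hat(k|a)|, the modulus
    of the empirical characteristic function of the detrended data y - a*x.

    The modulus removes the phase factor exp(i*k*b), so the shift b drops
    out; it is recovered afterwards as the median of y - a_hat*x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    phi = [np.abs(np.mean(np.exp(1j * k * (y - a * x)))) for a in a_grid]
    a_hat = a_grid[int(np.argmax(phi))]
    b_hat = np.median(y - a_hat * x)
    return a_hat, b_hat

# Illustrative example: centered x on [-100, 100) (so W = 200), Cauchy
# noise with scale gamma = 2, and a small k chosen by the reasoning above.
rng = np.random.default_rng(1)
x = np.arange(-100.0, 100.0)
y = 0.5 * x + 0.2 + 2.0 * rng.standard_cauchy(x.size)
a_grid = np.arange(0.0, 1.0, 0.001)
a_hat, b_hat = scale_parameter_regression(x, y, a_grid, k=0.1)
```

Note that the empirical characteristic function is a bounded statistic, so even arbitrarily large outliers contribute at most $1/N$ to the sum, which is the source of the method's robustness.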
Fig.~\ref{kdiffa} shows an example of the behavior of the function $\hat{f}(k | a)$, calculated for a single realization of the same linear function corrupted by L{\'e}vy noise as in Subsection 3.2, with the same spacing of the trial parameter $a$. One can clearly see the maximum sought for, which allows one to determine $\hat{a}=0.504$, a better estimate than the one obtained by the method of the previous section. Moreover, the curve is much smoother than in Fig.~\ref{halfwidth}, which allows one to avoid false extrema. The still undefined parameter $b$ can be determined using the median regression of the detrended data $y_i-\hat{a} x_i$ as described above, since in the present approach it only enters the phase shift and can only be defined modulo $2\pi/k$. \begin{figure} \includegraphics[width=\textwidth]{kdiffa} \caption{The dependence of the characteristic function on the trial slopes around the main maximum. The parameters of the regular and noise components are the same as in Fig.~\ref{halfwidth}.} \label{kdiffa} \end{figure} \subsection{Comparison of the methods} Let us compare the efficiency of the two proposed methods, primarily in the determination of the line's slope. Since individual realizations, especially in the case of small $\alpha$, show considerable variability, we performed the calculations for an ensemble of 1000 individual realizations (with the parameters given above), each of them fitted separately. Fig.~\ref{compL} presents the resulting average values of the slope and its root-mean-square deviations from the exact value $a=0.5$. Fig.~\ref{compA} shows a similar comparison for a fixed sample length ($L=200$) but for different indices $\alpha$ of the noise distribution. 
\begin{figure} \includegraphics[width=\textwidth]{comparison} \caption{Upper panel: the ensemble-averaged value of the slope determined via the quantile distribution width method (circles connected by solid lines) and via the characteristic function regression (asterisks connected by dashed lines) for various sample lengths. Lower panel: the root-mean-square deviations from the exact value for both methods. The parameters of the regular and noise components are the same as in Fig.~\ref{halfwidth}.} \label{compL} \end{figure} One can see that both methods provide more than reasonable fitting even for very short samples. The method based on the characteristic function is more accurate for the shortest samples, which can be explained by the $2\pi$-periodicity of the random phase: since large outliers originating from L{\'e}vy noise are relatively rare, their influence in the vicinity of the main frequency maximum is small for short samples, while their presence in the boundary quartiles strongly influences the half-width of the {\it c.d.f.} For larger samples, the equivalence of the outliers $\xi_i$ and $\xi_i\,\mathrm{mod}\,2\pi$ results in larger errors in comparison with the results provided by the interquantile distance method. Let us turn to the $\alpha$-dependence. The two methods perform slightly differently at small $\alpha$ (Fig.~\ref{compA}), otherwise reproducing the corresponding values very accurately. The root-mean-square deviation of $\hat{a}$ from the exact value is a monotonically decaying function of the L{\'e}vy index for the characteristic function method. For the interquartile method it has a minimum around $\alpha=1$: this fact reflects the varying sensitivity of the method for different $\alpha$; taking quartiles produces maximal sensitivity exactly for $\alpha=1$. 
\begin{figure} \includegraphics[width=\textwidth]{comparison_alpha} \caption{Upper panel: the ensemble-averaged value of the slope determined via the quantile distribution width method (circles connected by solid lines) and via the characteristic function regression (asterisks connected by dashed lines) for various indices of the noise distribution. Lower panel: the root-mean-square deviations from the exact value for both methods. The parameters of the regular and noise components are the same as in Fig.~\ref{halfwidth}.} \label{compA} \end{figure} \section{Practical example} \label{Pe} \begin{figure} \includegraphics[width=\textwidth]{halley_data} \caption{Plot of the eastward component of the geomagnetic field D at Halley, Antarctica, measured at X-min resolution from January 26 to December 28, 1998, after \cite{Clarke2003} (courtesy of M.P. Freeman, British Antarctic Survey), and the linear trend line with the coefficients determined via the scale parameter regression method. The time scale: seconds since January 1, 1998.} \label{halley_data} \end{figure} As a practical test we process geomagnetic field data measured by a fluxgate magnetometer located at Halley, Antarctica, on the Brunt ice shelf. Such data are known to be complex, comprising regular oscillations, highly irregular short bursts, and a linear trend originating from the ice shelf displacement \cite{Clarke2003}. It should be pointed out that the de-trending of such data is one of the key problems of ice shelf-based data processing \cite{Thomas1970}. Fig.~\ref{halley_data} shows an example of such data, the small-scale processing of which has been discussed in Ref.~\cite{Clarke2003}. Its authors highlighted the necessity of an additional median-based outlier exclusion, even for very short portions of the data de-trended by a conventional method, due to the presence of large outliers. The feature which makes this practical example different from our previous numerical ones is the correlated nature of the noise. 
However, one readily infers that the correlation time is short compared to the total measurement time, so that the methods should presumably work. The parameters of the noise were estimated as follows: the data were detrended by the least-squares fit, and then the routine {\tt stblfit} \cite{routine} was applied to check whether the de-trended distribution belongs to the class of $\alpha$-stable ones. The fit rapidly converges to the following parameters: the characteristic exponent $\alpha=1.39381$, the skewness $\beta=-0.0695959$, the scale parameter $\gamma=11.8844$ and the location $\delta=-2.33173$. Thus, one can assume to a good approximation that (up to the correlated nature of the noise) the situation belongs to the class described above: practically symmetric L\'evy noise. The nonzero location parameter appears due to inconsistencies in the determination of the shift parameter by the usual least-squares approach, as discussed below. The estimates for $a$ and $b$ given by the least mean square (LMS) regression and by the scale parameter regression (SPR) are $(a,b)= \left(8.055\cdot10^{-6},-45.8\right)$ and $\left(8.006\cdot10^{-6},-46.4\right)$, respectively. The results of the quantile regression (QR) for different quantiles are given in Table \ref{intrqv}. \begin{table} [h!] \caption{The linear fit parameters for different interquantile distances.} \begin{tabular}{cc} Quantile interval&Parameters $(a,\,b)$ \\ \hline $[0.25\,- \, 0.75]$&$(8.237\cdot10^{-6},-50.1)$\\ $[0.30\,- \,0.70]$&$(8.126\cdot10^{-6},-48.3)$\\ $[0.40\,- \,0.60]$&$(8.257\cdot10^{-6},-50.0)$\\ $[0.475\,- \,0.525]$&$(8.161\cdot10^{-6},-48.9)$\\ \end{tabular} \label{intrqv} \end{table} One readily infers that the results of the LMS and SPR procedures are quite similar, while the results of QR are stable with respect to the choice of the quantile, but overestimate the slope compared to the previous two methods. 
This fact can be traced back to the local irregularity of the interquantile distance curve in the vicinity of a quite flat global minimum (see Fig.~\ref{HAmin}), whose flatness is partly due to the relatively large value of $\alpha$. Therefore, the fact that the interquantile regression might \textit{on average} perform better than SPR for very long-tailed distributions and small samples (compare with the results in Fig.~\ref{compA}) does not guarantee better performance in a single run for more regular noises and large samples. \begin{figure} \includegraphics[width=\textwidth]{Halleya} \caption{The interquantile distance between $Q_{0.7}$ and $Q_{0.3}$ as a function of the trial slopes $a_i$ for the data shown in Fig.~\ref{halley_data}.} \label{HAmin} \end{figure} Let us now concentrate on the comparison of the SPR and LMS procedures as applied to subdivisions of the whole sample and to the sample with excluded outliers, with the goal of comparing the new and conventional approaches. The parameters of the linear fits for a set of intervals obtained by subdividing the initial time interval into two and four parts are presented in Table~\ref{tabsubd}. One readily infers that the relative variation of the slopes does not exceed $9\%$ for the scale parameter regression, in contrast to more than $20\%$ for the conventional least-squares fit. The latter even results in more irregular behavior of the shift parameter: for subdivision into four intervals it varies by a factor of 3, compared to merely $20\%$ as given by the robust method. Therefore, although large outliers influence the fitting results for both methods, the scale parameter regression allows for a more accurate determination of the basic physical effect (the speed of ice motion, a constant directly determining the trend's slope). 
\begin{table} \caption{The comparison of linear fit parameters $(k,\,b)$ obtained by two methods (Scale Parameter Regression -- SPR and Least Mean Square Regression -- LMS) for the subdivided time intervals, expressed as a ratio to the whole interval taken as a unit.} \begin{tabular}{ccc} Subinterval&SPR&LMS\\ \hline $[0,1]$&$\left(8.006\cdot10^{-6},-46.4\right)$&$\left(8.055\cdot10^{-6},-45.8\right)$\\ \hline $[0,1/2]$&$\left(8.121\cdot10^{-6},-48.2\right)$&$\left(8.222\cdot10^{-6},-47.5\right)$\\ $[1/2,1]$&$\left(7.731\cdot10^{-6},-40.0\right)$&$\left(7.690\cdot10^{-6},-36.9\right)$\\ \hline $[0,1/4]$&$\left(7.791\cdot10^{-6},-47.8\right)$&$\left(7.804\cdot10^{-6},-46.5\right)$\\ $[1/4,1/2]$&$\left(7.652\cdot10^{-6},-41.0\right)$&$\left(6.424\cdot10^{-6},-23.0\right)$\\ $[1/2,3/4]$&$\left(7.985\cdot10^{-6},-45.0\right)$&$\left(8.556\cdot10^{-6},-54.2\right)$\\ $[3/4,1]$&$\left(7.278\cdot10^{-6},-40.0\right)$&$\left(7.016\cdot10^{-6},-18.5\right)$\\ \hline \end{tabular} \label{tabsubd} \end{table} As a second test for comparison with the standard approach to the processing of data with large outliers, we discuss linear fitting of the same sample with excluded outliers. At the first step, we de-trended the data by the least-squares fit, set the cutoff level above which points were excluded, and finally processed the initial sample without the excluded points again. Table~\ref{outelim} shows the results of processing the regularized data in comparison with the original ones. As it should be, the exclusion of points whose deviations exceed $1\%$ of the maximal detected value results in equal (within the prescribed accuracy) coefficients of the linear fit. While the results of SPR practically do not change when adding the points with larger deviations, those of LMS show a considerable trend. 
\begin{table} \caption{The comparison of linear fit parameters $(k,\,b)$ obtained by two methods (Scale Parameter Regression -- SPR and Least Mean Square Regression -- LMS) after elimination of outliers.} \begin{tabular}{ccc} Cutoff level &SPR&LMS\\ \hline 100\%&$\left(8.006\cdot10^{-6},-46.4\right)$&$\left(8.055\cdot10^{-6},-45.8\right)$\\ 25\%&$\left(8.007\cdot10^{-6},-46.4\right)$&$\left(8.034\cdot10^{-6},-46.2\right)$\\ 1\%&$\left(8.008\cdot10^{-6},-46.5\right)$&$\left(8.007\cdot10^{-6},-46.8\right)$\\ \end{tabular} \label{outelim} \end{table} \section{Conclusions} The results of this work can be summarized as follows. We have discussed two methods for the robust linear fitting of noisy signals, which can be applied when the lower moments of the noise probability distribution diverge, e.g. for L{\'e}vy noises. Both are based on the idea that the width of the distribution of the residues is smallest when the slope of the regression line is chosen correctly, and they differ in how this width is defined. The first method is the quantile regression approach. The second method deals with its counterpart in the frequency domain, i.e. with the maximization of the trial characteristic function. Both approaches demonstrate their robustness and high accuracy for noise distributions with extremely large outliers, and may be used for a wide range of applications for which such behavior is characteristic, e.g. in problems of plasma dynamics, econophysics, etc. As a practical test we apply the methods to geomagnetic field data measured by a detector placed on an Antarctic ice shelf, showing large irregularity, and compare their performance to that of standard approaches. In this case the scale parameter regression seems to perform best. \section*{Acknowledgments} We are grateful to Dr. N.W. Watkins (LSE) for suggesting the geomagnetic data example and Dr. M.P. 
Freeman (British Antarctic Survey) for kindly providing the experimental data from Halley, Antarctica. This work is partially supported by grant no. 1391 of the Ministry of Education and Science of the Russian Federation within the basic part of research funding no. 2014/349 assigned to Kursk State University and by DFG (project SO 307/4-1).
\section{Introduction} Gravitational waves (GWs) promise a new and exciting window to the cosmos. Two ground-based interferometer experiments, LIGO and VIRGO, are about to restart operations with greatly increased sensitivity \cite{Harry:2010zz,Accadia:2009zz}, and will be joined in a few years by KAGRA \cite{Somiya:2011np}. Once working at their design sensitivity, they are expected to quickly detect gravitational wave signals from binary neutron stars \cite{Abadie:2010cf}. Gravitational waves also offer a unique way to learn about the early universe. A range of phenomena, such as inflation, topological defects and phase transitions may lead to observable gravitational wave signals across a wide range of frequencies (for a review see \cite{Binetruy:2012ze}). There are a number of proposals to realise a gravitational wave detector in space, in the first place eLISA \cite{Seoane:2013qna}, which is scheduled for launch in 2034. Space-based detectors have much longer arm lengths than ground based ones, and have maximum sensitivity in a frequency range which is relevant for a first order phase transition at the electroweak scale. Given these exciting observational prospects, we revisit the generation of gravitational waves in first order thermal phase transitions in the early universe. We have in mind an electroweak-scale phase transition, but nothing in our formalism is specific to electroweak scale physics. In the Standard Model the electroweak transition is known to be a cross-over \cite{Kajantie:1995kf,Kajantie:1996mn,Gurtler:1997hr,Csikor:1998eu,DOnofrio:2014kta}, which does not lead to a gravitational wave signal. However, a strong first order phase transition is possible in various extensions of the Standard Model~\cite{Carena:1996wj,Delepine:1996vn,Laine:1998qk,Grojean:2004xa,Huber:2000mg,Huber:2006wf,Dorsch:2013wja}. We reduce the original physical system to a model consisting of a scalar order parameter field coupled to an ideal fluid. 
The parameters of the model can in principle be fixed by matching to the thermodynamical quantities of the original theory. We perform very large scale numerical simulations to determine the fluid and gravitational wave power spectra. The ultimate goal is to understand what information on the phase transition can be extracted from the future observation of a gravitational wave signal. Since the early nineties there have been a number of studies of gravitational waves from phase transitions. In Refs.~\cite{Turner:1990rc,Kosowsky:1991ua,Kosowsky:1992rz,Kosowsky:1992vn}, the case of a scalar field only, i.e.\ a vacuum transition without fluid, was considered, motivated by models of inflation terminated by a first order transition. Vacuum transitions during inflation and with a fluid were considered in Ref.\ \cite{Chialva:2010jt}. In a vacuum transition, all the energy released goes into the bubble wall, which as a result is accelerated to the speed of light. After solving numerically the field equations for the collision of two scalar bubbles \cite{Hawking:1982ga,Kosowsky:1991ua}, it was realised that the energy-momentum tensor sourcing the gravitational wave production can be approximated by the envelope of the colliding bubbles \cite{Kosowsky:1992rz,Kosowsky:1992vn}. This ``envelope approximation'' models a configuration of expanding bubbles by the overlap of a corresponding set of infinitely thin shells. The envelope disappears once the transition is completed and gravitational wave production stops. It is found that the gravitational wave spectrum peaks at a frequency determined by the average bubble size at collision. In the UV, the spectrum falls as a power law, subsequently shown to be $k^{-1}$ \cite{Huber:2008hg}, where $k$ is the wave number. Numerical studies in Ref.~\cite{Child:2012qg} did not have the dynamic range to clearly confirm this behaviour, but the larger simulations done for the present work show some supporting evidence. 
The case of a thermal phase transition, where the scalar field is coupled to a fluid, is more complicated. The nucleated bubbles will show accelerated expansion until the pressure inside is balanced by friction caused by the plasma. The bubbles then expand with a constant velocity. This is because the energy released by the transition grows with the volume of the bubble, i.e. $\sim R^3$, while the energy transferred to the scalar bubble wall only grows with the bubble surface, i.e. $\sim R^2$, where $R$ denotes the bubble radius. Hence only a tiny fraction of the released energy, on the order of the ratio of the initial to final bubble radius, stays in the scalar field. In the case of a first order electroweak scale thermal phase transition this ratio is about $10^{-4}M_W/M_{\rm Pl}~\sim10^{-13}$. Therefore gravitational wave production in thermal phase transitions is completely dominated by the fluid.\footnote{An exception may be the case where the bubble wall ``runs away'', i.e. friction is not sufficient to prevent the wall from approaching the speed of light~\cite{Bodeker:2009qy}, similar to a vacuum transition. Then both the scalar and the fluid could contribute sizeably to the generated gravitational wave signal.} The energy which is released into the fluid mostly goes into reheating the plasma. A small and calculable fraction \cite{Kamionkowski:1993fg,Espinosa:2010hh} goes into bulk motion of the fluid and can source gravitational waves. Having established the fluid as the main source of gravitational waves, the question of the production mechanism arises. Several mechanisms have been suggested and studied in the literature. In the simplest approach, one assumes that the fluid put into motion by the scalar wall can still be treated as a thin shell and the energy momentum tensor sourcing gravitational wave production can again be approximated by the shell overlaps~\cite{Kamionkowski:1993fg}. 
In this case gravitational wave production finishes with the completion of the phase transition, and a characteristic prediction is the $k^{-1}$ UV power law of the spectrum. Another possibility is that the collision of bubbles induces turbulent motion of the fluid \cite{Kamionkowski:1993fg}. The resulting eddies generate gravitational waves even after the transition is completed~\cite{Kamionkowski:1993fg,Caprini:2006jb,Gogoberidze:2007an,Caprini:2009yp}. Various UV power laws of the gravitational wave spectrum have been suggested in this context, such as $k^{-3.5}$~\cite{Dolgov:2002ra} and $k^{-8/3}$~\cite{Caprini:2006jb}. To shed light on these competing scenarios, we recently performed large scale numerical simulations of a thermal phase transition of a scalar field plus fluid system~\cite{Hindmarsh:2013xza}. We found no indications that fluid turbulence was an important source of gravitational radiation. Instead, the explosive bubble growth generates sound waves, which propagate through the plasma until long after the transition is completed. In our simulations these sound waves are the dominant source of gravitational waves. After the phase transition, the fluid energy-momentum tensor clearly does not have the form assumed in the envelope approximation. The nearly linear behaviour of sound waves is very different to the highly nonlinear behaviour of the scalar field. Other numerical simulations of the generation of gravitational waves by the coupled field-fluid system, using an explicit update algorithm for the fluid, have been described recently in Ref.~\cite{Giblin:2014qia}. The generation of gravitational waves through sound in QCD and electroweak phase transitions was also recently studied in Ref.~\cite{Kalaydzhyan:2014wca}, with special focus on the effect of possible non-linear sound dispersion relations, which were argued to lead to an inverse acoustic cascade.
In Ref.~\cite{Ghiglieri:2015nfa}, generation of gravitational waves in the Standard Model in the absence of such a cascade was discussed. In the present work we simulate at larger volumes and larger average bubble separations than in \cite{Hindmarsh:2013xza}, for the same range of bubble wall speeds and phase transition strengths. We widen the dynamic range even more by nucleating all bubbles at the same time. We confirm that the gravitational wave density parameter is proportional to the fourth power of the root mean square fluid velocity, the ratio of the lifetime of the source to the Hubble time, and the ratio of the length scale of the source to the Hubble length. We measure the length scale of the source, which was approximated by the average bubble separation in \cite{Hindmarsh:2013xza}, directly from the fluid flow. With this improvement, the proportionality constant for the gravitational wave density parameter varies much less between phase transitions with different strengths and bubble wall speeds. Our measurements show that it is $0.8 \pm 0.1$, where the uncertainty is the root mean square fluctuation between simulations. We show that the resulting gravitational wave spectrum exhibits UV power laws which are clearly steeper than the $k^{-1}$ predicted by the envelope approximation. In the case of deflagrations (where the bubble walls are subsonic), we are reasonably confident that the power law is $k^{-3}$. For detonations we do not have sufficient dynamic range to be certain of the power law index. We compare the acoustic gravitational waves with the standard prediction from the envelope approximation. We argue that the envelope approximation is based on an incorrect picture of the dynamics of the fluid, in which the fluid perturbations are destroyed by bubble collisions in the same way as the bubble walls. Instead, they pass through one another and keep oscillating, resulting in a gravitational wave source whose effective lifetime is the Hubble time.
The true gravitational wave energy density is therefore a factor $\beta/\Hc$ higher, where $\Hc$ is the Hubble rate at the phase transition, and $\beta^{-1}$ is the duration of the phase transition. For a thermal electroweak-scale phase transition, the gravitational wave signal is larger than hitherto believed by at least two orders of magnitude. \section{First order phase transitions in cosmology} \subsection{Hydrodynamics} We describe the phase transition using the cosmic fluid -- order parameter field model~\cite{Ignatius:1993qn,KurkiSuonio:1995vy}, which we summarise here. The model contains a classical scalar field $\phi$ (effective order parameter), which is coupled to ideal fluid hydrodynamics. The variables describing the local state of the matter are local temperature $T$, fluid 4-velocity $U^\mu$ and the scalar order parameter field $\phi$. The first order dynamics are obtained by introducing a temperature dependent effective potential $V(\phi,T)$. Following \cite{Enqvist:1991xw,Ignatius:1993qn}, we use a simple $\phi^4$ form for the potential: \begin{equation} \label{e:ScaPot} V(\phi, T) = \frac{1}{2} \gamma (T^2-T_0^2) \phi^2 - \frac{1}{3} A T \phi^3 + \frac{1}{4}\lambda\phi^4. \end{equation} The detailed form of the potential is not important, as long as it allows for a first order phase transition of sufficient strength. A first order transition occurs if $2A^2 < 9\lambda\gamma$. The equation of state of the coupled scalar field and fluid system is \begin{align} \epsilon(T,\phi) &= 3 a T^4 + V(\phi,T) - T\frac{\partial V}{\partial T},\\ p(T,\phi) &= a T^4 - V(\phi,T) \end{align} where $a=(\pi^2/90)g_*$, and $g_*$ is the effective number of degrees of freedom. The latent heat density (usually just called the latent heat) is \begin{equation} {\cal L}(T) = w(T,0) - w(T,\phi_\text{b}) \end{equation} where $w = \epsilon + p$ is the enthalpy density, and $\phi_\text{b}$ is the equilibrium value of the field in the symmetry-broken phase at temperature $T$. 
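As a consistency check on the model thermodynamics, the following sketch (with illustrative couplings $\gamma$, $A$, $\lambda$, $T_0$ and $a$ chosen only to satisfy the first order condition, not the values used in our simulations) locates the broken-phase minimum of Eq.~(\ref{e:ScaPot}), verifies that the two minima are degenerate at $T_c$, and evaluates the latent heat as an enthalpy difference:

```python
import math

# Illustrative couplings (not the simulation values); they satisfy the
# first order condition 2 A^2 < 9 lambda gamma.
gam, A, lam, T0, a = 0.1, 0.2, 0.1, 1.0, 1.0

def V(phi, T):
    """Quartic effective potential V(phi, T)."""
    return 0.5*gam*(T*T - T0*T0)*phi**2 - (A*T/3.0)*phi**3 + 0.25*lam*phi**4

def dV_dT(phi, T):
    return gam*T*phi**2 - (A/3.0)*phi**3

def phi_broken(T):
    """Broken-phase minimum: larger root of dV/dphi = 0."""
    M = gam*(T*T - T0*T0)
    return (A*T + math.sqrt(A*A*T*T - 4.0*lam*M))/(2.0*lam)

def enthalpy(phi, T):
    """w = eps + p = 4 a T^4 - T dV/dT."""
    return 4.0*a*T**4 - T*dV_dT(phi, T)

# Critical temperature from the degeneracy condition.
Tc = T0/math.sqrt(1.0 - 2.0*A*A/(9.0*lam*gam))
phib = phi_broken(Tc)

print(V(phib, Tc))                # ~ 0: minima degenerate at Tc
latent = enthalpy(0.0, Tc) - enthalpy(phib, Tc)
print(latent)                     # positive latent heat L(Tc)
```

With these couplings $T_c = 3\,T_0$ and $\phi_\text{b}(T_c) = 2AT_c/3\lambda = 4$, so the degeneracy and the sign of the latent heat can be checked at a glance.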
The strength of the transition can be characterised by the ratio of the latent heat to the total radiation density in the high temperature symmetric phase, \begin{equation} \strengthPar{T} = \frac{{\cal L}(T)}{3aT^4}. \label{e:StrParA} \end{equation} The total energy-momentum tensor of the system can be written as \begin{equation} \label{eq:tmunu} T^{\mu\nu} = \partial^\mu \phi \partial^\nu \phi - {\textstyle \frac12} g^{\mu\nu} (\partial\phi)^2 + \left[\epsilon + p \right] U^\mu U^\nu + g^{\mu\nu} p \end{equation} where the metric convention is (-+++). The energy-momentum tensor is conserved, $\partial_\mu T^{\mu\nu} = 0$. The interaction between field gradients and the fluid is introduced by splitting the conservation equation non-uniquely into field and fluid parts, which are then coupled together through a dissipative term proportional to the field gradient: \begin{align} [\partial_\mu T^{\mu\nu}]_{\rm field} &= (\partial_\mu\partial^\mu\phi) \partial^\nu\phi - \frac{\partial V}{\partial\phi} \partial^\nu \phi = \delta^\nu \label{t-munu-1}\\ [\partial_\mu T^{\mu\nu}]_{\rm fluid} &= \partial_\mu[(\epsilon+p)U^\mu U^\nu] - \partial^\nu p + \frac{\partial V}{\partial \phi} \partial^\nu \phi = -\delta^\nu, \label{t-munu-2} \end{align} where the coupling term is \begin{equation} \label{e:CouTer} \delta^\nu = \eta U^\mu \partial_\mu\phi \partial^\nu \phi \end{equation} with $\eta$ an adjustable friction parameter \cite{Ignatius:1993qn}. Equations analogous to Eqs.~(\ref{t-munu-1})-(\ref{t-munu-2}) can, at least in principle, be derived from field theory (see e.g.\ \cite{Moore:1995si,Konstandin:2014zta}), but the simplified model here is adequate for parametrising the entropy production~\cite{KurkiSuonio:1996rk}. From Eqs.~(\ref{t-munu-1}) and~(\ref{t-munu-2}) we can derive the equations of motion in a form suitable for numerical simulation.
For the field we obtain \begin{equation} - \ddot{\phi} + \nabla^2 \phi - \frac{\partial V}{\partial \phi} = \eta W (\dot{\phi} + V^i \partial_i \phi)\,, \label{eqfield} \end{equation} where $W$ is the relativistic $\gamma$-factor and $V^i$ is the fluid 3-velocity, $U^i = WV^i$. For the fluid energy density $E=W\epsilon$, contracting $[\partial_\mu T^{\mu\nu}]_{\rm fluid}$ with $U_\nu$ gives \begin{multline} \dot{E} + \partial_i (E V^i) + p [\dot{W} + \partial_i (W V^i)] - \frac{\partial V}{\partial \phi} W (\dot{\phi} + V^i \partial_i \phi) \\ = \eta W^2 (\dot{\phi} + V^i \partial_i \phi)^2. \label{eqE} \end{multline} Finally, the equations of motion for the fluid momentum density $Z_i = W(\epsilon+p) U_i$ are \begin{equation} \dot{Z}_i + \partial_j(Z_i V^j) + \partial_i p + \frac{\partial V}{\partial \phi} \partial_i \phi = -\eta W (\dot{\phi} + V^j \partial_j \phi)\partial_i \phi. \label{eqZ} \end{equation} The implementation of Eqs.~(\ref{eqfield})-(\ref{eqZ}) on a discrete lattice is described in Section \ref{sect:numerical}. The parameters of the potential in Eq.~(\ref{e:ScaPot}) are related to thermodynamic quantities at the phase transition: the critical temperature $T_c$, latent heat ${\cal L}(\Tc)$, surface tension $\sigma$ and the broken phase correlation length (which is also of order the bubble wall thickness) $\ell$~\cite{Enqvist:1991xw} \begin{align} T^2_c &= \frac{T_0^2}{1-2A^2/(9\lambda\gamma)} \\ {\cal L} &= \frac{4}{9}\frac{A^2\gamma}{\lambda^2} T_0^2 T_c^2 \\ \sigma &= \frac{2\sqrt{2}}{81}\frac{A^3}{\lambda^{5/2}} T_c^3 \\ \ell^2 &= \frac{9\lambda}{2A^2} \frac{1}{T_c^{2}}. \label{e:PhaTraPar} \end{align} Due to supercooling, the phase transition (bubble nucleation) starts at temperature $T_N$, where $T_0 < T_N < T_c$. We are mostly interested in the large supercooling case, where $T_N$ is typically somewhere in the middle between $T_0$ and $T_c$.
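At $T_c$ the potential is a degenerate double well, so $\sigma$ can be computed as the one-dimensional kink integral $\int_0^{\phi_\text{b}}\sqrt{2V}\,d\phi$ and $\ell^{-2}$ as the curvature $\partial^2 V/\partial\phi^2$ at the broken minimum. A minimal numerical check of the closed forms above, with illustrative couplings (not the simulation values):

```python
import math

gam, A, lam, T0 = 0.1, 0.2, 0.1, 1.0     # illustrative couplings
Tc = T0/math.sqrt(1.0 - 2.0*A*A/(9.0*lam*gam))
phib = 2.0*A*Tc/(3.0*lam)                # broken minimum at T = Tc

def V(phi):
    """V(phi, Tc): degenerate double well at the critical temperature."""
    M = gam*(Tc*Tc - T0*T0)
    return 0.5*M*phi**2 - (A*Tc/3.0)*phi**3 + 0.25*lam*phi**4

def d2V(phi):
    M = gam*(Tc*Tc - T0*T0)
    return M - 2.0*A*Tc*phi + 3.0*lam*phi**2

# Surface tension as the kink integral: sigma = int_0^phib sqrt(2V) dphi,
# evaluated with the midpoint rule (max(...) guards tiny fp negatives).
N = 50000
h = phib/N
sigma_num = sum(math.sqrt(2.0*max(V((i + 0.5)*h), 0.0)) for i in range(N))*h

sigma_formula = (2.0*math.sqrt(2.0)/81.0)*(A**3/lam**2.5)*Tc**3
ell2_formula = 9.0*lam/(2.0*A*A*Tc*Tc)

print(sigma_num, sigma_formula)          # should agree
print(1.0/d2V(phib), ell2_formula)       # ell^2 = 1/V''(phi_b)
```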
However, we emphasise that our focus in this work is not the nucleation of critical bubbles, which in a given microscopic theory is a thermal field theory problem and can be studied in perturbation theory or with numerical simulations~\cite{Moore:2000jw}. In our simulations both the density of the initial bubbles and the nucleation temperature $T_N$ are set by hand. \subsection{Bubble nucleation} \label{ss:BubNuc} The phase transition proceeds by the nucleation and growth of bubbles of the broken phase~\cite{Linde:1978px,Linde:1981zj}. Bubble nucleation occurs at an exponentially growing rate per unit volume below the critical temperature~\cite{Hogan:1984hx,Enqvist:1991xw}, \begin{equation} p(t) \simeq \Gamma_0 e^{-S(\tN) + \be(t - \tN)}, \label{e:TunRat} \end{equation} where $-\be$ is the time derivative of the action of the critical bubble $S(t)$, and $\Gamma_0$ is a dimensional prefactor of order $\alpha_W^5 \Tc^4$ \cite{Moore:2000jw}, where $\alpha_W \approx 1/30$. The nucleation time $\tN$ can be defined to be the time at which the nucleation rate reaches one bubble per Hubble volume per Hubble time, or $p(\tN) = H^4(\tN)$. The tunnelling rate parameter $\be$ not only sets the timescale of the transition, but also the average separation between bubbles once the transition has completed, $\Rbc$. Having defined $\Rbc$ to be the inverse cube root of the number density of bubbles, it can be shown that \cite{Enqvist:1991xw} \begin{equation} \Rbc = (8\pi)^{\frac13} \frac{\vw}{\be}. \label{e:RbcBet} \end{equation} Strictly, Eq.~(\ref{e:RbcBet}) applies only for detonations. For deflagrations, one should take into account the suppression of the tunnelling rate ahead of the bubble wall, where the fluid is heated by the release of latent heat. In this case we would expect $\Rbc \sim {\cs}/{\be}$. The important ratio ${\be}/{\Hc}$ (the transition rate relative to the Hubble rate) follows from simple considerations of the temperature of the transition \cite{Hogan:1984hx}. 
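The coefficient in Eq.~(\ref{e:RbcBet}) can be checked by quadrature: with nucleation rate $P(t) \propto e^{\be t}$, the fraction of space still in the symmetric phase follows from the volume swept out by earlier bubbles, and the surviving nucleation rate integrates to a bubble number density $n_\text{b} = \be^3/(8\pi\vw^3)$. A sketch in units $\be = \vw = 1$, with the prefactor absorbed into the origin of time:

```python
import math

# Units: beta = vw = 1; nucleation rate per unit volume P(t) = exp(t).
ds = 1e-3

# Volume swept per bubble nucleated a time s earlier: (4 pi/3) s^3, so the
# exponent needs integral s^3 P(t-s) ds = exp(t) * integral s^3 exp(-s) ds.
c = sum(((i + 0.5)*ds)**3 * math.exp(-(i + 0.5)*ds) for i in range(40000))*ds
# c is Gamma(4) = 6, giving a symmetric-phase fraction
# h(t) = exp(-(4 pi/3) c exp(t)) = exp(-8 pi exp(t)).

# Bubble number density: n_b = integral P(t) h(t) dt.
dt, n_b = 1e-3, 0.0
for i in range(35000):                   # t from -30 to 5
    t = -30.0 + (i + 0.5)*dt
    n_b += math.exp(t)*math.exp(-(4.0*math.pi/3.0)*c*math.exp(t))*dt

R_bc = n_b**(-1.0/3.0)
print(R_bc, (8.0*math.pi)**(1.0/3.0))    # the two should agree
```

The quadrature reproduces $n_\text{b} = \be^3/(8\pi\vw^3)$, i.e.\ the $(8\pi)^{1/3}$ coefficient of Eq.~(\ref{e:RbcBet}).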
One can straightforwardly argue that \begin{equation} S(\tN) \sim 4\ln (m_{\text{P}}/\TN), \end{equation} and that for tunnelling in a thermal effective potential (\ref{e:ScaPot}) \begin{equation} \frac{\be}{\Hc} \simeq \frac{2 S(\tN)}{(1 - \TN/\Tc)}. \end{equation} Hence, for a thermal electroweak-scale transition, the critical bubble action must be $\mathrm{O}(10^2)$, and the ratio $\be/\Hc$ must be at least $\mathrm{O}(10^2)$. A detailed non-perturbative evaluation of the bubble nucleation rate in the standard model electroweak theory is presented in Ref.~\cite{Moore:2000jw}, using an unphysically small Higgs mass in order to ensure a first order phase transition. In this case the critical bubble action was found to be $\approx 90$, and $\be/\Hc \approx 2\times 10^4$. These are expected to be generic numbers for any first order thermal electroweak-scale transition. \section{Theory of GW generation} \label{s:TheGWGen} \subsection{GW power spectrum definition} A gravitational wave is a propagating mode of the transverse and traceless part of the metric perturbation, $h_{ij}$. We are interested in calculating the gravitational wave energy density power spectrum, where the gravitational wave energy-momentum tensor is \begin{equation} T_{\mu\nu}^\text{GW} = \frac{1}{32\pi G}\left< \partial_\mu h_{ij} \partial_\nu h_{ij} \right>. \end{equation} To this end, we define the spectral density of the time derivative of the metric perturbation $\SpecDen{\dot h}(\textbf{k},t)$ by \begin{equation} \vev{\dot{h}_{ij}(\textbf{k},t) \dot{h}_{ij}(\textbf{k}',t) } = \SpecDen{\dot h}(\textbf{k},t) \debar3{\textbf{k}+\textbf{k}'}. \end{equation} The gravitational wave energy density power spectrum is then \begin{equation} \frac{d \rho_\text{GW}}{d \ln(k)} = \frac{1}{32\pi G} \frac{k^3}{2\pi^2} \SpecDen{\dot h}(\textbf{k},t). 
\label{e:GWPowSpeDef} \end{equation} \subsection{GW power spectrum from fluid and field} The source of gravitational waves is the transverse traceless part of the spatial components of the energy-momentum tensor. Given that we will be removing the trace anyway, it suffices to consider a source tensor $\tau_{ij} = \tau^\phi_{ij} + \tau^\text{f}_{ij}$, which is decomposed into fluid and field pieces according to \begin{equation} \label{e:TauDef} \tau^\phi_{ij} = \partial_i \phi \partial_j \phi, \quad\tau^\text{f}_{ij} = W^2 (\epsilon + p)V_i V_j. \end{equation} The physical metric perturbations are recovered in momentum space by applying the projector onto transverse, traceless symmetric rank 2 tensors: \begin{equation} \label{e:ProDef} \lambda_{ij,kl}(\textbf{k}) = P_{ik}(\textbf{k}) P_{jl}(\textbf{k}) - \frac12 P_{ij}(\textbf{k})P_{kl}(\textbf{k}) \end{equation} with \begin{equation} P_{ij}(\textbf{k}) = \delta_{ij} - \hat{k}_i \hat{k}_j. \end{equation} The particular solution for the gravitational wave is therefore \begin{equation} h_{ij} (\mathbf{k},t) = (16\pi G)\lambda_{ij,kl}(\textbf{k}) \int_0^t dt' \frac{\sin[k(t-t')]}{k} \tau_{kl}(\textbf{k},t'), \label{e:GWsol} \end{equation} where we have assumed that the source vanishes for $t'<0$. Using the fact that the fluid shear stress dominates the spatial parts of the energy-momentum tensor, we write \begin{widetext} \begin{eqnarray} \vev{\dot{h}^{ij}_\textbf{k}(t) \dot{h}^{ij}_{\textbf{k}'}(t) } &=& (16\pi G)^2 \int^t_0 dt_1 dt_2 \cos[ k(t - t_1)] \cos[ k(t - t_2)] \lambda_{ij,kl}(\textbf{k})\vev{\tau_\text{f}^{ij}(\textbf{k},t_1)\tau_\text{f}^{kl}(\textbf{k}',t_2)}. 
\end{eqnarray} Introducing the unequal time correlator (UETC) of the fluid shear stress $\Pi^2$ \cite{Caprini:2009fx,Figueroa:2012kw} through \begin{equation} \lambda_{ij,kl}(\textbf{k})\vev{\tau_\text{f}^{ij}(\textbf{k},t_1)\tau_\text{f}^{kl}(\textbf{k}',t_2)} = \Pi^2(k,t_1,t_2) \debar3{\textbf{k} + \textbf{k}'} \end{equation} and averaging over a period $T$, much longer than the periods of the gravitational waves of interest, we can write \begin{equation} \SpecDen{\dot h}(k,t) = (16\pi G)^2 \int^t_0 dt_1 dt_2 \frac{\cos[ k(t_1 - t_2)]}{2} \Pi^2(k,t_1,t_2). \end{equation} \end{widetext} On dimensional grounds, we can write the UETC as \begin{equation} \label{e:UETCmod} \Pi^2(k,t_1,t_2) \simeq [(\bar\epsilon+\bar p)\fluidV^2]^2L_\text{f}^3\tilde\Pi^2 \end{equation} where $\bar\epsilon$ and $\bar{p}$ are the spatially averaged energy density and pressure; $\fluidV$ is the root mean square fluid velocity, defined through \begin{eqnarray} \label{e:fluidVdef} (\bar\epsilon+\bar p) \fluidV^2 &=& \frac{1}{{\mathcal V}}\int_{{\mathcal V}} d^3x\tau^\text{f}_{ii}, \end{eqnarray} where ${\mathcal V}$ is the averaging volume; $L_\text{f}$ is a characteristic length scale in the velocity field; and $\tilde\Pi^2$ is a dimensionless function of $k$, $t_1$ and $t_2$. In Ref.~\cite{Hindmarsh:2013xza} we estimated that $L_\text{f}$ would be the mean bubble separation, but we will not make that assumption yet. We will see that we can understand the numerical results better if we extract the scale directly from the fluid velocity field in the simulations. We also assumed that the UETC would be a function of $t_1 - t_2$ for times between the nucleation time $\tN$ and the lifetime of the velocity perturbations $\tau_\text{v}$, and that there is no separate timescale in the function $\tilde\Pi^2$, apart from that generated from $L_\text{f}$, the speed of sound $\cs$, and the speed of light. 
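As an aside, the projector $\lambda_{ij,kl}$ of Eq.~(\ref{e:ProDef}) entering these correlators is easy to verify numerically: acting on any symmetric tensor it returns a tensor that is traceless and transverse to $\hat{\mathbf{k}}$. A minimal sketch:

```python
import math, random

def tt_project(tau, khat):
    """Apply lambda_{ij,kl} = P_ik P_jl - (1/2) P_ij P_kl to a 3x3 tensor."""
    P = [[(1.0 if i == j else 0.0) - khat[i]*khat[j] for j in range(3)]
         for i in range(3)]
    out = [[0.0]*3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    out[i][j] += (P[i][k]*P[j][l]
                                  - 0.5*P[i][j]*P[k][l])*tau[k][l]
    return out

random.seed(1)
k = [random.gauss(0.0, 1.0) for _ in range(3)]
norm = math.sqrt(sum(c*c for c in k))
khat = [c/norm for c in k]

tau = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(3)]
tau = [[0.5*(tau[i][j] + tau[j][i]) for j in range(3)] for i in range(3)]

hproj = tt_project(tau, khat)
trace = sum(hproj[i][i] for i in range(3))
tmax = max(abs(sum(khat[i]*hproj[i][j] for i in range(3))) for j in range(3))
print(trace, tmax)   # both ~ 0: traceless and transverse
```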
With these assumptions, we can write the dimensionless UETC as a function of $kL_\text{f}$ and $z = k(t_1 - t_2)$, and the spectral density of $\dot h$ becomes \begin{widetext} \begin{equation} \SpecDen{\dot h}(k,t) = \left[16\pi G(\bar\epsilon+\bar p)\fluidV^2\right]^2 t k^{-1} L_\text{f}^3 \int dz \frac{\cos(z)}{2} \tilde\Pi^2(kL_\text{f},z). \label{e:SpeDenExpA} \end{equation} Note that one could follow through the same arguments for the scalar field, which would contribute in exactly an analogous manner \begin{equation} \SpecDen{\dot h}^\phi(k,t) = \left[16\pi G(\bar\epsilon+\bar p)\fieldV^2\right]^2 t k^{-1} L_\phi^3\int dz \frac{\cos(z)}{2} \tilde\Pi^2_\phi(kL_\phi,z), \end{equation} \end{widetext} where \begin{eqnarray} \label{e:fieldVdef} (\bar\epsilon+\bar p) \fieldV^2 &=& \frac{1}{{\mathcal V}}\int_{{\mathcal V}} d^3x\tau^\phi_{ii}, \end{eqnarray} $L_\phi$ is a characteristic scale in the scalar field configuration, and $ \tilde\Pi^2_\phi$ is the dimensionless unequal time correlator of the scalar field shear stress tensor. However, as explained in the introduction, the field contribution is negligible in most phase transitions. Hence, putting together (\ref{e:GWPowSpeDef}) and (\ref{e:SpeDenExpA}), we may write the gravitational wave energy density power spectrum as \begin{equation} \frac{d \rho_\text{GW}}{d \ln(k)} = 8 \pi G\left[(\bar\epsilon+\bar p)\fluidV^2\right]^2 t L_\text{f} \frac{(kL_\text{f})^3}{2\pi^2} \tilde P_{\text{GW}}(kL_\text{f}), \label{e:GWEneDenPowSpe} \end{equation} where \begin{equation} \tilde P_{\text{GW}}(kL_\text{f}) = \frac{1}{kL_\text{f}}\int dz \frac{\cos(z)}{2} \tilde\Pi^2(kL_\text{f},z), \label{e:NoDimSpecDen} \end{equation} is a dimensionless spectral density for the gravitational waves. 
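Moments of $\tilde P_{\text{GW}}$ of this kind are easily evaluated numerically. The sketch below uses a purely illustrative toy shape $\tilde P_{\text{GW}}(x) = e^{-x}$ (not a measured spectrum) and computes the cubic moment $\int (dx/x)\,(x^3/2\pi^2)\,\tilde P_{\text{GW}}(x)$, which controls the total energy density and for this shape equals $1/\pi^2$:

```python
import math

def P_tilde(x):
    """Toy dimensionless GW spectral shape (illustrative only)."""
    return math.exp(-x)

# Cubic moment: int dx/x * x^3/(2 pi^2) * P_tilde(x), with x = k L_f.
dx, moment = 1e-4, 0.0
for i in range(500000):                  # x up to 50
    x = (i + 0.5)*dx
    moment += (x*x/(2.0*math.pi**2))*P_tilde(x)*dx

print(moment)    # for this toy shape: Gamma(3)/(2 pi^2) = 1/pi^2
```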
The total gravitational wave energy density at time $t$ can then be written \begin{eqnarray} \rho_\text{GW} &=& (\bar\epsilon+\bar p)^2 \fluidV^4 (tL_\text{f}) (8\pi G \tilde\Omega_\text{GW}), \label{e:RhoGWdot} \end{eqnarray} where \begin{equation} \tilde\Omega_\text{GW} = \int \frac{dk}{k}\frac{(kL_\text{f})^3}{2\pi^2} \tilde P_{\text{GW}}(kL_\text{f}) \end{equation} is a dimensionless number. We see that the gravitational wave energy density grows linearly with time, for as long as the velocity perturbations are active, with a slope which depends on the square of the enthalpy density, the fourth power of the root mean square fluid velocity, the fluid length scale, and a dimensionless number describing the fluid flow $\tilde\Omega_\text{GW}$. In principle, the value of $\tilde\Omega_\text{GW}$ depends on the parameters of the phase transition in dimensionless combinations, which we can expect to include the bubble wall speed $\vw$ and the latent heat relative to the total energy density $\strengthPar{T}$. In Fig.\ 2 (bottom) of \cite{Hindmarsh:2013xza}, we plotted $\rho_\text{GW}/\left[(\bar\epsilon+\bar p)^2 \fluidV^4 L_\text{f}\right]$ against time. Noting that our code units have $G=1$, the slope of the graph is $8\pi \tilde\Omega_\text{GW}$. We found that $\tilde\Omega_\text{GW}$ was approximately constant, varying by no more than a factor 2, when we took the fluid scale $L_\text{f}$ to be the mean bubble separation $\Rbc = \sqrt[3]{V/N_\text{b}}$. Hence most of the dependence of the gravitational radiation energy density on the phase transition parameters is accounted for by the explicit factors in Eq.~(\ref{e:RhoGWdot}). \subsection{Integral scale} \label{ss:IntSca} The question of which scale to take for $L_\text{f}$ affects the value of $\tilde\Omega_\text{GW}$, and hence its variation between simulations. As mentioned above, in Ref.~\cite{Hindmarsh:2013xza} we took the scale to be $\Rbc$, the average bubble separation.
However, one could equally estimate the length scale from the velocity field itself, and to this end we can use the following quantity (sometimes referred to as the integral scale) \begin{equation} \xi_\text{f} = \frac{1}{\vev{V^2}}\int \frac{d^3k}{(2\pi)^3} |k|^{-1} \SpecDen{V}(k), \end{equation} where $\vev{V^2}$ is the mean square fluid velocity. We will see that when the scale $L_\text{f}$ is chosen to be the integral scale, the variation in the parameter $\tilde\Omega_\text{GW}$ is reduced to about 10\%. This emergence of $\tilde\Omega_\text{GW}$ as a quasi-universal constant for first order phase transitions with $\strengthPar{\TN} \lesssim 0.1$ is an important result. One can also define an integral scale $\xi_\text{GW}$ for the gravitational wave energy density from its spectral density $\tilde P_{\text{GW}}$. We will also confirm that the integral scale of the gravitational radiation is related to the integral scale of the velocity field, as one would expect. \begin{widetext} \subsection{Dimensionless GW power spectrum parameter $\tilde\Omega_\text{GW}$ } It is often useful to express the gravitational wave power spectrum as a fraction of the critical density, $\rho_\text{c} = 3H^2/8\pi G$. Hence we are led to consider a dimensionless gravitational wave power spectrum \begin{equation} \frac{d \Omega_\text{GW}(k,t)}{d \ln(k)} = \left[16\pi G(\bar\epsilon+\bar p)\fluidV^2\right]^2 \frac{t L_\text{f}}{\Hc^2} \frac{(kL_\text{f})^3}{24\pi^2} \tilde P_{\text{GW}}(kL_\text{f}), \end{equation} where $\Hc$ is the Hubble parameter at the time the bubbles are nucleated.
Noting that the critical density is the energy density $\bar\epsilon$, and denoting the lifetime of the source by $\tau_\text{v}$, we find that the dimensionless gravitational wave power spectrum during the radiation era can be written \begin{eqnarray} \label{e:GWPowSpe} \frac{d \Omega_\text{GW}(k)}{d \ln(k)} &=& 3(1+w)^2 \fluidV^4 (\Hc \tau_\text{v}) (\Hc L_\text{f}) \frac{(kL_\text{f})^3}{2\pi^2} \tilde P_{\text{GW}}(kL_\text{f}), \end{eqnarray} \end{widetext} where $w = \bar{p}/\bar{\epsilon}$ is the equation of state parameter. Integrating over wavenumber, we see that the total relative energy density is \begin{equation} \Omega_\text{GW} = 3(1+w)^2 \fluidV^4 (\Hc \tau_\text{v}) (\Hc L_\text{f}) \tilde\Omega_\text{GW}. \label{e:OmgwEqn} \end{equation} \subsection{Source lifetime} \label{ss:LifSouWav} It is clearly important for the calculation of the gravitational wave energy density to calculate the lifetime of the source, the shear stress caused by the sound waves. We show in Appendix \ref{s:GWExpUni} that in an expanding universe, the shear stresses decay and decorrelate in such a way as to make $\tau_\text{v}$ precisely equal to the Hubble time. The shear stresses also decay due to the viscosity of the fluid at a scale-dependent rate. We should therefore estimate on which scales the viscous damping time is smaller than $\tau_\text{v}$. For linear non-relativistic flows induced by sound waves (i.e.\ for velocity fields $V^i_\parallel$ which are purely longitudinal), viscosity adds a term of the form \begin{equation} \left(\frac{4}{3}\etaS + \zetaB\right) \nabla^2 V^i_\parallel \end{equation} to the left hand side of Eq.\ (\ref{eqZ}), where $\etaS$ is the shear viscosity and $\zetaB$ is the bulk viscosity.
For a plasma of relativistic particles in a gauge theory, the bulk viscosity is negligible compared to the shear viscosity~\cite{Arnold:2006fz}, and the shear viscosity can be estimated as \begin{equation} \etaS \sim \frac{T^3}{e^4 \ln(1/e)}, \end{equation} where $e$ is the electromagnetic gauge coupling~\cite{Arnold:2000dr}. Hence velocity perturbations of wavenumber $k$ are damped as $\exp\left[-\frac{4 \etaS k^2 t}{3(\bar\epsilon+\bar p)}\right]$, and the lifetime due to viscous damping of sound waves with wavelength $R$ is \begin{equation} \Tvisc(R) \sim R^2\epsilon/\etaS \sim e^4 \ln(1/e)R^2 T. \end{equation} Therefore, at a transition with temperature just below the critical temperature $\Tc$, the viscous damping lifetime exceeds the Hubble time $\Hc^{-1}$ for all scales \begin{equation} R \gg \frac{\vw}{\Hc}\left(\frac{\sqrt{a}\Tc}{m_{\text{P}} e^4}\right) \sim 10^{-11} \frac{\vw}{\Hc}\left(\frac{\Tc}{100\; \text{GeV}}\right), \end{equation} where we have neglected the logarithm of the gauge coupling. We will see in the next section that the scale of the fluid perturbations is set by the average separation of the nucleating bubbles $\Rbc$, and that the bubble separation at an electroweak-scale phase transition with any interesting degree of supercooling will satisfy this inequality. Hence for a first order transition at the electroweak scale -- or even a few orders of magnitude above -- the lifetime of the source of the gravitational waves is the Hubble time, \begin{equation} \tau_\text{v} = \Hc^{-1} \ll \Tvisc(\Rbc).
\end{equation} \subsection{Comparison to envelope approximation} \label{ss:ComEnvApp} In the envelope approximation, the relative energy density in gravitational waves is given by~\cite{Kamionkowski:1993fg,Huber:2008hg} \begin{equation} \label{e:EnvAppFor} \Omega_\text{GW}^\text{ea} \simeq \frac{0.11 \vw^3}{0.42+\vw^2} \left( \frac{\Hc}{\be}\right)^2 \frac{\kappa^2 \al^2}{(\al+1)^2}, \end{equation} where $\al$ is the ratio between the ``vacuum'' energy (defined below) and the radiation energy density in the symmetric phase, $\kappa$ is the efficiency with which vacuum energy is converted to kinetic energy, and $\be$ is the nucleation rate parameter also defined above. The vacuum energy $V_0$ is defined in Ref.~\cite{Kamionkowski:1993fg} from the trace anomaly, \begin{equation} \theta = \epsilon - 3p, \end{equation} as a quarter of the difference between the symmetric and broken phases: \begin{equation} V_0 = \frac14(\theta_\text{s} - \theta_\text{b}). \end{equation} In our convention, the trace anomaly vanishes in the symmetric phase, and in the broken phase is \begin{equation} \label{e:VacEne} \theta_{\text{b}} = -T \frac{d}{dT}V(\phi_\text{b},T) + 4V(\phi_\text{b},T), \end{equation} where $\phi_\text{b}$ is the value of $\phi$ in equilibrium in the broken phase at temperature $T$. In the conventions of \cite{Kamionkowski:1993fg}, the trace anomaly vanishes in the broken phase, and in the symmetric phase is equal and opposite to (\ref{e:VacEne}). Hence for our thermal potential (\ref{e:ScaPot}) the parameter $\al$ is \begin{equation} \al = \frac{V_0}{3aT^4} = \frac{1}{3aT^4}\left( \frac{1}{4}T \frac{d}{dT}V(\phi_\text{b},T) - V(\phi_\text{b},T) \right). \label{e:StrParB} \end{equation} The efficiency parameter is defined from the average fluid kinetic energy density (\ref{e:fluidVdef}) as \begin{equation} \kappa = \frac{1}{V_0} \frac{1}{{\mathcal V}} \int d^3 x \tau^\text{f}_{ii}. 
\end{equation} Therefore \begin{equation} \label{e:KappaEquiv} (1+w) \fluidV^2 = \frac{\kappa\al}{1+\al}. \end{equation} The factor of $1+\al$ in the denominator of the right hand side comes from the fact that we are dividing by the average total energy density in the symmetric phase, which is $3aT^4 + V_0$ in the conventions of \cite{Kamionkowski:1993fg}. Note that $\kappa\al$ is conventionally estimated analytically from the radial fluid velocity around an isolated expanding bubble $v(r,t)$, where $r$ is the distance from the centre of the bubble, and $t$ is the time since nucleation~\cite{Kamionkowski:1993fg,Espinosa:2010hh}. At large times, the radial fluid velocity is a function of a scaling variable $\xi = r/t$, rather than $r$ and $t$ separately. The ratio of the kinetic energy density to the total energy density can then be estimated as \begin{equation} \kappa\al = \frac{3}{\vw^3 \epsilon} \int d\xi \xi^2 (\epsilon + p) W^2 v^2(\xi). \end{equation} We will compare this estimate to the numerically obtained $(1+w) \fluidV^2$ in the results section, finding good agreement. In order to compare our expression for the gravitational wave energy density (\ref{e:OmgwEqn}) with the envelope approximation formula (\ref{e:EnvAppFor}), we estimate the fluid flow scale $L_\text{f}$ as the bubble separation scale $\Rbc$, which in turn is related to the nucleation rate parameter by Eq.\ (\ref{e:RbcBet}). Hence the ratio between the gravitational wave energy density generated acoustically and in the envelope approximation is\footnote{Note that for a deflagration, if tunnelling is suppressed behind the shock wave, the ratio is boosted by a factor $\sim \cs/\vw$ -- see the discussion after Eq.\ (\ref{e:RbcBet}).} \begin{equation} \frac{\Omega_\text{GW}}{\Omega_\text{GW}^\text{ea}} \simeq \frac{3(8\pi)^{\frac13}(0.42+\vw^2)\tilde\Omega_\text{GW}}{0.11\vw^2}\left({\be}\tau_\text{v}\right).
\label{e:OmGWRat} \end{equation} Given that the ratio (\ref{e:OmGWRat}) is smallest for $\vw=1$, and that the lifetime of the sound waves is approximately $\Hc^{-1}$ (see Section \ref{ss:LifSouWav}), we can estimate that \begin{equation} \frac{\Omega_\text{GW}}{\Omega_\text{GW}^\text{ea}} \gtrsim 110 \tilde\Omega_\text{GW} \frac{\be}{\Hc}. \end{equation} We will see from our numerical simulations that $\tilde\Omega_\text{GW} \sim 0.04$. The ratio ${\be}/{\Hc}$ was discussed in Section~\ref{ss:BubNuc}, and shown to be at least $\mathrm{O}(10^2)$, and possibly significantly greater if there is only small supercooling. We conclude that the energy density in acoustically generated gravitational waves is at least two orders of magnitude greater than the envelope approximation suggests. \section{Numerical simulations} \label{sect:numerical} \subsection{Methods} Our numerical methods are a development of those first used in this context to study the case of isolated bubbles in Ref.~\cite{KurkiSuonio:1995vy}. In that paper a spherically symmetric bubble was assumed. Here we extend those simulations to a full 3+1-dimensional simulation volume. In addition, we couple the linearised stress-energy tensor to perturbations around a flat metric, to measure the gravitational wave power produced by the simulation. \subsubsection{Coupled field-fluid system} \begin{figure}[t] \begin{centering} \includegraphics{space.eps}\\ \vspace{0.5cm} \includegraphics{timespace.eps}\\ \caption{\label{fig:layout} Layout of quantities simulated. The positions of quantities related to simulating an ideal relativistic fluid are standard~\cite{WilsonMatthews}. Because the field and fluid are coupled together, it is important that the scalar field $\phi$ and its conjugate momentum $\pi$ are correctly centred.
We take $\phi$ to reside in zones (like pressure, temperature, etc.), so that no centring is required to compute, for example, the equation of state.} \end{centering} \end{figure} The coupled hydro-scalar equations, outlined above, can be treated quite easily using standard numerical techniques. The scalar field is evolved with the leapfrog (Verlet) algorithm, while standard operator splitting methods are used for the fluid~\cite{WilsonMatthews}. These are equivalent to numerically integrating the equations of motion given above. Although the full details of how to implement relativistic hydrodynamics are beyond the scope of this paper, it is instructive to consider how the quantities are laid out on the lattice both in the spatial and temporal directions (see Fig.~\ref{fig:layout}). Furthermore, for good energy conservation it is essential that the discretised version of the damping term couple the field and fluid quantities at equal times during the simulation. We have tested the results of our simulations against changing timestep (as well as the lattice spacing); see the following section. As our simulations do not run for sufficiently long to develop strong shocks (indeed, we choose our lattice spacing parameters such that the fluid velocity profile is always resolved by several $\delta x$), the simulations presented in this paper do not involve any artificial viscosity. The importance of an artificial viscosity term was previously studied using 1+1-dimensional simulations of two colliding bubbles. \subsubsection{Metric perturbations} Our principal observables are the energy density and power spectrum of the gravitational waves. The goal of our simulations is to compute the power per unit logarithmic frequency interval in gravitational waves $d\rho_\text{GW}(k)/d\ln k$, and the total energy density $\rho_\text{GW}$.
Perturbations of the metric are sourced by the transverse-traceless part of the stress-energy tensor $\Pi_{ij}$: \begin{equation} \ddot{h}_{ij} - \nabla^2 h_{ij} = 16 \pi G \Pi_{ij}. \end{equation} Obtaining $\Pi_{ij}$ from $T_{\mu\nu}$ involves a projection in momentum space. Therefore, evolving $h_{ij}$ (whether in momentum space or position space) would involve Fourier transforms at each timestep. As we go to large volumes, the execution time of fast Fourier transforms (FFTs) scales as $\mathrm{O}(N \log N)$, while there are few optimised FFT codes that offer domain decomposition in more than one dimension. It is therefore vital that the number of steps requiring Fourier transforms be minimised, to yield a scalable simulation. Our approach is to evolve the unprojected equation of motion in real space~\cite{GarciaBellido:2007af} \begin{equation} \label{eq:unprojected} \ddot{u}_{ij} - \nabla^2 u_{ij} = 16 \pi G (\tau^\phi_{ij} + \tau^\text{f}_{ij}), \end{equation} where $u_{ij}$ is an auxiliary tensor and the sources are defined in Eq.~(\ref{e:TauDef}). Only when we wish to recover the metric perturbations $h_{ij}$ do we Fourier transform $u_{ij}$ and project out the transverse-traceless components through \begin{equation} h_{ij} (t,\mathbf{k}) = \lambda_{ij,lm} (\hat{\mathbf{k}}) u_{lm} (t,\mathbf{k}), \end{equation} where the projector is defined in Eq.~(\ref{e:ProDef}). We evolve Eq.~(\ref{eq:unprojected}) using a leapfrog algorithm in a similar manner to the scalar field. Note that we choose the units of the code such that the critical temperature $\Tc = 1$ and the gravitational constant $G=1$. \subsection{Tests} Our basic tests principally involve varying the lattice spacing and timestep independently, on simulations of a single bubble colliding with itself in a small periodic box. These allow us to test that the simulations perform accurately between length scales $1/\Rbc$ and $1/\ell$.
Longer distances do not need to be tested, and in any case $\Rbc$ is set by the box size $L$ in these tests. \subsubsection{Changing the lattice spacing} We performed tests on the effect of changing the lattice spacing using the self-collision of a single bubble in a cubic box, in relatively modest volumes (with parameters given in the following section). We considered $\delta x = 0.5/\Tc$, $\delta x = 1/\Tc$, $\delta x = 2/\Tc$ and $\delta x = 4/\Tc$. We notice no significant difference between these choices until $\delta x = 4/\Tc$. \begin{figure}[t] \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{singlebubble.eps}\\ \caption{\label{fig:singlebubble} Single bubble test simulation, with the correlation length $\ell$ shown as an indication of the wall width. Only the fluid source is shown here; discretisation errors for the field source are the same or smaller. There is good agreement between the scales $1/L$ and $1/\ell$, as desired.} \end{centering} \end{figure} It is worth mentioning that, even for an isolated bubble which would (in the continuum) have vanishing quadrupole moment and hence not source gravitational waves, the lattice discretisation breaks the spherical symmetry and results in a small amount of gravitational wave production. This power goes to zero as $(\delta x)^4$ for both the field and the fluid sources. After collision, however, agreement is very good, with relative differences of at most $7\%$ for $k \lesssim 1/\ell$ between $\delta x=1/\Tc$ and $\delta x=2/\Tc$ for the fluid source. Furthermore, at higher momenta there are only $\mathrm{O}(1)$ differences between these two choices, consistent across seven orders of magnitude. This is surprisingly good given the relatively coarse wall width and the complicated microphysics. Similarly, discrepancies between $\delta x = 0.5/\Tc$ and $\delta x = 1/\Tc$ at late times were at worst $2\%$ for $k \lesssim 1/\ell$; see Figure~\ref{fig:singlebubble}.
Discretisation errors were always less severe for the field source than for the fluid source. In summary, we note no significant sensitivity to lattice spacing so long as it is kept well below the scalar field wall width. While our previous work used simulations with $\delta x = 1/\Tc$, we use a lattice spacing of $\delta x = 2/\Tc$ in the present paper. The inferred discrepancies are demonstrably smaller than $10\%$, and the doubling of the accessible dynamic range that this allows is very useful. \subsubsection{Changing the timestep} With $\delta x=2/\Tc$ having been chosen, we varied the timestep to explore the effect of inaccuracies in our evolution algorithm. As we varied $\delta t$ between $0.2/\Tc$, $0.1/\Tc$ and $0.05/\Tc$ in the same single-bubble tests for the fluid power spectrum described above, there is agreement at the $1\%$ level or better for all $k \lesssim 1/\ell$, and at the $5\%$ level or better up to $k \sim 0.5$ (all points plotted in Fig.~\ref{fig:singlebubble}). We use $\delta t=0.1/\Tc$ for the remainder of the paper, although we could probably have achieved acceptable results with $\delta t=0.2/\Tc$. In the present paper our simulation durations are typically the same order of magnitude as one light-crossing time, and rather less than one sound-crossing time. This means, in particular, that the production of gravitational radiation by acoustic waves (or by scalar radiation, which is in any case heavily damped) is not likely to be affected by signals propagating around the lattice.
No attempt is made in the present paper to look at strong fluid flows or fast `runaway' bubble walls. We leave these harder topics for future work and instead seek to comprehensively explain the generation of gravitational waves by more gentle phase transitions ($\alpha_{T_\mathrm{N}} \lesssim 0.1$). We discuss bubble nucleation further in the next section but note that we nucleate all of our bubbles simultaneously in the present work. Given $\delta x =2/\Tc$ and a simulation size of $2400^3$ points, our physical simulation volume is $(4800/\Tc)^3$ for all the results presented in this paper. \begin{table} \begin{tabular}{l | c | c | c |} & Weak & Weak (scaled) & Intermediate \\ \hline $T_0/\Tc$ & $1/\sqrt{2}$ & $1/\sqrt{2}$ & $1/\sqrt{2}$ \\ $\gamma$ & $1/18$ & $4/18$ & $2/18$ \\ $A$ & $\sqrt{10}/72$ & $\sqrt{10}/9$ & $\sqrt{10}/72$ \\ $\lambda$ & $10/648$ & $160/648$ & $5/648$ \\ ${\cal L}/T_c^4$ & $9/40$ & $9/40$ & $9/5$ \\ $\sigma/T_c^3$ & $1/10$ & $1/20$ & $4\sqrt{2}/10$ \\ $\ell \Tc$ & $6$ & $3$ & $6/\sqrt{2}$ \\ $\TN/\Tc$ & $0.86$ & $0.86$ & $0.8$\\ $\al_{\TN}$ & $0.010$ & $0.010$ & $0.084$ \\ $\al$ & $0.0046$ & $0.0046$ & $0.050$ \\ $\Rc \Tc$ & $ 16$ & $8.1$ & $8.6$ \\ \hline \end{tabular} \caption{Scalar potential parameters (\ref{e:ScaPot}), nucleation temperature $\TN$, phase transition parameters (\ref{e:PhaTraPar}), transition strength parameters (\ref{e:StrParA}) and (\ref{e:StrParB}), and critical bubble radii (\ref{eq:critbub}) for our simulations.} \label{t:SimParsPot} \end{table} \subsection{Initial conditions} \label{sec:initconds} At the start of our simulation, we nucleate a controllable number of bubbles, which was usually $\mathrm{O}(1000)$ (yielding bubbles of average collision radius slightly larger than in Ref.~\cite{Hindmarsh:2013xza}), but was as small as 37 or in one case as large as 32558. These have a Gaussian scalar field profile. 
This profile is initially at rest, meaning that the conjugate momentum and the fluid velocity are both zero in the vicinity of the bubble. We ensure that all the initially nucleated bubbles are well separated at the start of the simulation. For runs with the same number of bubbles but different wall velocities, all bubbles are nucleated at the same positions, but from testing we found that even 37 bubbles was enough to remove any discernible dependence on the initial bubble positions. The critical bubble radius can be computed from the surface tension $\sigma$ and the difference in potential energy at $\TN$ from the thin-wall formula (noting that the potential energy in the symmetric phase is zero) \begin{equation} \label{eq:critbub} \Rc = \frac{2\sigma}{- V(\phi_b,\TN)}. \end{equation} Values of $\Rc$ for our simulations are shown in Table~\ref{t:SimParsPot}. Rather than find the critical bubble profile exactly, we use a spherically symmetric Gaussian field profile \begin{equation} \phi(r) = \phi_\text{b} \exp(-r^2/2\Rc^2). \end{equation} This is rather broad, and therefore sufficiently large compared to the true critical bubble profile to ensure that the bubbles reliably expand despite lattice effects. The bubbles are sufficiently large that they immediately start growing, driven by the pressure difference between the interior and the exterior. The scalar field quickly settles into a kink-like configuration, interpolating between the metastable and stable minima over a distance of order $\ell$, the correlation length of the scalar field (see Table~\ref{t:SimParsPot} for the values taken by the correlation length). For the scalar field dynamics to be valid we must have a lattice spacing that resolves the wall width (see previous section), which places an upper limit on the physical simulation volume possible for a given amount of computer memory. In this paper, the bubbles are nucleated simultaneously.
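A minimal sketch of this initialisation: the thin-wall radius of Eq.~(\ref{eq:critbub}) and a Gaussian seed of width $\Rc$ on a small illustrative grid, using the ``weak'' parameters of Table~\ref{t:SimParsPot} in units $\Tc=1$ (the production runs use $2400^3$ points, and $\phi_\text{b}$ is normalised to 1 here for illustration; the helper names are ours, not those of the production code):

```python
import numpy as np

def critical_radius(sigma, delta_V):
    """Thin-wall estimate R_c = 2 sigma / (-V(phi_b, T_N)); delta_V = -V > 0."""
    return 2.0 * sigma / delta_V

def nucleate_bubble(n, dx, centre, phi_b, R_c):
    """Seed phi(r) = phi_b exp(-r^2 / (2 R_c^2)) about `centre` on an n^3 grid."""
    ax = (np.arange(n) - 0.5 * n) * dx
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = (X - centre[0]) ** 2 + (Y - centre[1]) ** 2 + (Z - centre[2]) ** 2
    return phi_b * np.exp(-r2 / (2.0 * R_c**2))

# "Weak" transition: sigma = 0.1 Tc^3 and R_c = 16/Tc together imply
# -V(phi_b, T_N) = 2 sigma / R_c = 0.0125 Tc^4.
R_c = critical_radius(sigma=0.1, delta_V=0.0125)
phi = nucleate_bubble(n=32, dx=2.0, centre=(0.0, 0.0, 0.0), phi_b=1.0, R_c=R_c)
```

For several well-separated bubbles the profiles can simply be superposed, since they overlap only at negligible amplitude.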
Nucleating at a single time helps to ensure clear scale separation in the limited dynamic range available to our numerical simulations, although it does produce oscillatory patterns in the resulting power spectrum (we cover the case of unequal nucleation times in Appendix~\ref{a:BubNucTim}). We could in principle recover the power spectrum produced by bubbles of all different sizes by a linear superposition of the resulting power spectra, weighted by the bubble size distribution. Once nucleated, the bubbles grow, and the fluid approaches a characteristic radial velocity distribution, which is a function of $\xi=r/t$, where $r$ is the distance from the centre of the bubble, and $t$ is the time since nucleation (see Fig.~\ref{fig:profilecomparison}). The form of this function depends on the bubble wall velocity~\cite{KurkiSuonio:1995vy}, and we will refer to it as the scaling profile. The rate of approach to the fluid scaling profile is generally much slower than the relaxation of the scalar field. In a true electroweak phase transition the bubble size at collision is many orders of magnitude larger than the bubble size at nucleation, giving a lot of time for the radial velocity distribution to reach its asymptotic profile. In our numerical simulations, the ratio of the bubble size at collision, $\Rbc$, to the bubble size at nucleation ($\approx \Rc$) is at most 90 for $N_\text{b}=37$, $\ell=16$ and as small as 9.4 for $N_\text{b}=32558$, $\ell=16$ (our simulation parameters are outlined in Tables \ref{t:SimParsPot} and \ref{t:SimParsRuns}). We should therefore be alert to the fact that the fluid has certainly not settled down to its final scaling profile by the time of collision. One can test for the effect of a non-scaling fluid profile by repeating simulations with fewer bubbles, so that there is a longer time before collision.
We have carried out simulations such that $\Rb$ varies by around a factor of three in the two sets of simulations for which we present plots, and by as much as a factor of ten in our full set of simulations for this paper. \begin{figure} \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{profilecomp.eps} \caption{\label{fig:profilecomparison} Comparison of radial fluid velocity profiles for simulations at approximate collision times for $N_\text{b}=1000$ ($t=500/\Tc$; gray), $N_\text{b}=37$ ($t=1000/\Tc$; black) and at late times (dashed red). } \end{centering} \end{figure} \begin{table} \begin{tabular}{cccrcccc} Type & $\eta/T_c$ & $\vw$ & $N_\text{b}$ & $\fluidVmax$ & $\fluidVmaxperp$ & $\xi_\text{f, end} T_c$ & $8\pi\tilde\Omega_\text{GW}$ \\ \hline Weak & 0.06 & 0.83 & 988 & 0.0052 & 0.00037 & 351 & 0.88 \\ & & & 125 & 0.0052 & 0.00028 & 649 & 0.84 \\ \cline{2-8} & 0.1 & 0.68 & 988 & 0.0084 & 0.00036 & 244 & 0.73 \\ & & & 125 & 0.0082 & 0.00026 & 451 & 0.71 \\ & & & 37 & 0.0080 & 0.00021 & 644 & 0.60 \\ \cline{2-8} & 0.121 & 0.59 & 988 & 0.0116 & 0.00052 & 182 & 0.69 \\ \cline{2-8} & 0.15 & 0.54 & 988 & 0.0102 & 0.00037 & 230 & 0.54 \\ & & & 37 & 0.0120 & 0.00025 & 428 & 0.80 \\ \cline{2-8} & 0.2 & 0.44 & 32558 & 0.0059 & 0.00047 & 136 & 0.97 \\ & & & 988 & 0.0073 & 0.00031 & 368 & 0.70 \\ & & & 125 & 0.0075 & 0.00023 & 613 & 0.86 \\ & & & 37 & 0.0078 & 0.00019 & 942 & 0.70 \\ \cline{2-8} & 0.4 & 0.24 & 988 & 0.0036 & 0.00049 & 756 & 0.86 \\ \hline Wk. (sc.) & 0.4 & 0.44 & 988 & 0.0075 & 0.00029 & 365 & 0.81 \\ \hline Interm. 
& 0.4 & 0.44 & 988 & 0.0595 & 0.00328 & 485 & 1.04 \\ \hline \end{tabular} \caption{\label{t:SimParsRuns} Simulation parameters $\eta$ (field-fluid coupling), $N_\text{b}$ (number of bubbles nucleated), with the resulting bubble wall speed $\vw$, the maximum fluid RMS velocity $\fluidVmax$, the maximum contribution of transverse fluid motion $\fluidVmaxperp$, the integral scale of the fluid $\xi_\text{f, end}$, and the scaled slope parameter for the growth of the gravitational wave energy density $\tilde\Omega_\text{GW}$. The potential parameters and derived quantities for each type -- ``weak'', ``weak scaled'' and ``intermediate'' -- are given in Table~\ref{t:SimParsPot}.} \end{table} \subsection{Scaling} A cosmological first order phase transition is a multiscale problem, with length scales varying from the microscopic ($1/T$, bubble wall thickness) up to the Hubble scale, a range spanning 17 orders of magnitude at the electroweak scale. The typical bubble sizes at collision time are somewhere between these scales, depending on the metastability of the high temperature phase. It is of course impossible to include all of these scales in a single numerical simulation, where scale hierarchies of only order $10^2$ are achievable. To obtain a stable numerical description of the bubble wall, the wall thickness has to span a few lattice units (denoted by $\delta x$). For collisions to occur within the simulation volume, the bubble separation must then be unphysically small. However, this restriction can be relaxed, at least partly: we expect that the dynamics of the bubble growth, collisions and the subsequent generation of gravitational waves are mostly determined by the ``bulk'' thermodynamics ($\epsilon$, $p$, latent heat ${\cal L}$) and the friction parameter $\eta$, but not by microscopic details of the bubble wall (surface tension $\sigma$, wall thickness $\ell$).
Dimensionally, it is clear that the contribution from quantities proportional to bubble volume (e.g. latent heat) will dominate over quantities proportional to the area of the bubbles when the bubble radius is large enough. This motivates us to search for a way to modify the equations of motion (\ref{eqfield})--(\ref{eqZ}) so that we can simulate bubbles which are significantly larger than the microscopic length scale, preserving the bulk thermodynamics of bubble expansion at the cost of possibly sacrificing the detailed properties of the bubble wall. Indeed, this can be achieved with the following simple rescaling of the parameters and fields: \begin{equation} \begin{array}{l} \gamma \rightarrow r^2 \gamma, ~~ A \rightarrow r^3 A, ~~ \lambda \rightarrow r^4 \lambda, ~~ \eta \rightarrow r \eta, \\ \phi(x) \rightarrow r^{-1}\phi(rx), ~~ V^i(x) \rightarrow V^i(rx),\\ E(x) \rightarrow E(rx),~~~~~ Z_i(x) \rightarrow Z_i(rx). \end{array} \label{scaling} \end{equation} Here $r$ is a dimensionless scaling factor, and $x=(\textbf{x},t)$. Clearly, the equations of motion (\ref{eqfield})--(\ref{eqZ}) remain valid. The crucial feature of the scaling is that the potential remains unchanged, $V(\phi,T) = V_{\rm scaled}(r^{-1}\phi,T)$, indicating that the bulk quantities $T_c$, ${\cal L}$ and also $\epsilon(T,\phi)$ and $p(T,\phi)$ remain invariant, as desired. However, the surface tension and wall thickness scale as $\sigma\rightarrow r^{-1}\sigma$ and $\ell\rightarrow r^{-1}\ell$. In effect the scaling stretches the field configuration by a factor of $r^{-1}$ in spatial and temporal directions. We note that in spite of the non-trivial scaling of the friction parameter $\eta$, the total frictional force imparted on the moving bubble wall does not change: it is obtained by integrating the $\eta$-terms in Eqs.~(\ref{eqfield})--(\ref{eqZ}) over the bubble wall thickness, which is scaled by a factor of $r^{-1}$. What does the rescaling gain us?
It is straightforward to see that the lattice implementation of the equations of motion (\ref{eqfield})--(\ref{eqZ}) does not change (in lattice units) under scaling (\ref{scaling}), provided that the lattice spacing is also scaled as $\delta x \rightarrow r^{-1} \, \delta x$. This implies that a single lattice simulation exactly corresponds to a whole family of results, given by the scaling with $r$. All of them have the same bulk thermodynamical properties. Thus, provided that the detailed bubble wall properties are not important for bubble collisions and gravitational wave generation, we can take a simulation run where bubbles have been nucleated at specific locations, and rescale it to the desired physical bubble separation scale. We can test the assumption that the surface properties are not important by comparing results from simulations which differ only in surface tension and wall thickness. This can be achieved by applying the scaling (\ref{scaling}) to the parameters of the theory, but leaving the lattice spacing constant. Parameter sets for unscaled ($r=1$) and scaled ($r=2$) runs are shown in Table \ref{t:SimParsPot}. The surface tension and the bubble wall thickness have been halved in the scaled simulation. The results of the test are shown in Table \ref{t:SimParsRuns}; here the scaled run is done with $\eta/\Tc=0.4$ and 988 bubbles, which can be compared with unscaled $\eta/\Tc=0.2$, 988 bubble results. In both simulations the bubbles are nucleated at identical times and locations. The numerical results match well within uncertainties of the measurements, supporting our assumption that the surface properties of the scalar field profile are unimportant.
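The invariance of the potential under the rescaling (\ref{scaling}) can be checked directly. The quartic form below is an assumption standing in for Eq.~(\ref{e:ScaPot}), which is not reproduced here; the check relies only on each $\phi^n$ coupling being scaled by $r^n$:

```python
import numpy as np

def V(phi, T, gamma, A, lam, T0):
    # generic quartic potential; stands in for Eq. (e:ScaPot)
    return (0.5 * gamma * (T**2 - T0**2) * phi**2
            - A * T * phi**3 / 3.0
            + 0.25 * lam * phi**4)

def rescale(gamma, A, lam, r):
    # gamma -> r^2 gamma, A -> r^3 A, lambda -> r^4 lambda, as in (scaling)
    return r**2 * gamma, r**3 * A, r**4 * lam

# "Weak" parameters of Table t:SimParsPot (units T_c = 1), scaled by r = 2:
gamma, A, lam, T0 = 1.0 / 18, np.sqrt(10) / 72, 10.0 / 648, 1.0 / np.sqrt(2)
g2, A2, l2 = rescale(gamma, A, lam, r=2.0)
# V_scaled(phi / r, T) reproduces V(phi, T) for any phi and T.
```

The rescaled couplings `(g2, A2, l2)` reproduce the ``weak (scaled)'' column of Table~\ref{t:SimParsPot}, and $V_{\rm scaled}(\phi/2, T)$ agrees with $V(\phi, T)$ to machine precision.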
\section{Numerical results} \label{s:NumRes} \begin{figure*}[t] \begin{centering} \includegraphics[height=0.29\textwidth]{slice0500.eps} \hfill \includegraphics[height=0.29\textwidth]{slice1000.eps} \hfill \includegraphics[height=0.29\textwidth]{slice1500.eps} \hfill \includegraphics[height=0.29\textwidth]{legend.eps} \caption{\label{fig:slice} Slices of fluid kinetic energy density $E/\Tc^4$ at $t=500\, T_\mathrm{c}^{-1}$, $t=1000 \,T_\mathrm{c}^{-1}$ and $t=1500\, T_\mathrm{c}^{-1}$ respectively, for the $\eta/T_c=0.15$, $N_\text{b}=988$ simulation.} \end{centering} \end{figure*} In this section we present the main results from a campaign of numerical simulations, whose parameters are given in Table \ref{t:SimParsRuns}. As mentioned before, our simulations were carried out in a volume $(4800/\Tc)^3$. Representative slices through the simulation are shown in Fig.~\ref{fig:slice}. Our main results are derived from the set of simulations with latent heat to thermal energy ratio $\al_{\TN}\simeq 0.01$, which we characterise as a ``weak'' transition. Values of the friction parameter $\eta$ were chosen to give bubble growth proceeding by both detonation and deflagration; one simulation was tuned to the Jouguet case, where the bubble wall moves at the speed of sound. We have one ``intermediate'' strength transition, with $\al_{\TN} \simeq 0.1$, where $\eta$ is chosen to give the same wall speed as the weak transition with $\eta/\Tc = 0.2$. The ``weak scaled'' transition (employing the scaling of the previous section) is discussed later. When plotting graphs, we focus on three representative cases, where the field-fluid coupling is $\eta/T_c=0.1$, $\eta/T_c=0.15$, and $\eta/T_c=0.2$, and the bubble wall speed is supersonic ($\vw = 0.68$), just subsonic ($\vw = 0.54$), and subsonic ($\vw = 0.44$). A complete set of graphs can be found in the supplementary material~\cite{supplementary}.
Our understanding of the transition developed in Section \ref{s:TheGWGen} shows that the important quantities for the overall gravitational wave energy density are the RMS fluid velocity $\fluidV$ and the fluid velocity scale $L_\text{f}$, and that the gravitational wave power spectrum is only indirectly dependent on the strength of the transition and the parameters in the potential. Indeed, the gravitational wave power spectrum should be the same for parameters which give the same $\al_{\TN}\simeq 0.01$ (keeping the wall velocity constant). We can use the ``weak scaled'' run of Table \ref{t:SimParsRuns} to test this statement, where $\al_{\TN}$ is constant but the scalar bubble wall thickness is halved. We test the effect of the strength of the transition with the ``intermediate'' run of Table \ref{t:SimParsRuns}. We track the progress of the transition through the time evolution of the two quantities $\fluidV$ and $\fieldV$ defined in Eqs.\ (\ref{e:fluidVdef}) and~(\ref{e:fieldVdef}). We recall that the squares of these quantities give an estimate of the size of the shear stresses of the field and the fluid relative to the background fluid enthalpy density, and that $\fluidV$ tends to the RMS\ fluid velocity for $\fluidV \ll 1$. We also note that the fraction of the fluid velocity power coming from rotational modes, $\fluidVperp$, is very small, leading us to conclude that rotational fluid modes are not important in this system; we discuss this in more detail in Appendix~\ref{a:TraVelNeg}. \begin{figure} \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{dimless-1000.eps} \includegraphics[width=0.4\textwidth,clip=true]{dimless-37.eps} \caption{Root mean square fluid velocity $\fluidV$ and root mean square scalar gradients $\fieldV$ for $N_\text{b} = 988$ (top row) and $N_\text{b} = 37$ (bottom row).
} \label{fig:timevolV} \end{centering} \end{figure} We see from Fig.~\ref{fig:timevolV} that $\fieldV$ grows and decays with the total surface area of the bubbles of the new phase, while the mean fluid velocity grows with the volume of the bubbles, and then stays constant once the bubbles have merged\footnote{We have no explicit viscosity, and the slight decreasing trend in some measurements of $\fluidV$ arises from the well-known numerical viscosity of donor-cell advection, $\nu_\text{num} \simeq \fluidV \delta x$.}. This allows us to identify distinct phases of the transition: the collision phase, where $\fieldV$ grows and decays; and the subsequent acoustic phase where $\fluidV$ is approximately constant, and $\fieldV$ vanishes. \subsection{Length scales} The analysis of Section \ref{s:TheGWGen} shows that the length scale of the velocity flow is an important determinant of the gravitational wave power spectrum. In Fig.\ \ref{fig:integralscale} we show the integral scales for the velocity and the gravitational radiation for runs at $N_\text{b}=37$ and $N_\text{b}=988$. During the collision phase, the bubbles expand and overlap, and hence the scales of the velocity field and the resulting gravitational radiation grow linearly in time. The gravitational radiation length scale is 2--3 times that of the velocity field. The scale of the velocity field stops growing as the bubbles collide and the scalar field decays to the vacuum, and stays constant during the acoustic phase. The scale imprinted on the gravitational radiation during the acoustic phase is close to that of the velocity field.
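A sketch of how an integral scale can be extracted from a binned power spectrum. The normalisation below, $\xi = \int \mathrm{d}k\, P(k)/k \,\big/ \int \mathrm{d}k\, P(k)$, is one standard choice and is assumed here; the precise definitions of $\xi_\text{f}$ and $\xi_\text{gw}$ were given earlier in the paper and may differ by a constant factor:

```python
import numpy as np

def integral_scale(k, P):
    """Length scale xi = int dk P(k)/k / int dk P(k).

    Assumes a uniform k-grid, so the bin width cancels in the ratio.
    """
    k = np.asarray(k, dtype=float)
    P = np.asarray(P, dtype=float)
    return np.sum(P / k) / np.sum(P)
```

For a spectrum sharply peaked at $k_0$ this returns approximately $1/k_0$, so a peak moving to lower wavenumbers during the collision phase shows up directly as a growing $\xi$.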
\begin{figure}[t] \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{integralscale-1000.eps} \includegraphics[width=0.4\textwidth,clip=true]{integralscale-37.eps} \caption{\label{fig:integralscale} Plot of integral scales $\xi_\text{f}$ and $\xi_\text{gw}$ associated with the fluid and gravitational wave power spectrum for simulations of interest for $N_\text{b} = 988$ (top) and $N_\text{b} = 37$ (bottom). Note the different $y$-axis plotting scales.} \end{centering} \end{figure} \subsection{Velocity profile} Given the discussions on initial bubble sizes in Section~\ref{sec:initconds}, it is important to bear in mind that the bubbles in our simulations expand in size by a factor of only around 10-100, which is many orders of magnitude less than in a real phase transition. One practical effect is that the profile of the velocity field around the bubbles does not reach its asymptotic scaling form, which can be expressed in terms of the previously introduced ratio $\xi = r/t$. In Fig.\ \ref{fig:profilecomparison} we showed the velocity profiles for the weak transition at $\eta/T_c=0.1$, $\eta/T_c=0.15$, and $\eta/T_c=0.2$, after times $t = 500/\Tc$ and $t = 1000/\Tc$. These are approximately when most bubble collisions are happening, in the $N_\text{b} = 1000$ and $N_\text{b} = 37$ runs respectively. We see that, at collision, the velocity profiles are qualitatively similar to their asymptotic forms in amplitude and shape, but differ in detail. In particular, the peak velocities are lower. This is particularly noticeable at the earlier time. We would therefore expect the RMS velocities $\fluidV$ measured in the simulations to be underestimates. As the gravitational wave power spectrum depends on the fourth power of $\fluidV$, this is a significant source of uncertainty in deriving accurate predictions for the gravitational wave power spectrum. \begin{table}[h!] 
\begin{tabular}{ccccc} $\eta/T_c$ & $\vw$ & $\fluidV$ & $\sqrt{\frac{3}{4}\kappa_\text{1d}\al}$ & $\sqrt{\frac{3}{4}\kappa_\text{Esp}\al}$ \\ \hline 0.06 & 0.83 & 0.0052 & 0.0056 & 0.0063 \\ 0.1 & 0.68 & 0.0084 & 0.0085 & 0.0121 \\ 0.121 & 0.59 & 0.0116 & 0.0146 & 0.0192 \\ 0.15 & 0.54 & 0.0102 & 0.0103 & 0.0100 \\ 0.2 & 0.44 & 0.0073 & 0.0066 & 0.0065 \\ 0.4 & 0.24 & 0.0036 & 0.0033 & 0.0036 \\ \hline \end{tabular} \caption{\label{tab:tablethree} Simulation parameters $\eta$ (field-fluid coupling), with the resulting bubble wall speed $\vw$, fluid RMS velocity $\fluidV$, for weak transitions with $N_\text{b}=988$, and the equivalent quantity $\sqrt{3\kappa\al/4}$ appearing in the envelope approximation (see Eq.~(\ref{e:KappaEquiv})). The efficiency parameter $\kappa$ is estimated in two ways: $\kappa_\text{1d}$ is estimated from the numerical spherically symmetric 1D fluid profiles at $t=1000/\Tc$, while $\kappa_\text{Esp}$ comes from the function $\kappa(\vw,\alpha)$ given in the Appendix of Ref.~\cite{Espinosa:2010hh}, using $\vw$ extracted from spherical 1D simulations at $t=1000/\Tc$. } \end{table} These considerations are tested in Table \ref{tab:tablethree}, where we compare our RMS fluid velocity parameter $\fluidV$ with $\sqrt{\frac{3}{4}\kappa\al}$, which should be equal according to the discussion in Section \ref{ss:ComEnvApp}. In the table, the efficiency parameter $\kappa$ is estimated in two ways: $\kappa_\text{1d}$ is estimated from integrating the numerical 1D fluid profiles at $t=1000/\Tc$, while $\kappa_\text{Esp}$ comes from the function $\kappa(\vw,\alpha)$ given in the Appendix of Ref.\ \cite{Espinosa:2010hh}, using $\vw$ extracted from 1D simulations at $t=1000/\Tc$. As can be seen, $\fluidV$ from the 3D simulations compares reasonably well to its estimate extracted from the 1D numerical profiles around the time of bubble collision, while the theoretical values are somewhat higher.
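The relation being tested, $\fluidV^2 \simeq \frac{3}{4}\kappa\al$, can also be inverted to read off the implied efficiency factor from the tabulated values. A small sketch (the function names are ours), using the weak-transition value $\al = 0.0046$ and the first row of Table~\ref{tab:tablethree}:

```python
import numpy as np

def ubar_from_kappa(kappa, alpha):
    # Ubar_f estimate from the relation Ubar_f^2 = (3/4) kappa alpha
    return np.sqrt(0.75 * kappa * alpha)

def kappa_from_ubar(ubar, alpha):
    # inverse relation: efficiency factor implied by a given Ubar_f
    return ubar**2 / (0.75 * alpha)

alpha = 0.0046                             # weak transition
kappa_1d = kappa_from_ubar(0.0056, alpha)  # from the v_w = 0.83 row
# implied kappa is about 0.0091; the corresponding estimate
# Ubar_f = 0.0056 compares with the measured 3D value 0.0052.
```

The few-percent gap between the 1D-profile estimate and the 3D measurement is consistent with the velocity profiles not having fully reached their scaling form at collision.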
It is remarkable that such a simple model for the mean square velocity, which omits all details of the bubble collisions, does so well. \subsection{Power spectra} In Figs.\ \ref{f:VelPSMulti} and \ref{f:GWPSMulti} we show velocity and gravitational wave power spectra at various times through the simulations, for weak transitions with $\eta/T_c=0.1$, $\eta/T_c=0.15$, and $\eta/T_c=0.2$, where the bubble wall speed is supersonic ($\vw = 0.68$), just subsonic ($\vw = 0.54$), and subsonic ($\vw = 0.44$). The same potential and fluid-field parameters are run with $N_\text{b}=988$ and $N_\text{b}=37$ bubbles, to show the effect of allowing a greater time for the fluid velocity around the expanding bubbles to approach their scaling profiles. The power spectra develop in characteristic ways in the different phases of the transition, and one can see that if the simulation is stopped too early, a misleading impression of the power spectrum will be obtained. \begin{figure*}[tb] \begin{centering} \includegraphics[width=0.325\textwidth,clip=true]{velps-0.1-1000.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{velps-0.15-1000.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{velps-0.2-1000.eps}\\ \vspace{2ex} \includegraphics[width=0.325\textwidth,clip=true]{velps-0.1-37.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{velps-0.15-37.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{velps-0.2-37.eps}\\ \caption{Velocity power spectra, for weak transitions, at $\eta/T_c = 0.1$, $0.15$ and $0.2$ ($\vw = 0.68$, $0.54$ and $0.44$) for $N_\text{b} = 988$ (top row) and $N_\text{b}=37$ (bottom row). The large oscillations are due to all the bubbles being nucleated at exactly the same time.
As in Fig.~\ref{f:GWPSMulti}, we note that the scales are standardised for all the plots, but that the phase transition has not necessarily finished by $2500/\Tc$, the time of the latest curve.} \label{f:VelPSMulti} \end{centering} \end{figure*} \begin{figure*}[tb] \begin{centering} \includegraphics[width=0.325\textwidth,clip=true]{gwps-0.1-1000.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{gwps-0.15-1000.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{gwps-0.2-1000.eps}\\ \vspace{2ex} \includegraphics[width=0.325\textwidth,clip=true]{gwps-0.1-37.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{gwps-0.15-37.eps} \hfill \includegraphics[width=0.325\textwidth,clip=true]{gwps-0.2-37.eps}\\ \caption{Gravitational wave power spectra, for weak transitions, at $\eta/T_c = 0.1$, $0.15$ and $0.2$ ($\vw = 0.68$, $0.54$ and $0.44$) for $N_\text{b} = 988$ (top row) and $N_\text{b}=37$ (bottom row). Note that the axes and time intervals are the same for all plots, which means that in some cases the latest ($2500/\Tc$) curve is from before the completion of the phase transition. } \label{f:GWPSMulti} \end{centering} \end{figure*} \subsubsection{Collision phase} Looking first at the velocity power spectra, the most striking feature is their periodic modulation. This is not a physical feature, and is due to the bubbles being nucleated all at the same time. We have checked that spreading the nucleation times reduces this modulation, and it is not expected to be a feature of the velocity power spectrum of a realistic bubble nucleation distribution in the infinite volume limit. In Appendix \ref{a:BubNucTim} we show the effect of allowing nucleation over a time of about $200/\Tc$. Once the fluid shells of the nearest pair of bubbles begin to overlap, gravitational waves are generated, at a scale controlled by the size of the bubbles.
The overlap of the fluid shells is quickly followed by the collisions of the bubble walls, and gravitational radiation is generated by the scalar field as well. The bubbles continue to grow and to collide, and as a result the length scales of the velocity field and the gravitational radiation get larger (see Fig.~\ref{fig:integralscale}). This effect can be seen in the power spectra, where the curves show a peak moving up and to the left with time. In our simulations there is generally more energy in the scalar field than in the fluid to begin with, and so the gravitational radiation from the scalar field dominates the early phases (see Fig.~\ref{f:FluVsAll}). However, when scaled to a real deflagration or detonation in the early universe, most of the latent heat of the transition goes to the fluid, and the radiation from the scalar field can be neglected. It is only in the case of a runaway bubble wall that the scalar field takes most of the latent heat. We discuss the scaling to real transitions in Section \ref{ss:Extrapolate}, and we plan to study runaway transitions elsewhere. \begin{figure} \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{fieldfluidcomparison.eps} \caption{Power spectra for $\eta/T_c=0.2$, comparing fluid-only (dashed) and total (solid) GW power at intervals of $500/\Tc$. The power laws visible in the `total GW power' case are dominated by the gradient energy of the scalar field. This source, however, is short-lived. We conjecture that it can be calculated by means of the envelope approximation.} \label{f:FluVsAll} \end{centering} \end{figure} However, it is interesting to study the difference between a fluid-only gravitational wave power spectrum, and one sourced by both fluid and field (Fig.~\ref{f:FluVsAll}). There one sees evidence of a $k^{-1}$ power spectrum arising from the scalar field during the collision phase (solid lines), which is later dominated by the gravitational waves from the fluid.
As the scalar field energy density is confined to a thin shell, it is reasonable to suppose that its contribution can be adequately computed in the envelope approximation in the collision phase. We will investigate this conjecture elsewhere. \subsubsection{Acoustic phase} Eventually, the low-temperature phase spreads throughout the volume, the scalar field domain walls disappear, and fluid velocity perturbations are left behind. We call this the acoustic phase of the transition, as the fluid perturbations are primarily compressive (longitudinal) modes (see Appendix \ref{a:TraVelNeg}). During the acoustic phase, the length scale of the fluid perturbations and the gravitational waves remains constant. For the simulations with fewer bubbles ($N_\text{b}=37$), we see from the lower row of Fig.~\ref{f:VelPSMulti} that the velocity power spectrum has an approximate power-law envelope beyond the peak. This power-law envelope is also visible at $N_\text{b} = 988$ at $\eta/\Tc=0.2$, where the bubble wall speed is lower. In both cases at $\eta/\Tc=0.2$, the bubbles expand for longer before colliding, and we expect the velocity field to be closer to the asymptotic form. We note that the power law is approximately $k^{-1}$ at $\eta/\Tc=0.2$, and appears steeper for lower couplings. However, we are not confident that we have reached the asymptotic form for bubble wall speeds above $\vw = 0.44$. At low wavenumbers, the velocity power spectrum behaves as a power of $k$, and arguments based on the analyticity properties of the Fourier transform of a longitudinal vector field in Ref.~\cite{Caprini:2007xq} show that it should go as $k^5$. This is just visible in the first few bins of the simulations with $N_\text{b}=988$. Larger simulations are required to properly check the long-wavelength behaviour. We see that the gravitational wave power spectrum grows linearly with time in the acoustic phase, maintaining its shape, except at the lowest wavenumbers.
A power-law behaviour can be seen emerging beyond the peak, especially in the simulations with $N_\text{b} = 37$. The power-law is approximately $k^{-3}$ for the weak deflagrations at $\eta/\Tc=0.2$ and $\eta/\Tc=0.4$ (the power spectra for the latter can be found in the supplementary material for this work) for both $N_\text{b} = 988$ and $N_\text{b} = 37$, which gives us confidence that we are close to the true power law. However, a power-law can be seen only for $N_\text{b} = 37$ for $\eta/\Tc \le 0.15$. Without further simulations at larger $\Rbc$ we cannot properly determine the long-wavelength behaviour of the growing acoustic phase power spectrum in these cases. \begin{figure}[t] \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{samerate.eps}\\ \caption{\label{fig:timevolGW} Time series of $\rho_\text{GW} L_{\mathrm{f}}^{-1} [(\bar \epsilon+ \bar p)^{-2}\fluidV^{-4}]$, showing the evolution of the gravitational wave energy density relative to an estimate of the square of the final fluid shear stresses. We take the fluid length scale $L_\mathrm{f}$ to be the integral scale $\xi_\mathrm{f}$. Some oscillation about the constant curve is caused by long-wavelength sloshing of the fluid or the infrared behaviour of the gravitational wave power, discussed later, but the striking feature is the linearity of the rescaled signal across a factor of three in $R_*$. Only fluid contributions to the gravitational wave power are included here. The steep growth at early times is best explained by the violent behaviour when the two shocks overlap. This phase is not well explained by our random velocity field model. } \end{centering} \end{figure} We argued earlier that the gravitational wave density parameter $\Omega_\text{GW} = \rho_\text{GW}/\bar\epsilon$ is proportional to $L_\text{f}$, the fluid velocity length scale, and to the square of the volume-averaged fluid shear stress, $(\bar\epsilon + \bar p)^2 {\overline U}_\mathrm{f}^4$.
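This rescaling can be sketched numerically. The following is a minimal check with made-up run parameters (not values from the simulations), using the measured $8\pi\tilde\Omega_\text{GW} \approx 0.8$ as the constant of proportionality: dividing each linearly growing signal by its own $L_\text{f}(\bar\epsilon+\bar p)^2\fluidV^4$ collapses runs with different parameters onto a common slope.

```python
import numpy as np

# Schematic check (with made-up run parameters) that dividing a
# linearly growing rho_GW(t) ~ L_f * (eps + p)^2 * Ubar_f^4 * t by
# L_f * (eps + p)^2 * Ubar_f^4 collapses different runs onto a
# common growth rate.
def rho_gw(t, C, L_f, w, Ubar):
    """Linear growth of the acoustically sourced GW energy density."""
    return C * L_f * w**2 * Ubar**4 * t

t = np.linspace(500.0, 2500.0, 50)
C = 0.8 / (8.0 * np.pi)  # using 8*pi*Omega_tilde ~ 0.8 from the text

# two hypothetical runs with different fluid scales and velocities
runs = [dict(L_f=50.0, w=1.33, Ubar=0.05),
        dict(L_f=150.0, w=1.33, Ubar=0.03)]

slopes = []
for p in runs:
    scaled = rho_gw(t, C, **p) / (p["L_f"] * p["w"]**2 * p["Ubar"]**4)
    slopes.append(np.polyfit(t, scaled, 1)[0])

# after rescaling, both runs grow with the same slope C
print(slopes)
```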
We plot the scaled gravitational wave energy density in Figure~\ref{fig:timevolGW}. This plot shows nicely parallel, linear growth of gravitational wave power when rescaled by these quantities at late times. The coincidence of the slopes is greatly improved over the equivalent figure in Ref.~\cite{Hindmarsh:2013xza}, thanks to the larger simulation volumes, longer run times, and above all the replacement of the average bubble separation at collision by the fluid integral scale. The improved coincidence of the slopes is one of the major results of the paper. It establishes the existence of an $\mathrm{O}(1)$ parameter $8\pi\tilde\Omega_\text{GW}$ for a wide range of relevant transitions, and shows that the gravitational wave energy density from a phase transition can be understood in terms of simple features of the velocity field created by the dynamics of the bubble collision. \subsection{Extrapolating to a real phase transition} \label{ss:Extrapolate} Our simulations are necessarily limited in volume, duration, and resolution. We now discuss how they can be extrapolated to the real universe. In particular we would like to extrapolate the gravitational radiation power spectrum, expressed as a fraction of the critical density. There are three physical length scales in the system: the average bubble separation $\Rbc$, the size of the initial bubble of the broken phase $\Rc$, and the bubble wall width $\ell$. They are all set by the dimensional scale of the effective potential, which one can choose to be the critical temperature $\Tc$, and various combinations of the dimensionless couplings $\gamma$, $A$ and $\lambda$. In a real transition, the average bubble separation is much larger than the wall width because of the exponential factor in the tunnelling rate, whose argument is set by the ratio of the energy of the critical bubble (see Ref.~\cite{Kirzhnits:1976ts}) to the critical temperature. This ratio is generally a large number.
There are also two physical time scales to consider: the lifetime of the fluid flow $\tau_\text{v}$, and the duration of the phase transition, which is of order $\be^{-1}$, the inverse of the tunnelling rate parameter~(\ref{e:TunRat}). The duration of the phase transition is also of order $\Rbc/\vw$, the time it takes for bubbles of average separation to collide. Finally, there are also scales set by the background cosmology: the Hubble rate at the phase transition $\Hc$ (and the Hubble length), and the gravitational constant $G$. The Hubble rate, the gravitational constant and the critical temperature $\Tc$ are related via the Friedmann equation. Our simulations are performed in a Minkowski background, as the duration of the transition is assumed to be short compared with the Hubble time. Therefore $\Tc$ and $G$ can be chosen independently. The role of $G$ is purely to set the scale of the gravitational perturbations. As mentioned earlier, we use units $\Tc = 1$ and $G=1$. The observable of interest is the gravitational wave power spectrum, expressed as a fraction of the total density~(\ref{e:GWPowSpe}). It is clear from that formula that the relevant scales are the fluid flow length scale (set by the average bubble separation), the fluid flow lifetime, and the Hubble rate. The power spectrum is determined by the ratios of the fluid flow lifetime to the Hubble time, and the fluid flow scale to the Hubble scale. The role of the bubble wall width is to provide a short-distance cut-off on the power spectrum. A physical transition has $\ell < \Rc \ll \Rbc$, with $\Hc\Rbc$ of order $10^{-2}$ at the electroweak scale. Our simulations assume that the bubble separation is much less than the Hubble length, and that the transition rate is much larger than the Hubble rate, so that expansion can be neglected. The fluid flow lifetime affects only the amplitude of the acoustically generated gravitational waves.
In our simulations one sees that after the transition has completed, the power spectrum grows linearly with time while maintaining its shape. Hence, apart from a trivial scaling, the relevant parameter for the gravitational wave power spectrum is the fluid flow scale, provided that the wall width and the critical bubble size are much less than the bubble separation. The effect of too large a ratio $\ell/\Rbc$ is that there is insufficient dynamic range to observe the power law behaviour of the power spectrum; the effect of too large a ratio $\Rc/\Rbc$ is that there is insufficient time for the fluid flow to approach its asymptotic self-similar profile, which results in too low a value for $\fluidV$. It also tends to obscure the power law behaviour. We have seen in our simulations that the ratio $\ell/\Rbc$ needs to be of order $10^{-3}$ in order to reliably distinguish the power law. Given our computing resources, this means we are not able to determine the shape of the power spectrum at wave numbers much less than the peak. In order to test the approach to physical ratios we should explore a scaling of the parameters which shrinks the ratios $\ell/\Rbc$ and $\Rc/\Rbc$ to zero. Such a scaling was given in Eq.~(\ref{scaling}). Its only effect is to alter the width and surface tension of the bubble wall, and hence shrink the size of the critical bubble and the bubble wall width independently of the bubble separation. We carried out a simulation scaled with $r=2$ (so that the bubble wall was half the width) and parameters given in Table \ref{t:SimParsRuns} corresponding to the deflagration, and compare the resulting gravitational wave power spectra in Figure~\ref{fig:scaledcomp}. The power spectra are substantially similar, but the $k^{-3}$ power law is clearer in the scaled run where $\ell/\Rbc$ is smaller. This is consistent with our discussion above, and lends further confidence to our identification of the index of the power law in this case.
\begin{figure} \begin{centering} \includegraphics[width=0.4\textwidth,clip=true]{scaledcomp.eps} \caption{Power spectra with $N_\text{b}=988$ comparing the weak phase transition parameters and friction $\eta/T_c=0.2$ with the results from an equivalent run with the scaled parameters. For clarity, only the power spectra at the end of the phase transition ($t=2500/\Tc$) are shown. Note the $y$-axis scale is different to that used in Fig.~\ref{f:GWPSMulti}, in order to highlight the differences between the two power spectra.} \label{fig:scaledcomp} \end{centering} \end{figure} Note that the scaling of Eq.~(\ref{scaling}) also reduces the surface tension $\sigma$ as it reduces the bubble wall width, and hence the relative contribution of the scalar field to the total gravitational wave source tensor (\ref{e:TauDef}), as the following argument makes clear. The scalar field's source tensor $\tau^\phi$ is proportional to the product of $\sigma$ with the area per unit volume of the phase boundary, and the area per unit volume is at most of order $1/\Rbc$, which is unaffected by the scaling. Hence $\tau^\phi \to r^{-1}\tau^\phi$. At the same time, the scale of the fluid source tensor $\tau^\text{f}$ is set by the latent heat of the transition ${\cal L}$, which is independent of $r$. Hence the relative importance of the scalar field to the fluid goes as $\ell/\Rbc$ as it decreases towards physical values. This is consistent with the argument given in the introduction that the scalar field contributes negligibly to the gravitational waves, as the ratio of the energy in the scalar field to the energy in the fluid goes as the ratio of the volume in the phase boundary to the total volume, which is $\ell/\Rbc$. This argument assumes that the bubble walls travel at constant speed, so that the effective surface energy is constant. However, if the bubble walls are weakly coupled to the plasma, they can continue to accelerate until they collide~\cite{Bodeker:2009qy}.
In this ``run-away'' scenario, scalar fields can contribute importantly to the gravitational radiation. \section{Discussion and conclusions} In this paper we have reported on new numerical simulations of the production of gravitational radiation at a first order phase transition in the early universe. Following standard methods, we model the contents of the universe as a scalar order parameter coupled to a relativistic fluid, with a thermal effective potential~(\ref{e:ScaPot}) and dissipative coupling~(\ref{e:CouTer}). This model captures the essential physics of the transition, which proceeds by the nucleation and growth of bubbles of the low temperature phase. The most important parameters of the transition are the latent heat density relative to the total energy density $\strengthPar{T}$, which characterises the strength of the transition, the bubble wall velocity $\vw$, which is determined by $\strengthPar{T}$ and the field-fluid coupling $\eta$, and the bubble nucleation rate parameter $\be$, which determines the average bubble separation. Most of our simulations are carried out at $\strengthPar{T} \simeq 0.01$, the order of magnitude expected at an electroweak transition, from which we can extrapolate to other values. We check our extrapolation with a smaller number of simulations at $\strengthPar{T} \simeq 0.1$, and with a scaling argument which changes parameters in the potential without affecting $\strengthPar{T}$. We simulate for a range of phase boundary speeds $\vw$, covering deflagrations and detonations. Instead of fixing the bubble nucleation rate parameter $\be$, we directly fix the average bubble separation $\Rbc$ by nucleating $N_\text{b} = V/\Rbc^3$ bubbles simultaneously. We concentrate on the gravitational waves generated by the fluid motion, as the vast majority of the latent heat of the transition is transformed into thermal and kinetic energy of the fluid. 
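The relation $N_\text{b} = V/\Rbc^3$ between the number of simultaneously nucleated bubbles and the target average separation is simple to check numerically (the box side below is an illustrative value, not the actual lattice size used in the simulations):

```python
def bubbles_for_separation(V, R_bc):
    """Number of simultaneously nucleated bubbles, N_b = V / R_bc^3,
    needed to obtain an average bubble separation R_bc in a volume V."""
    return V / R_bc**3

# illustrative cubic box of side 1000 (in units of 1/Tc; assumed value)
V = 1000.0**3
N_large = bubbles_for_separation(V, 100.0)  # small separation, many bubbles
N_small = bubbles_for_separation(V, 300.0)  # large separation, few bubbles
print(N_large, N_small)
```

Tripling the separation at fixed volume cuts the bubble count by a factor of $27$, which is why the two bubble counts compared in the figures differ so strongly.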
We show that the gravitational wave density parameter~(\ref{e:OmgwEqn}) is proportional to the fourth power of the root mean square fluid velocity, the ratio of the lifetime of the source to the Hubble time, and the ratio of the length scale of the source to the Hubble length. Our results confirm those of a more limited set of simulations reported in Ref.~\cite{Hindmarsh:2013xza}. The fluid kinetic energy is mostly in the form of sound waves generated by the compression or rarefaction of the fluid around the advancing phase boundary. Some rotational flow is generated by the collisions, but at a subdominant level. The sound waves remain for as long as we simulate, long after the phase transition completes. It was shown in Ref.~\cite{Hindmarsh:2013xza} that when viscosity is included the viscous damping time is much longer than the Hubble time for most phase transitions of interest. It was argued that the lifetime of the source, the shear stress generated by the sound waves, is approximately the Hubble time. In this paper we detail the calculation which shows that the lifetime parameter $\tau_\text{v}$, controlled by the decay and decorrelation of the shear stresses, is in fact exactly the Hubble time. The length scale of the source, approximately the average bubble separation in Ref.~\cite{Hindmarsh:2013xza}, is here measured directly from the fluid flow. With this refinement, we show that the proportionality constant $\tilde\Omega_\text{GW}$ in the gravitational wave density parameter equation~(\ref{e:OmgwEqn}) varies little between phase transitions with different strength and bubble wall speeds. Indeed, our measurements show that $8\pi\tilde\Omega_\text{GW} = 0.8 \pm 0.1$, where the uncertainty is the root mean square fluctuation between simulations. Our new simulations are carried out on larger lattices, and give a wider dynamic range between the physical scales set by the average bubble separation and the bubble wall width.
We further widen the dynamic range by nucleating all bubbles at the same time, at the slight cost of introducing ``ringing'' in the velocity power spectrum. With the increased dynamic range we are able to establish clear power laws for both velocity and gravitational wave power spectra between the physical scales. For the transitions with $\vw = 0.44$ or below, they are $k^{-1}$ and $k^{-3}$ respectively, where $k$ is the wave number, and steeper for the transitions with higher bubble wall speeds. In order to discern these power laws, we show it is important that the fluid velocity profile around the advancing bubble wall has sufficient time to approach its asymptotic self-similar form. The $k^{-3}$ (or steeper) power law for gravitational waves contrasts with the prediction of $k^{-1}$ from the standard envelope approximation, which assumes that all the energy in the system is concentrated in a thin shell at the bubble wall, and that the radiation is produced only when the shells interact. We see signs of a $k^{-1}$ power spectrum generated by the scalar field in the initial phase of bubble collision, but this component is subdominant in our simulations, and would be completely negligible when extrapolated to the scale separation in a thermal phase transition. The envelope approximation generically predicts far less gravitational radiation than is actually produced. This under-prediction stems from the incorrect modelling of the source as being the colliding bubble walls. Instead, the main source is the overlapping sound waves which are left behind after the transition has completed. We argued in \cite{Hindmarsh:2013xza} that this means that the gravitational wave energy density is boosted by the ratio of the lifetime parameter of the shear stress to the duration of the collision, which goes parametrically as $(\vw L_\text{f} \Hc)^{-1}$.
In this paper we studied the numerical factor in this ratio by a careful comparison of the quantities in the envelope approximation formula (\ref{e:EnvAppFor}) to the acoustic generation formula (\ref{e:OmgwEqn}). We show that the numerical factor is of order unity, and hence we can confirm that the gravitational wave signal is boosted by the ratio of the Hubble time to the phase transition duration, which is two orders of magnitude or more for a typical first order electroweak transition \cite{Hogan:1984hx,Enqvist:1991xw}. Our simulations shed new light on gravitational waves from phase transitions in the early universe. They show that the envelope approximation needs to be replaced, both as a model and as a formula. Instead, we should model the gravitational wave generation in terms of overlapping sound waves. With this new ``acoustic'' model of gravitational wave generation, we have developed a quantitative understanding of the gravitational wave density parameter (\ref{e:OmgwEqn}), as a function of the mean square fluid velocity and the mean bubble separation. We can estimate the mean fluid velocity from hydrodynamic considerations \cite{Espinosa:2010hh}, and the mean bubble separation from the nucleation rate parameter $\be$ \cite{Enqvist:1991xw}. We have numerically determined that the gravitational wave power spectrum is a power law on the high wavenumber side of the peak, and shown that it is steeper than the $k^{-1}$ indicative of a vacuum transition. Hence potential future observations of such a gravitational wave spectrum will allow us to distinguish between a thermal and a vacuum transition. Much remains to be done. We noted that we need larger simulations to trace out the shape of the power spectrum at wavenumbers lower than the peak value, and to determine the index of the power spectrum for the transitions with faster bubble walls. 
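The size of this boost follows from a one-line estimate. Here $\Hc L_\text{f} \sim 10^{-2}$ is the electroweak-scale figure quoted earlier, while the wall speed is an assumed illustrative value:

```python
def acoustic_boost(vw, Hc_Lf):
    """Ratio of the source lifetime (~ the Hubble time) to the bubble
    collision duration (~ L_f / vw); the acoustic signal is enhanced
    over the envelope estimate by roughly this factor, (vw L_f Hc)^-1."""
    return 1.0 / (vw * Hc_Lf)

# Hc * L_f ~ 1e-2 at the electroweak scale (from the text); vw = 0.5 assumed
boost = acoustic_boost(0.5, 1e-2)
print(boost)  # ~ 200: two orders of magnitude, as stated in the text
```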
They may also help in the search for bubble wall instabilities identified in~\cite{Link:1992dm,Huet:1992ex,Megevand:2013yua}. We also need to develop a theoretical understanding of the shape of the power spectrum, and most importantly to make accurate quantitative predictions for future gravitational wave observatories.
\section{Introduction} \label{section:introduction} X-ray binaries (XRBs) are systems in which a compact object such as a black hole (BH) or neutron star (NS) accretes material from a secondary star. This material is believed to form an accretion disk surrounding the compact object, producing intense X-ray radiation via both blackbody radiation from the inner regions of the disk itself, and Compton up-scattering of lower energy photons from a hot corona. These systems are further subdivided into two classes, based upon their properties. High-mass X-ray binaries have O or B class secondaries, are fairly steady X-ray sources and are thought to have evolved in-situ from a binary system. In contrast, low-mass X-ray binaries (LMXBs) have very faint secondaries, show dramatic changes in X-ray luminosity (X-ray bursts) and their evolutionary path is not clear. They may have evolved, through mass transfer and loss, from a situation where the donor star was more massive \citep{2002ApJ...565.1107P}, or alternatively in dense regions like globular clusters it is possible for a lone NS or BH to capture a low mass companion \citep[e.g.][]{2006csxs.book..341V}. LMXBs are highly variable systems, sometimes exhibiting an accretion disk-like spectrum (referred to as the `high/soft' state) and sometimes a power law spectrum (the `low/hard' state). Sources in the low/hard state usually also show a radio jet which disappears as the source transitions into the high/soft state \citep[][and refs therein]{2004MNRAS.355.1105F}. Accretion disks are predicted to have a flared geometry, meaning that the centrally generated X-rays will illuminate the surface of the disk, heating it and causing the surface layers to evaporate \citep[e.g.][]{shakura_sunyaev,1983ApJ...271...70B,1991ApJ...374..721K,1993ApJ...412..267R,1996ApJ...461..767W, 2002ApJ...581.1297J,2002ApJ...565..455P}, forming a hot disk atmosphere.
The observational signature of this hot atmosphere is both X-ray and UV emission lines \citep[e.g.][]{1984ApJ...278..270B,1993ApJ...412..267R,2005ApJ...625..931J} as the X-rays are reprocessed by not only the surface of the disk, but also the surface of the secondary \citep[e.g.][]{1990A&A...235..162V}. This heating of the secondary has been proposed as a possible mechanism by which mass is transferred from the star to the accretion disk \citep[e.g.][]{1981ApJ...243..970L,1982ApJ...258..260L}. As observations of XRBs have improved, blue shifted absorption lines have been observed in more than a dozen high/soft state NS and BH LMXBs \citep[][and refs therein]{2013AcPol..53..659D}. These provide good evidence of the existence of outflowing material, most likely associated with the accretion disk, although the driving mechanism of these `disk winds' is a subject of ongoing discussion. A recent review of observations of XRBs is given by \cite{2013AcPol..53..659D} and we summarize their data here. They cite outflow velocities between 400 and 3000 $\rm{km~s^{-1}}$ in 30\% of NS systems, and 100 and 1300 $\rm{km~s^{-1}}$ in 85\% of BH systems. Photoionization modeling is usually employed to obtain estimates of the physical state of the absorbing gas, and such analysis gives a column density of between $\rm{4\times10^{22}}$ and $\rm{20\times10^{22}~cm^{-2}}$ and an ionization parameter of $2.5 \leq \log(\xi) \leq 4.5$ for NS LMXBs. The ionization parameter is a measure of the ionization state of the gas, and we use the common definition \begin{equation} \xi=\frac{L_X}{nr^2} \label{equation:xi} \end{equation} where ${L_X}$ is the ionizing luminosity, $n$ is the number density of the gas, and $r$ is the distance between the source of ionizing flux and the gas. BH LMXBs have a wider range of properties, with a column density between $0.5\times10^{20}$ and $\rm{6\times10^{23}~cm^{-2}}$ and an ionization parameter of $1.8 \leq \log(\xi) \leq 6$.
The ionization parameter is degenerate in density and distance, so in order to obtain information about the size/distance of the absorbers, it is necessary to break the degeneracy by measuring the density of the absorbing gas. This has been done for the microquasar GRO J1655-40 (\citealt{2006Natur.441..953M,2008ApJ...680.1359M} but also see \citealt{2006ApJ...652L.117N}) and these measurements suggest a relatively high density ($n\simeq10^{14}~\rm{cm^{-3}}$) which in turn implies a small radius. Those systems exhibiting absorption appear to be generally observed edge on \citep{2012MNRAS.422L..11P}, which implies an equatorial geometry for the absorbing gas. As we will discuss later, this does not necessarily mean that any outflow is also equatorial - it can equally well be a bipolar flow that exhibits stratification in physical properties such as density, ionization parameter or both. Possible mechanisms to drive disk winds are magnetocentrifugal acceleration of gas guided by magnetic fields threading the disk \citep[e.g.][]{blandford_payne_82,1992ApJ...385..460E,2006Natur.441..953M}, radiation pressure acting on electrons or lines \citep[e.g.][]{1980AJ.....85..329I,1993ApJ...409..372S,1985ApJ...294...96S, 2002ApJ...565..455P} or thermal expansion of the hot disk atmosphere as a `thermal wind' when the gas thermal velocity exceeds the local escape velocity \citep{1983ApJ...271...70B,1996ApJ...461..767W}. Whatever the mechanism, these winds are of great interest since they firstly provide a way in which the XRB can interact with its surroundings, and secondly, if the mass flow is large enough, they could be the reason for the observed state change \citep[e.g.][]{2009Natur.458..481N,2012MNRAS.422L..11P, 2013ApJ...762..103K}. The fact that these absorption features appear in high/soft state sources but not in low/hard state sources \citep{2013AcPol..53..659D} is further evidence that they are linked to state change.
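A small worked example of breaking this degeneracy: Equation \ref{equation:xi} can be inverted for the distance once $n$ is known. The luminosity below is an assumed round number (not a measured value), combined with the density $n\simeq10^{14}~\rm{cm^{-3}}$ and $\log(\xi)=4$ typical of the GRO J1655-40 measurements:

```python
import math

def ionization_parameter(L_X, n, r):
    """xi = L_X / (n r^2) in cgs units (erg s^-1, cm^-3, cm)."""
    return L_X / (n * r * r)

def absorber_distance(L_X, n, xi):
    """Invert xi = L_X / (n r^2) for r, once a density measurement
    has broken the n-r degeneracy."""
    return math.sqrt(L_X / (n * xi))

# illustrative values: L_X = 1e37 erg/s (assumed round number),
# n ~ 1e14 cm^-3 and log(xi) = 4 as inferred for GRO J1655-40
r = absorber_distance(1e37, 1e14, 1e4)
print(f"r ~ {r:.1e} cm")  # ~ 3e9 cm: a small radius
```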
In this work, we build upon earlier simulations of thermal disk winds \cite[][hereafter L10]{2010ApJ...719..515L}. L10 modeled the launching of a wind in a system based on GRO J1655-40. In that work, a wind was launched but the velocity was too slow to account for the observed blue shifts of absorption lines and the density in the fastest parts of the wind was lower than the observed values for GRO J1655-40. Although these results suggest that thermal winds are unlikely to be the source of the absorbing material in that system, thermal driving is still an important mechanism and deserving of further investigation. This is because it is almost certain to operate at some level in LMXBs, and even if it is not the principal source of the X-ray absorbing gas, it may well be important in the overall evolution of the system. For example, L10 showed that significant mass would be lost from the disk by thermal winds, and this mass loss is of the same order of magnitude as that which would be expected to destabilize the disk and perhaps drive state change \citep{1986ApJ...306...90S}. In addition, we cannot be sure if the wind in GRO J1655-40 is typical or extreme, and new observations which are likely to come from the AstroH satellite \citep[][and refs therein]{2014arXiv1412.1164D,2014arXiv1412.1173M} may provide more examples. We intend to investigate whether, with modifications to the heating and cooling rates, we can produce a thermally driven wind with physical properties more in line with current observations, and provide a framework which may be of use in understanding future observations. The simulations we present here are all computed in the same way as described in L10, with three modifications. Firstly, we consider only the optically thin case, so the radiation flux at any point in the simulation can be simply computed assuming a $1/r^2$ drop off from a centrally located source of X-rays.
Secondly, we reduce the computational domain size from $20R_{IC}$ in the original simulations to just $2R_{IC}$. $R_{IC}$ is the Compton radius, defined as the location in an accretion disk where the local isothermal sound speed at the Compton temperature, $T_{C}$, of the illuminating spectral energy distribution (SED) equals the local escape velocity. The Compton radius is therefore given by \begin{equation} R_{IC}=\frac{GM_{BH}\mu m_H}{k_BT_C} \end{equation} where $M_{BH}$ is the mass of the central object, equal to 7$M_{\odot}$ for these simulations, $\mu$ is the mean molecular mass which we set to 0.6, and other symbols have the usual meaning. The Compton temperature for the illuminating SED in these simulations is $T_C=1.4\times10^7~\rm{K}$, which gives a Compton radius of $\rm{4.8\times10^{11}~cm}$. In all these simulations, as in L10, the disk is assumed to be flat and thin - defined via a density boundary condition at the midplane. The change in domain size is motivated by preliminary investigations which show that the acceleration zone for the wind was located inside $0.1R_{IC}$, in line with previous work \citep[e.g.][]{1983ApJ...271...70B,1996ApJ...461..767W,2002ApJ...565..455P}. In essence, this is because the most efficient acceleration of gas via thermal expansion occurs as the gas is heated past the lower equilibrium temperature on the thermal equilibrium curve (see Figure \ref{figure:stability_curves} and related discussion in the next section). This occurs at fairly low values of ${\xi}$, which is where the gas is densest, i.e. close to the central object. The gas then enters an unstable heating zone where the next stable temperature is over an order of magnitude higher. Rapid heating occurs, resulting in rapid acceleration as the gas expands. This change to the domain size means that we better simulate the densest parts of the wind where absorption is most likely to occur.
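The quoted Compton radius is easy to verify directly from the formula above (a quick consistency check in cgs units, using standard values for the physical constants):

```python
# Check the quoted Compton radius R_IC = G * M_BH * mu * m_H / (k_B * T_C)
# in cgs units; constants are standard values, inputs are from the text.
G   = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M   = 7 * 1.989e33    # 7 solar masses, g
mu  = 0.6             # mean molecular mass
m_H = 1.6726e-24      # hydrogen mass, g
k_B = 1.3807e-16      # Boltzmann constant, erg/K
T_C = 1.4e7           # Compton temperature, K

R_IC = G * M * mu * m_H / (k_B * T_C)
print(f"R_IC = {R_IC:.2e} cm")  # ~ 4.8e11 cm, as quoted in the text
```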
We neglect the outer parts of the disk, which means that our calculated mass loss rates are lower limits. Finally, but central to this project, we will modify the heating and cooling rates assumed in the heating term in the thermodynamic equations. We examine several cases, each with different rates, and each representing a modification to the thermal equilibrium curve. This changes the way in which the gas passes through the `heating' region, and we will see that this can have a profound impact on the velocity, density and hence mass loss rate of the wind. In the next section, we briefly discuss our methodology. We then discuss the details of the flows produced by the different heating/cooling parameters. Finally, we discuss the relevance of our results to the ongoing discussion of thermal winds in XRBs. \section{Method} \label{section:methods} Our simulations are based upon the simulations presented in L10 (run C8). Fig. \ref{figure:luketic} shows a rerun of that simulation, but computed upon a smaller grid, concentrating on the inner parts of the flow where the acceleration occurs. The wind in this simulation, and all others presented here, is shown at a time of 220 000 s, which represents 25 sound crossing times (${R_{IC}/c_s}$), a time that is sufficient for the wind to have settled down to a steady state. Note that this timescale, about 2.5 days, also gives us an estimate of the lag between a change in the luminosity or SED of the central source of ionizing radiation and the response of the wind through a change in structure. Of course changes in the ionization state of the wind would likely occur more quickly. There are several important features of this model that can be seen in the figure. Firstly, we see a dense, turbulent `corona' at ${\omega<0.3R_{IC},~z<0.2R_{IC}}$ where gas flow is severely inhibited by gravity and reaches steady state only in a time averaged sense.
The streamlines starting just outside this region \emph{can} escape; however, the flow there is still time dependent, and hence the streamlines show `kinks'. Secondly, outside the inner corona, the flow starts off moving vertically, as gas expands away from the accretion disk which lies along the ${\omega}$ axis. The streamlines bend outwards, partly due to the pressure gradient and partly as a result of conservation of angular momentum, which means that the centrifugal force acting on the rotating gas is not balanced by gravity. They are self similar in this region. A fast, fairly dense flow is produced which escapes at angles less than about 45\degree. Finally, we see a fast, low density infall at polar angles. \begin{figure*} \includegraphics{fig1_hdf051dw83.eps} \caption{The density structure of model A (see Table \ref{table:wind_param}). Overplotted are streamlines (grey lines), the 80\degree~and 60\degree~sightlines (dashed lines) and arrows showing the velocity field. Also shown is the location of the M=1 contour (black line). } \label{figure:luketic} \end{figure*} Although the L10 simulation did produce a fast, fairly dense wind, as mentioned in the introduction the density was 2-3 orders of magnitude lower than that inferred from observations of GRO J1655-40. Even so, the total mass loss rate through the wind was seven times the accretion rate. We use the same version of the hydrodynamics code ZEUS-2D \cite{1992ApJS...80..753S} extended by \cite{proga_stone_kallman} to carry out 2.5D simulations of the flow. In this code, radiative heating and cooling of the gas is computed using the parametrized rates first described by \cite{1994ApJ...435..756B}. This parameterization includes Compton heating and cooling, photoionization heating, recombination cooling, bremsstrahlung cooling and collisional (line) cooling as functions of temperature $T$ and ionization parameter ${\xi}$. The ionization parameter is defined as in Equation \ref{equation:xi}.
The number density of the gas is related to the density of the gas by ${n=\rho/\mu m_H}$, where the mean molecular weight ${\mu}$ is set to 0.6. Optical depth effects are not considered in the simulations presented here, so the ionizing flux entering ${\xi}$ is reduced only by geometric dilution, i.e. it scales as $L_X/r^2$. Compton heating and cooling is given by \begin{equation} G_{Compton}=8.9\times10^{-36}\xi(T_X-4T)~\rm{(ergs~s^{-1}~cm^{3})} \end{equation} where ${T_X}$ is the temperature of the illuminating power-law spectrum (set to $5.6\times10^7~\rm{K}$). Photoionization heating and recombination cooling are subsumed into one term, referred to as ``X-ray heating/cooling'', given by \begin{equation} G_X=1.5\times10^{-21}\xi^{1/4}T^{-1/2}(1-T/T_X)~\rm{(ergs~s^{-1}~cm^{3})}, \label{equation:xray} \end{equation} whilst bremsstrahlung cooling is parametrized by \begin{equation} L_b=3.3\times10^{-27}T^{1/2}~\rm{(ergs~s^{-1}~cm^{3})}. \end{equation} Finally, line cooling is given by \begin{equation} L_l=\left[1.7\times10^{-18}\exp{\left(-T_L/T\right)}\xi^{-1}T^{-1/2}+10^{-24}\right]\delta, \end{equation} where ${T_L}$ has the units of temperature and parametrizes the line cooling. It is set to $1.3\times10^5$ K. The ${\delta}$ parameter allows one to reduce the effectiveness of line cooling due to opacity effects. In an optically thin plasma, ${\delta}$ is set to 1. The units of ${L_l}$ are the same as for the other rates. We are interested in investigating whether simple changes to the heating and cooling rates, thereby modifying the thermal equilibrium curve, can increase the velocity and density of the wind to better match observations. To modify these rates, we apply pre-factors to each of the mechanisms, so the equation for the net cooling rate ${\mathcal{L}}$ $\rm{(ergs~s^{-1}~g^{-1})}$, which appears in the energy conservation equation, becomes \begin{equation} \rho\mathcal{L}=n^2(A_CG_{Compton}+A_XG_X-A_lL_l-A_bL_b).
\label{equation:heatcool} \end{equation} The first six lines of Table \ref{table:wind_param} give the values of the pre-factors, ${T_X}$ and ${L_X}$ for the original L10 simulation (run C8, denoted A) and six further simulations. The line cooling pre-factor effectively replaces the ${\delta}$ parameter and therefore represents a measure of line opacity. Modifications of ${A_X}$, the photoionization/recombination rate pre-factor, can be justified by a change to the illuminating SED or metallicity of the gas. Calculations of the precise nature of the connection between these parameters and ${A_X}$ are beyond the scope of this work; our values are not intended to represent a particular case, rather we adjust them to produce the desired thermal equilibrium curve. We change ${A_b}$ somewhat arbitrarily to make the gas thermally stable everywhere. Our aim here was to investigate the relationship between thermal instability (TI) and efficient acceleration. The upper and lower stable temperatures remain the same and so we isolate the effect of instability with this experiment. The quantity $\xi_{cold,max}$ is the value of the ionization parameter at which the flow becomes thermally unstable. \cite{1965ApJ...142..531F} demonstrated that in a non-dynamical flow, subjected to isobaric perturbations, thermodynamic instability results when $\left[\partial \mathcal{L}/\partial T\right]_p>0$. This is also where the gradient ${d\ln(T)/d\ln(\xi)}$ becomes greater than 1. Table \ref{table:wind_param} also gives $T_{eq}$, the equilibrium temperature expected for $\xi_{cold,max}$. It can be shown that if line cooling is balanced by X-ray heating on the cool branch of the stability curve, then this temperature is expected to be ${4/5~T_L}$. This is 104\,000~K and, looking at the values in Table \ref{table:wind_param}, we see that the actual values are very close to this.
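The equilibrium temperature can be checked numerically from the rates above. A minimal sketch, using the parametrized rates exactly as quoted; the bisection bracket and the cool-branch root selection are our choices, and small differences from the tabulated $T_{eq}$ are expected since $\xi_{cold,max}$ is quoted to only two decimal places.

```python
import math

T_L = 1.3e5  # K, characteristic temperature in the line-cooling term

def net_rate(T, xi, T_X=5.6e7, A_C=1.0, A_X=1.0, A_l=1.0, A_b=1.0):
    """Net heating rate (erg s^-1 cm^3), positive = net heating, using the
    parametrized rates quoted in the text (delta is absorbed into A_l)."""
    G_C = 8.9e-36 * xi * (T_X - 4.0 * T)                  # Compton
    G_X = 1.5e-21 * xi**0.25 * T**-0.5 * (1.0 - T / T_X)  # 'X-ray' heating/cooling
    L_b = 3.3e-27 * math.sqrt(T)                          # bremsstrahlung
    L_l = 1.7e-18 * math.exp(-T_L / T) / (xi * math.sqrt(T)) + 1e-24  # line cooling
    return A_C * G_C + A_X * G_X - A_l * L_l - A_b * L_b

def T_eq(xi, T_lo=1e4, T_hi=2e5, **kw):
    """Cool-branch equilibrium temperature by bisection; assumes a single
    sign change of the net rate inside the bracket (true for case A)."""
    for _ in range(100):
        T_mid = 0.5 * (T_lo + T_hi)
        if net_rate(T_mid, xi, **kw) > 0.0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)

# Case A at xi = xi_cold,max = 10**2.10; table value: T_eq = 1.11e5 K,
# and the analytic estimate 4/5 * T_L gives 1.04e5 K
print(f"T_eq = {T_eq(10**2.10):.3g} K")
```

The pre-factor keyword arguments mirror the $A$ parameters of Table \ref{table:wind_param}, so the other cases can be explored by passing, e.g., `A_l=0.2, A_X=4.0` for case B.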
We also include values of $\Xi_{cold,max}$, the ratio of radiation pressure to gas pressure at the point where the flow becomes thermally unstable. This is given by $\Xi=F_{ion}/(nk_BTc)=\xi/(4\pi k_BTc)$ \citep{1981ApJ...249..422K}, where the temperature $T$ is set to the equilibrium temperature at the onset of instability. To produce comparable simulations, we use a density boundary condition to ensure that the ionization parameter is equal to $\xi_{cold,max}$ at the Compton radius. The density at the midplane at the start of our simulations is defined by the equation \begin{equation} \rho(r)=\rho_0\left(\frac{r}{R_{IC}}\right)^{-2}, \end{equation} where ${\rho_0}$ is given by \begin{equation} \rho_0=\frac{L_Xm_H\mu}{\xi_{cold,max}R_{IC}^2}. \end{equation} ${\rho_0}$ is given in the table along with ${R_{IC}}$ for each case. Since the density in the disk is proportional to $1/r^2$, ${\xi}$ is constant in the disk plane; the luminosity is the same for all runs except Ah, in which it is ten times bigger. The hydrodynamic calculations are carried out in a spherical polar coordinate system, running from 0 to 90\degree~in angle, and from ${R_{in}}$ to ${R_{out}}$ in the radial direction. The zone spacing increases with radius, such that ${dr_{k+1}/dr_{k}=1.05}$, giving finer discretization in the inner parts of the flow. The zone spacing reduces with increasing angle, such that ${d\theta_{k+1}/d\theta_{k}=0.95}$, giving more resolution close to the disk. These parameters, together with the number of points used in the two dimensions, are given in Table \ref{table:wind_param}.
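The boundary-condition normalization and the pressure ionization parameter can both be sanity-checked against the tabulated values in Table \ref{table:wind_param}. A minimal sketch for case A, using standard cgs constants:

```python
import math

# Physical constants (cgs)
K_B = 1.380649e-16   # erg/K
M_H = 1.6726e-24     # g
C   = 2.99792458e10  # cm/s

def rho_0(L_X, xi_cold_max, R_IC, mu=0.6):
    """Midplane density normalization: with rho(r) = rho_0 (r/R_IC)^-2,
    xi = L_X / (n r^2) equals xi_cold,max everywhere in the disk plane."""
    return L_X * mu * M_H / (xi_cold_max * R_IC**2)

def big_Xi(xi, T):
    """Pressure ionization parameter Xi = xi / (4 pi k_B T c)."""
    return xi / (4.0 * math.pi * K_B * T * C)

# Case A: L_X = 3.3e37 erg/s, log(xi_cold,max) = 2.10, R_IC = 48.2e10 cm,
# T_eq = 1.11e5 K; table values: rho_0 = 1.14e-12 g/cm^3, log(Xi) = 1.33
print(f"rho_0  = {rho_0(3.3e37, 10**2.10, 48.2e10):.3g} g cm^-3")
print(f"log Xi = {math.log10(big_Xi(10**2.10, 1.11e5)):.2f}")
```

Both quantities come out within rounding of the tabulated case A values.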
\begin{table*} \begin{tabular}{p{6cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}} \hline Pre-factors & A &Ah& B & C & D &E &F\\ \hline \hline $A_l$ & 1.0 &1.0& 0.2 & 1.0 & 1.0 & 1.0 & 0.076\\ $A_C$ & 1.0 &1.0& 1.0 & 1.0 & 1.0 & 1.0 & 1.0\\ $A_b$ & 1.0 &1.0& 1.0 & 3.9 & 1.0 & 1.0 & 1.0\\ $A_X$ & 1.0 &1.0& 4.0 & 1.0 & 1.0 & 1.0 & 1.0\\ \hline \multicolumn{8}{l}{Physical Parameters}\\ \hline $T_X (10^6~$K) & 56 & 56 & 56 &56 & 0.80 & 230 & 295 \\ $L_X (3.3\times10^{37}~\rm{ergs~s^{-1}})$& 1 & 10 & 1 & 1 & 1& 1 & 1\\ $\log(\xi_{cold,max})$& 2.10 & 2.10 & 0.91 & N/A & N/A & 2.07 & 1.2\\ $\log(\Xi_{cold,max})$& 1.33 & 1.33 & 0.17 & N/A & N/A & 1.32 & 0.43\\ $T_{eq}(\xi_{cold,max}) (10^3~\rm{K})$ & 111 & 111 & 106 & N/A & N/A & 109 & 113\\ $\rho_0 (10^{-12}~\rm{g~cm^{-3})}$ & 1.14 & 11.4 & 17.4 & 1.14 & 1.14 & 22.4 & 281 \\ $R_{IC} (10^{10}~$cm) & 48.2 & 48.2 & 48.2 & 48.2 & 3380 & 11.5 & 9.15 \\ \hline \multicolumn{8}{l}{Grid parameters}\\ \hline $R_{min} (10^{10}~$cm) & 2.4 & 2.4 & 2.4 & 2.4 & 2.4 & 1.2 & 1.2 \\ $R_{max} (10^{10}~$cm) & 96& 96& 96& 96& 96& 96& 96\\ $R_{ratio}$ &1.05&1.05&1.05&1.05&1.05&1.05&1.05\\ $N_R$ & 80& 80& 80& 80& 80& 80& 100\\ $\theta_{min}$ & 0.0& 0.0& 0.0& 0.0& 0.0& 0.0& 0.0\\ $\theta_{max}$ & 90.0& 90.0& 90.0& 90.0& 90.0& 90.0& 90.0\\ $\theta_{ratio}$ & 0.95& 0.95& 0.95& 0.95& 0.95& 0.95& 0.95\\ $N_{\theta}$ & 100& 100& 100& 100& 100& 100& 100\\ \hline \multicolumn{8}{l}{Wind properties}\\ \hline $V_r(max~blueshifted)/100 ~\rm{km~s^{-1}}$ & 4.47 & 6.76 & 6.78 & 4.28 & 2.11 & 14.5 & 16.3\\ $V_r(n>10^{12}~\rm{cm^{-3}},max~blueshifted) /100 ~\rm{km~s^{-1}}$ & 1.18 & 1.98 & 3.26 & 0.203 & 0.186 & 5.03 & 2.51\\ $N_H~(60\degree~sightline)(\times10^{22}~\rm{cm^{-2})}$ & 5.71 & 53.0 & 74.2 & 1.01 & 0.281 & 13.3 & 15.0\\ $N_H~(80\degree~sightline)(\times10^{22}~\rm{cm^{-2})}$ & 46.3 & 476 & 441 & 22.6 & 24.1 & 16.6 & 255\\ $\dot{M}_{wind,disk} (\dot{M}_{acc})$ & 3.72 & 3.61 & 41.5 & 0.878 & 0.723 & 4.20 & 25.7\\ $\dot{M}_{wind,outer}(\dot{M}_{acc})$ & 3.45 & 3.36 &
38.3 & 0.638 & 0.758 & 3.85 & 24.9\\ \hline \end{tabular} \caption{The heating and cooling parameters adopted in the simulations, and some key parameters of the resulting winds.} \label{table:wind_param} \end{table*} \begin{figure}[h] \includegraphics{fig2.eps} \caption{Thermal equilibrium curves for the seven cases considered. The solid curves show the equilibrium temperature ($T$) vs ${\xi/T}$. For each case, the crosses represent actual data from the 60\degree~ sightline and the circles are from the 80\degree~ sightline. The size of the symbol shows the radial distance along the sightline, with larger symbols representing the largest distances. The triangles show the point at which the heating curve becomes unstable.} \label{figure:stability_curves} \end{figure} The solid lines on Figure \ref{figure:stability_curves} show the thermal equilibrium temperature predicted for the different cases plotted against ${\xi/T}$, which is proportional to the ratio of radiation to gas pressure. In an outflow, the gas pressure cannot increase along a streamline. This means that ${\xi/T}$ must always increase\footnote{Note that this is in fact only true in the case of weak or zero magnetic fields. If magnetic fields are strong, then the gas pressure \emph{can} increase and the gas can move to the left on the equilibrium curve \citep{1992ApJ...385..460E,2000ApJ...537..134B}}, and so as a parcel of gas is heated, starting at the bottom left of the graph, it will follow these curves until the gradient becomes negative (the onset of TI, if present). This location on the curve represents the maximum temperature of the cold branch. The triangle symbol on the graphs shows this point. At this point, one expects the gas to quickly heat up to the upper equilibrium temperature. This maximizes the rate of energy transfer between the radiation field and the gas, driving expansion and hence acceleration.
Thus it is in the unstable zone where the most `efficient' acceleration of the gas takes place in order to form an outflow. The behavior of the different cases is best explained in the context of the shape of these thermal equilibrium curves. Cases A and Ah have identical thermal equilibrium curves, becoming unstable at the same value of ${\xi}$ and reaching the same upper equilibrium temperature (set by the balance of Compton heating and cooling). The difference in luminosity between the two cases has no effect on the shape of the equilibrium curves; however, as we will see in the next section, it does affect how the gas is heated. Given the same unstable zone, but enhanced radiation density, one would expect case Ah to produce a faster, denser flow \citep[see][]{1983ApJ...271...70B}. Case B has reduced line cooling and enhanced photoionization heating. This means that the gas becomes unstable at a lower ${\xi}$. Changes to the SED have been shown to have this effect \citep[e.g., see][who use detailed photoionization calculations]{2000ApJ...537..134B}, so our approach is not unreasonable. The distance to the radiation source will be largely unchanged, so lower ${\xi}$ equates to higher density, and so one would expect that the resulting outflow would be denser. Cases C and D are both attempts to remove the unstable zone entirely. Case C has strongly enhanced bremsstrahlung cooling, through an increase in ${A_b}$. We did this, somewhat arbitrarily, so that there is no TI for ${10^5~{\rm K}\leq T\leq10^6~{\rm K}}$ but the equilibrium temperatures at the low and high end are the same as in A, Ah and B. Our goal here was to isolate the importance of TI to acceleration of the gas. Case D achieves the same thing by reducing the X-ray temperature. This reduces both the initial photoionization heating rate and the upper equilibrium temperature. Finally, cases E and F represent solutions with two unstable zones. In case E, we simply increase the X-ray temperature.
This does indeed produce two unstable zones; however, the second unstable zone is at a lower pressure than the first. In case F, we increase the heating rate on the lower branch in a similar way to case B, by reducing line cooling. This shifts the lower unstable zone to lower ${\xi}$ but leaves the upper unstable zone unchanged. The aim for these two cases is to see if one can get a faster wind by extending the acceleration zone. It is also interesting to see if gas `collects' at the stable zone between the two unstable ones. \section{Simulation results} \label{section:sim_res} We present the results of our simulations in the context of the thermal equilibrium curves. This allows us to see how the hydrodynamics of the winds affects the thermal balance, and thus gives insight into how the winds are accelerating. The symbols, plotted over the solid lines, on Figure \ref{figure:stability_curves} show the temperature versus ${\xi/T}$ for a range of cells along two sightlines. We can therefore see if the equilibrium temperature is reached in the simulations. We note that there are no points on the lower stable branch of the stability curves for any of the cases. This is by design, since points along the lower branch are essentially in the disk, and merely exhibit turbulent motions. We have carried out detailed resolution studies using 1D simulations which demonstrated that the behavior of the wind in the regions sampled by our sightlines does not depend on resolving the transition from the cool stable branch to the unstable zone. Our limited 2D resolution study confirmed this. Turning our attention to individual cases, we first examine cases A and Ah. The increase in luminosity has had the desired effect, in that all of the points for case Ah lie on the upper branch of the unstable zone, whilst those of case A lie below the curve.
Adiabatic cooling as the hot gas expands holds the temperature below the expected stable temperature. This indicates that more energy has been transferred to the gas in the higher luminosity case. Note, however, that the points from the two simulations occupy remarkably similar locations in ${T-\xi}$ space, given that Ah has an order of magnitude higher luminosity. This is because the density of the wind has increased accordingly, giving a very similar value of the ionization parameter for both flows. Case B has, as expected, produced hotter gas at lower ${\xi}$. However, as with case A, the gas is cooler than the upper stable temperature. This suggests that with a higher luminosity, one could obtain even hotter gas. Interestingly, the 80\degree~points are all very close together, indicating that we are sampling points at the same relative distance along each streamline that the sightline intercepts. In cases C and D, where there is no unstable zone, the points tend to lie along the thermal equilibrium curves, with some exceptions. The innermost 60\degree~sightline data of case C are cooler than expected if radiative processes dominated the heating/cooling. This is because adiabatic expansion is acting as an additional cooling mechanism. This simulation fails to launch a wind in the current simulation domain. Case D also fails to launch a strong wind, and the polar infall which is always seen in these simulations extends to much larger inclination angles. Compression of the gas by this flow at low radii produces some gas that is heated above the upper stable temperature. This occurs for gas with an ionization parameter greater than about 10, and is not shown on Figure \ref{figure:stability_curves}. Case E shows two new effects. Firstly, despite there being two unstable zones, the gas is unable to access the upper unstable zone since this lies at higher pressure. Therefore the gas jumps straight from the lower unstable point towards the upper stable branch.
Adiabatic cooling prevents the gas from reaching the upper branch, and the existence of gas in a formally prohibited part of the thermal equilibrium graph demonstrates that hydrodynamic effects are an important consideration in the thermal balance of this type of wind. It is often assumed that gas will avoid unstable zones \citep[e.g.][and refs therein]{2013MNRAS.436..560C}, providing an explanation of why some ionization states are not seen in observations. This simulation demonstrates that the situation may be more complex. Finally, case F does produce points close to the stable zone between the two instabilities, since that part of the curve is now physically accessible to the flow. In the lower section of Table \ref{table:wind_param} we provide some of the physical properties of the simulated outflows. First, we list the maximum radial velocity seen in each of the simulations. It is of the same order of magnitude (a few hundred kilometers per second) as the blueshifted absorption lines seen in LMXBs \citep[e.g.][and references therein]{2013AcPol..53..659D, 2013AdSpR..52..732N}; however, the velocity of the gas is not the only important factor. The density also matters, since only dense gas will produce observable absorption. The maximum velocity in regions with a particle density greater than $\rm{10^{12}~cm^{-3}}$ is much smaller. This is simply because we are probing regions deeper into the wind, where the flow is still accelerating, and we see that the two stable cases do not produce fast enough gas at high density. Cases B, E and F do show fast moving gas with density above the threshold density. In cases E and F, the fast, dense gas is limited to a very narrow range of angles; in case E it is within 3 degrees of the disk and would therefore probably not be observable. In case F, the fastest dense gas is at small radii around ${\theta=75\degree}$.
Whilst this could in principle be observed, the small angular range over which an absorption feature would appear would likely mean it would be transient. Another important physical parameter that can be derived from observations is the total column density. This is between about $10^{20}-10^{23}~\rm{cm^{-2}}$ for all kinds of LMXBs \citep{2013AcPol..53..659D}. Our simulations produce column densities of the right order of magnitude for equatorial sight lines, and indeed the 80\degree~sightline would be Compton thick in cases Ah and B. Finally, we give the mass loss rate, both leaving the disk ($\dot{M}_{wind,disk}$) and leaving the computational domain ($\dot{M}_{wind,outer}$). These rates are calculated directly from the simulation results, using $\dot{M}=\sum A\rho v$ where $A$ is the area represented by a cell, either on the disk or at the outer boundary, $\rho$ is the density of the cell, and $v$ is either the vertical velocity for disk cells, or the radial velocity for cells at the outer boundary. The summation is carried out over all relevant cells. We report these values in terms of the accretion rate, $\dot{M}_{acc}=4.4\times 10^{17}~\rm{g~s^{-1}}$ (assuming an efficiency of 8.3\%). In L10, model A produced an outflow of about seven times the accretion rate, whereas we see an outflow rate of about half that. This is simply because our simulation has a much reduced radial extent compared to the L10 run. Increasing the luminosity (model Ah) increases the mass loss rate, but the ratio of mass loss to accretion rate remains the same. Cases B and F produce significantly higher mass loss rates, in excess of the threshold of $15\dot{M}_{acc}$ that \cite{1986ApJ...306...90S} demonstrated could induce oscillations in an accretion disk. It should be noted that the winds are emerging from the disk far outside the radius where most of the radiation is produced.
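The mass loss summation above can be sketched directly. The following is a minimal implementation for the outer-boundary rate, assuming axisymmetric spherical annuli for the cell areas and counting only outflowing cells; whether any further geometric factor (e.g. for the unsimulated hemisphere) is applied in the actual pipeline is not specified in the text, and the $\dot{M}_{acc}$ helper simply inverts $L_X=\eta\dot{M}c^2$ with the quoted efficiency.

```python
import numpy as np

C = 2.99792458e10  # cm/s

def mdot_outer(r_out, theta_edges, rho, v_r):
    """Mdot = sum(A * rho * v_r) through the outer radial boundary of an
    axisymmetric (2.5D) grid. theta_edges are cell-edge polar angles in
    radians; rho and v_r are cell values in the outermost radial zone.
    Only outflowing cells (v_r > 0) contribute."""
    area = 2.0 * np.pi * r_out**2 * (np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:]))
    flux = area * rho * v_r
    return flux[flux > 0.0].sum()

def mdot_acc(L_X, eta=0.083):
    """Accretion rate implied by L_X = eta * Mdot_acc * c^2."""
    return L_X / (eta * C**2)

print(f"Mdot_acc = {mdot_acc(3.3e37):.2g} g/s")  # text: 4.4e17 g/s
```

The disk-surface rate follows the same pattern with annular areas in the midplane and the vertical velocity in place of $v_r$.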
Almost all of the ionizing radiation from a thin disk in a system like this one is produced within 100 gravitational radii of the centre. By comparison, the Compton radius is about half a million gravitational radii. Therefore, at least for the purposes of these simulations, the model of a point-like, unvarying source of radiation at the centre of the simulation is not necessarily invalidated by the prediction of large mass losses from the outer parts of the disk, at least over the timescale of the simulation. \section{Synthetic absorption line profiles} \label{section:spectra} It is useful to produce synthetic line profiles for our simulations, in order to get some idea of whether the outflows could, in principle, produce the absorption features observed in XRBs. Computation of the ionization state and level populations of the gas is beyond the scope of this work, and we calculate the absorption using the simplified scheme discussed below. This scheme takes account of thermal line broadening and the Doppler shifting due to the bulk flow of the wind; however, we do not model line emission here. The opacity due to a resonance line (uncorrected for stimulated emission) is given by \begin{equation} \alpha(\nu)=\frac{h\nu}{4\pi}n_1B_{12}\phi(\nu). \end{equation} We arbitrarily set $B_{12}=1$ and use the hydrogen number density for $n_1$. Therefore, our line opacity becomes \begin{equation} \alpha(\nu)=\frac{h\nu}{4\pi}n_H\phi(\nu). \end{equation} For the line shape $\phi(\nu)$ we use a Gaussian profile of the form \begin{equation} \phi(\nu)=\left(\frac{c}{\nu_{0}}\right)\sqrt{\frac{m}{2\pi k_BT}}\exp\left(-\frac{mc^2\left(\nu-\nu_0\right)^2}{2k_BT\nu_0^2}\right). \end{equation} For each radial cell $i$, we compute the line opacity as a function of frequency, and then Doppler shift that line profile to take account of the bulk flow velocity.
We then obtain the total, frequency dependent optical depth by summing up the opacity at each frequency due to each cell of radial thickness $dr$, \begin{equation} \tau(\nu)=\sum_{i=inner}^{i=outer}\alpha_i(\nu)dr. \end{equation} This sum is computed from the innermost radial cell to the outermost, thus making the implicit assumption that the continuum source is point-like and located at the origin. A measure of the absorption profile of a generic line is then computed as \begin{equation} F(\nu)=e^{-\tau(\nu)}. \end{equation} Since no attempt is made to compute the density of any given ionic species, these spectra are in no way accurate representations of what we would expect to observe for a given system. Rather, they are just a means of comparing runs, since each spectrum is calculated in a consistent way. \begin{figure*} \includegraphics{fig3.eps} \caption{Simulated spectra for the 60\degree~(left hand upper panel) and 80\degree~(right hand upper panel) sight lines. Lower panels show ionization parameter vs radial velocity for all cells in the two sight lines. The size of the symbol represents the distance from the central mass (small symbols are for small distances). Crosses represent cells with a number density of less than $\rm{10^{12}~cm^{-3}}$ and filled circles show cells with a density above this threshold. Negative velocities represent motion away from the centre of the simulation, and thus blueshifted absorption. Note that in the bottom right hand panel, the black and blue points overlie one another.} \label{figure:simulated_spectra} \end{figure*} The upper panels of Figure \ref{figure:simulated_spectra} show the results of the line profile calculation for the 60\degree~(left panel) and 80\degree~(right panel) sightlines. We immediately notice that the 80\degree~features are much deeper than those seen at 60\degree. This is simply due to the higher density at the base of the wind.
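The scheme above can be sketched compactly. A minimal implementation follows, assuming the line species has the hydrogen mass (consistent with using $n_H$ for $n_1$) and adopting the convention that positive line-of-sight velocity means motion toward the observer, i.e. blueshifted absorption; the rest frequency and cell values in any usage are purely illustrative.

```python
import numpy as np

# Physical constants (cgs)
H   = 6.62607015e-27  # erg s
C   = 2.99792458e10   # cm/s
K_B = 1.380649e-16    # erg/K
M_H = 1.6726e-24      # g

def absorption_profile(nu, nu0, n_H, T, v_los, dr):
    """F(nu) = exp(-tau(nu)) along a radial sightline, following the scheme
    in the text: B_12 = 1, n_1 = n_H, a normalized thermal Gaussian phi(nu)
    per cell, Doppler shifted by the bulk velocity (v_los > 0 = blueshift)."""
    tau = np.zeros_like(nu)
    for nH_i, T_i, v_i, dr_i in zip(n_H, T, v_los, dr):
        nu_c = nu0 * (1.0 + v_i / C)                     # shifted line centre
        sigma2 = K_B * T_i * nu0**2 / (M_H * C**2)       # thermal variance in nu
        phi = (C / nu0) * np.sqrt(M_H / (2.0 * np.pi * K_B * T_i)) \
              * np.exp(-(nu - nu_c)**2 / (2.0 * sigma2))
        tau += (H * nu / (4.0 * np.pi)) * nH_i * phi * dr_i
    return np.exp(-tau)
```

For a single cell at $10^5$ K moving toward the observer at 100 km s$^{-1}$, the minimum of $F$ sits at the blueshifted line centre and the feature width is purely thermal, mirroring the narrow 80\degree~features discussed in the text.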
This means that the ionization parameter is also generally lower in the 80\degree~sightline, and so it is likely that different species would be observed at the two angles. A general feature of most of the spectra is an absorption feature close to zero velocity, and a second feature at a blueshift that varies from model to model. Although absorption features are seen at zero velocity in observations, it is fair to say that the very strong features we predict here are not commonly observed. We know, however, that including line emission would weaken these absorption features. We are more interested in the second absorption feature, which appears at velocities between about $\rm{-150~km~s^{-1}}$ and $\rm{-800~km~s^{-1}}$. Looking in detail at each of the cases now, we first see that the increased luminosity of case Ah over case A was partially successful, in that the absorption is stronger. This is of course due to the increased density of the outflow. This can be seen clearly in the lower left panel, where the cells in case A have density less than $\rm{10^{12}~cm^{-3}}$ (represented by the cross symbol) whereas the same cells in case Ah have density greater than $\rm{10^{12}~cm^{-3}}$ (circles). However, an increase in density was not our only aim; we also wanted to increase the velocity at which the absorption was seen. In this we have been only partially successful, with the blueshifted absorption feature shifting to only very slightly higher velocities. Case B is perhaps the most successful new model, with blueshifted absorption features seen in both the 60\degree~and 80\degree~sightlines. The blueshifted absorption feature at 80\degree~is deep and its width is due only to thermal broadening. This is because all of the cells producing the feature are at nearly the same velocity.
This is in turn because photons flying along this sightline encounter gas in a very similar physical regime at all radii, showing that, close to the disk at least, the gas is flowing along highly self-similar streamlines in this case. Since the ionization parameter is set to be the same at all radii at the midplane, all gas will start in the same physical state. In case B, the physical state of the gas has evolved similarly at all radii, `remembering' its initial conditions, up to the 80\degree~point. In contrast, by the time it has moved up to the 60\degree~sightline, that `memory' of the starting state has been lost, and gas at different radii is in different physical conditions. Thus the absorption is produced by cells at a range of velocities and so the feature is much shallower but very broad. As already discussed above, cases C and D fail to produce fast outflows. This is clearly shown in the lower panels, where all the cells from these simulations are clustered around zero velocity. These two cases produce relatively narrow features at zero velocity (the temperature of the gas is lower). As shown in Table \ref{table:wind_param}, cases E and F do both produce dense, fast moving material. However, the material in case E is very close to the accretion disk, and is missed by both sight lines shown here. Appearing at angles greater than 87\degree, it is unlikely that it would be observable in any case. Case F does produce fast material at lower ${\theta}$ and this is seen in the spectra as absorption around $\rm{-200~km~s^{-1}}$. Faster gas does exist in the simulation, but only over a very narrow range of angles. \section{Discussion} \label{section:discussion} Our aim was to see if simple, physically motivated changes to the heating and cooling balance of the thermal wind simulated in L10 could produce a wind model that was more in line with observations.
There are three main observational measures: the line velocities, the column density and the density of the line forming region. The wind model described by L10 failed to produce any gas with velocities greater than $\rm{100~km~s^{-1}}$ and a density greater than $\rm{10^{12}~cm^{-3}}$, and we come to a similar conclusion. Since L10 were trying to replicate the properties of the outflow observed in GRO J1655-40, which seems to have a very high density of $\rm{5\times10^{15}~cm^{-3}}$ \citep{2006Natur.441..953M}, the conclusion was that the model failed in that aim. Follow-up work reduced the density estimate to around $\rm{10^{14}~cm^{-3}}$ \citep{2008ApJ...680.1359M,2009ApJ...701..865K}, but this is still well above the densities seen in the L10 model. This failure of a thermal wind model to replicate the velocity and density seen in the observations suggests that, for GRO J1655-40 at least, a thermal wind seems an unlikely source of the observed absorbing gas. Nonetheless, the predicted wind produces a column density in line with observations, and a mass loss rate in excess of the accretion rate. The enhanced luminosity version of the L10 model, case Ah, does increase the velocity and, perhaps more importantly, the density of the wind. A radial velocity of $\rm{200~km~s^{-1}}$ for gas with a number density greater than $\rm{10^{12}~cm^{-3}}$ is still slower and less dense than the measurements of GRO J1655-40 mentioned above, but it is not unreasonable to think that this wind would produce observable features, and further work to characterize the ionization state of this wind, allowing calculation of detailed spectra, would be worthwhile. The ionization parameter for the fastest moving parts of the wind has a narrow range, centered on ${\log{\xi}\sim4}$, and is certainly similar to that inferred for the absorbing gas in many systems.
Case B provides the best illustration of how simple changes to the heating and cooling rates in a simulation can affect the velocity and density of the resulting wind. We reduced line cooling by a factor of 5, and increased the photoionization heating rate by a factor of 4. Both of these changes can be broadly justified: by the effects of line optical depth in the first case, and by changes to the SED and gas metallicity in the second. This simple change made the gas thermally unstable at a ${\xi}$ one order of magnitude lower. The radial location of the unstable gas is largely unchanged, so the change in ${\xi}$ means that \emph{denser} gas is accelerated, producing a denser and faster wind. Although the density and velocity are only a little higher than in case Ah, much more interesting is the huge increase in mass loss rate, now 40 times the accretion rate (even though we only simulate a relatively small domain). This is almost 3 times the rate that \cite{1986ApJ...306...90S} showed would induce instabilities in the disk. Therefore, even if thermal winds are unable to reproduce the observed line absorption seen in XRBs, they may well provide a mechanism for XRB state change, and so searching for an observational signature is a worthwhile exercise. Another interesting result from these simulations is that there is gas with `forbidden' values of ${\xi}$, i.e. from the second, hotter, unstable zone of the stability curve in cases E and F. It has been suggested \citep[e.g.][]{2013MNRAS.436..560C} that species which are expected to have peak abundances in gas with such forbidden ionization parameters would not be seen in observations. Whilst our results do not necessarily disprove such assumptions, they do illustrate that hydrodynamic effects (i.e. adiabatic expansion) make the situation more complex. It is often assumed that disk winds in XRBs are equatorial, because absorption is seen preferentially in sources which exhibit dips \citep{2012MNRAS.422L..11P}.
We find that the wind is in fact bipolar, but the outflow is highly stratified, with the high density region of the wind near the disk. Therefore, in our simulation, absorption is only seen for equatorial sight lines even though the wind flows out over a relatively wide range of angles. Similar results were seen in L10 and by \cite{2012ApJ...758...70G}, who computed line profiles based on L10's as well as other disk wind simulations. This stratification is also important with respect to the observed variability of X-ray absorption in XRBs \citep[][and refs therein]{2013AdSpR..52..732N}. This is very well illustrated by cases E and F, both of which produce absorbing gas in narrow angular ranges. If the illuminating spectrum is variable, the angle at which particular species would be seen could change, thereby making the absorption lines associated with those species vanish. The wind could remain strong in this case, but would need to be detected in different species. Secondly, when we compare cases A and Ah, which differ only in luminosity, we see that the density of the wind solution changes significantly, giving rise to a very similar ionization parameter. This also has relevance to studies of variable sources, where increases in luminosity are sometimes called upon to explain increases in ${\xi}$ and hence the disappearance of some features \citep[e.g.][]{2014A&A...571A..76D}. Our results show that it may be overly simplistic to assume the density remains constant in such cases, and a more detailed investigation is required, taking into account how the wind responds to the increase in luminosity. \section{Future work} The simulations we have presented here use a simplified heating and cooling scheme, which permits swift exploration of parameter space. In addition, radiative transfer through the wind is treated in the optically thin limit.
Previous detailed analyses of such hydrodynamic models \citep{sim_proga_10,2014ApJ...789...19H} have shown that a more thorough treatment of radiative transfer, including scattering, can have a significant effect. We therefore plan to run such simulations on the more successful models from this work (i.e. those that produced high velocity, dense flows). Not only will this work give more information regarding the validity of our modified heating/cooling rates, but it will also produce detailed ionization data for the wind and spectra. It will also provide information regarding the contribution of line emission. \section*{Acknowledgements} This work was supported by NASA under Astrophysics Theory Program grants NNX11AI96G and NNX14AK44G. The authors would like to thank the anonymous referee for very useful comments that have improved the paper. \bibliographystyle{mn2e}
\section*{Appendix} \begin{figure*}[tb] \begin{center} \includegraphics[width=0.90\textwidth]{all_objects.jpg} \end{center} \caption{ \textbf{Examples of objects and object parts} from our dataset of 116 objects and 249 parts. Each image shows the point cloud representation of an object and one of its parts highlighted. } \label{fig:all_objects} \end{figure*} \section{Our Approach} \vspace*{\sectionReduceBot} The intuition for our approach is that many differently-shaped objects share similarly-operated object parts; thus, the manipulation trajectory of an object can be transferred to a completely different object if they share similarly-operated parts. We formulate this problem as a structured prediction problem and introduce a deep learning model that handles three modalities of data and deals with noise in crowd-sourced data. Then, we introduce the crowd-sourcing platform Robobarista to easily scale the collection of manipulation demonstrations to \jae{non-experts} on the web. \vspace*{\subsectionReduceTop} \subsection{Problem Formulation} \vspace*{\subsectionReduceBot} \label{sec:prob_form} The goal is to learn a function $f$ that maps a given pair of a point-cloud $p \in \mathcal{P}$ of an object part and a language instruction $l \in \mathcal{L}$ to a trajectory $\tau \in \mathcal{T}$ that can manipulate the object part as described by \jae{free-form natural language} $l$: $$f : \mathcal{P} \times \mathcal{L} \rightarrow \mathcal{T}$$ \noindent \textbf{Point-cloud Representation.} Each instance of a point-cloud $p \in \mathcal{P}$ is represented as a set of $n$ points in three-dimensional Euclidean space, where each point $(x,y,z)$ is represented with its RGB color $(r,g,b)$: $p = \{p^{(i)} \}^n_{i=1} = {\{(x,y,z,r,g,b)^{(i)}\}}^n_{i=1}$. The size of the set varies \jin{for each instance}. These points are often obtained by stitching together a sequence of sensor data from an RGBD sensor \cite{izadi2011kinectfusion}. 
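Concretely, such a variable-size point-cloud is just a set of 6-vectors. A minimal numpy sketch (the helper name and toy data below are illustrative, not from the paper's code):

```python
import numpy as np

# A point-cloud p = {(x, y, z, r, g, b)^(i)} stored as an (n, 6) array;
# n varies per instance, so no fixed shape is assumed.
def make_point_cloud(points):
    p = np.asarray(points, dtype=float)
    assert p.ndim == 2 and p.shape[1] == 6, "each point is (x, y, z, r, g, b)"
    return p

# Two instances of different sizes, as would be produced e.g. by stitching
# a sequence of RGBD frames.
p1 = make_point_cloud([[0.0, 0.1, 0.2, 255, 0, 0],
                       [0.1, 0.1, 0.2, 250, 5, 5]])
p2 = make_point_cloud(np.random.rand(40, 6))
```

Downstream code then only assumes the second axis (the per-point attributes), never the number of points.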
\noindent \textbf{Trajectory Representation.} \label{sec:trajrep} Each trajectory $\tau \in \mathcal{T}$ is represented as a sequence of $m$ \textit{waypoints}, where each waypoint consists of gripper status $g$, translation $(t_x, t_y, t_z)$, and rotation $(r_x, r_y, r_z, r_w)$ with respect to the origin: $\tau= \{\tau^{(i)}\}^m_{i=1} = {\{(g, t_x, t_y, t_z, r_x, r_y, r_z, r_w)^{(i)}\}}^m_{i=1}$ where $g \in \{\text{``open''},\text{``closed''},\text{``holding''}\}$. $g$ depends on the type of the end-effector, which we have assumed to be a two-fingered gripper like that of PR2 or Baxter. The rotation is represented as quaternions $(r_x, r_y, r_z, r_w)$ instead of the more compact Euler angles to prevent problems such as the gimbal lock \cite{saxena2009learning}. \noindent \textbf{Smooth Trajectory.} To acquire a smooth trajectory from a waypoint-based trajectory $\tau$, we interpolate intermediate waypoints. Translation is linearly interpolated and the quaternion is interpolated using spherical linear interpolation (Slerp) \cite{shoemake1985animating}. \vspace*{\subsectionReduceTop} \subsection{Can transferred trajectories adapt without modification?} \label{sec:transfer_adapt} \vspace*{\subsectionReduceBot} Even if we have a trajectory to transfer, a conceptually transferable trajectory is not necessarily directly compatible if it is represented with respect to an inconsistent reference point. \jin{To make a trajectory compatible with a new situation without modifying the trajectory, we need a representation method for trajectories, based on point-cloud information, that allows a \textit{direct transfer of a trajectory without any modification}.} \noindent \textbf{Challenges.} Making a trajectory compatible when transferred to a different object or to a different instance of the same object without modification can be challenging depending on the representation of trajectories and the variations in the location of the object, given in point-clouds. 
For robots with high degrees of freedom arms, such as the PR2 or Baxter robots, trajectories are commonly represented as a sequence of joint angles (in configuration space) \cite{thrun2005probabilistic}. With such a representation, the robot needs to modify the trajectory for an object with forward and inverse kinematics even for a small change in the object's position and orientation. Thus, trajectories in the configuration space are prone to errors as they are realigned with the object. They can be executed without modification only when the robot is in the exact same position and orientation with respect to the object. One approach that allows execution without modification is representing trajectories with respect to the object by aligning via point-cloud registration (e.g. \cite{forbes2014robot}). However, if the object is large (e.g. a stove) and has many parts (e.g. knobs and handles), then an object-based representation is prone to errors when individual parts have different translations and rotations. This limits transfers to different instances of the same object, and only when the object is small or has a simple structure. \jin{ Lastly, it is even more challenging if two objects require similar trajectories but have \jae{slightly} different shapes, and this is made more difficult by limitations of the point-cloud data. As shown on the left of Fig.~\ref{fig:espresso_transfer}, the point-cloud data, even when stitched from multiple angles, are very noisy compared to the RGB images. } \noindent \textbf{Our Solution.} Transferred trajectories become compatible across different objects when trajectories are represented 1) in the task space rather than the configuration space, and 2) in the principal-axis based coordinate frame of the object \textit{part} rather than the robot or the object. Trajectories can be represented in the task space by recording only the position and orientation of the end-effector. 
By doing so, we can focus on the actual interaction between the robot and the environment rather than the movement of the arm. It is very rare that the arm configuration affects the completion of the task as long as there is no collision. With the trajectory represented as a sequence of gripper positions and orientations, the robot can find an arm configuration that is collision-free with the environment using inverse kinematics. However, representing the trajectory in the task space is not enough to make transfers compatible. It has to be in a common coordinate frame regardless of the object's orientation and shape. Thus, we align the negative $z$-axis along gravity and align the $x$-axis along the principal axis of the object \textit{part} using PCA \cite{hsiao2010contact}. With this representation, even when the object part's position and orientation changes, the trajectory does not need to change. The underlying assumption is that similarly operated object parts share similar shapes, leading to \jin{a similar direction in their principal axes}. \subsection{Baselines} We compared our model against several baselines: \noindent 1) \textit{Random Transfers (chance)}: Trajectories are selected at random from the set of trajectories in the training set. \noindent 2) \textit{Object Part Classifier}: \jae{To} test our hypothesis that the intermediate step of \jae{classifying} the object part does not guarantee successful transfers, we built an object part classifier using a multiclass SVM \cite{tsochantaridis2004support} on point-cloud features including local shape features \cite{koppula2011semantic}, histogram of curvatures \cite{Rusu_ICRA2011_PCL}, and distribution of points. Once classified, the nearest neighbor among the same object part class is selected for transfer. 
\noindent 3) \textit{Structured support vector machine (SSVM)}: \jae{It is a standard practice to hand-code features for SSVM \cite{tsochantaridis2005large}, which is solved with the cutting plane method \cite{joachims2009cutting}. We used our loss function (Sec.~\ref{sec:metric}) to train and experimented with many state-of-the-art features.} \noindent 4) \textit{Latent Structured SVM (LSSVM) + kinematic structure}: The way an object is manipulated depends on its internal structure: whether it has a revolute, prismatic, or fixed joint. Borrowing from \citet{sturm2011probabilistic}, we encode the joint type, the center of the joint, and the axis of the joint as the latent variable $h \in \mathcal{H}$ in a Latent SSVM \cite{yu2009learning}. \noindent 5) \textit{Task-Similarity Transfers + random}: It finds the most similar training task using $(p, l)$ and transfers \jin{any} one of the trajectories from the most similar task. The pair-wise similarities between the test case and every task of the training examples are computed by the average mutual point-wise distance \jin{of two point-clouds} after ICP \cite{besl1992method} and by \jf{similarity in bag-of-words representations of language}. 
\noindent 6) \textit{Task-similarity Transfers + weighting}: \jf{The previous method is problematic when non-expert demonstrations for the same task have varying qualities.} \citet{forbes2014robot} introduces a score function for weighting demonstrations based on the weighted distance to the ``seed'' (expert) demonstration. Adapting to our scenario of not having any expert demonstration, we select the $\tau$ that has the lowest average distance from all other demonstrations for the same task (similar to the noise handling of Sec.~\ref{sec:deep}). \noindent 7) \textit{Our model without Multi-modal Layer}: This deep learning model concatenates the inputs of all three modalities and learns three hidden layers before the final layer. \noindent 8) \textit{Our model without Noise Handling}: Our model is trained without noise handling. All of the trajectories collected from the crowd were trusted as ground-truth labels. \noindent 9) \textit{Our model with Experts}: Our model is trained using \jin{trajectory demonstrations from an expert}, which were collected for evaluation purposes. \section{Deep Learning for Manipulation Trajectory Transfer} \label{sec:deep} \vspace*{\sectionReduceBot} We use deep learning to find the most appropriate trajectory for the given point-cloud and natural language. Deep learning is mostly used for binary or multi-class classification or regression problems \cite{bengio2013representation} with uni-modal input. We introduce a deep learning model that can handle three completely different modalities of point-cloud, language, and trajectory and solve a structured problem with a large amount of label noise. The original structured prediction problem ($f : \mathcal{P} \times \mathcal{L} \rightarrow \mathcal{T}$) is converted to a binary classification problem ($f : (\mathcal{P} \times \mathcal{L}) \times \mathcal{T} \rightarrow \{0, 1\}$). 
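This conversion can be sketched as follows: a binary scorer stands in for the learned model, and the structured map $f$ is recovered by an argmax over a finite set of candidate trajectories. The scorer and the toy feature encodings below are hypothetical stand-ins, not the paper's trained network:

```python
import numpy as np

def score(p_l, tau):
    # Hypothetical stand-in for the learned P(y=1 | (p, l), tau);
    # any function returning a match score in [0, 1] fits the interface.
    return 1.0 / (1.0 + np.exp(-np.dot(p_l, tau[: len(p_l)])))

def predict(p_l, candidate_taus):
    # f : P x L -> T recovered from the binary scorer by searching over a
    # finite candidate set (here, trajectories seen during training).
    scores = [score(p_l, tau) for tau in candidate_taus]
    return candidate_taus[int(np.argmax(scores))]

p_l = np.array([0.5, -0.2, 0.1])          # toy (point-cloud, language) features
cands = [np.array([1.0, 1.0, 1.0, 0.0]),  # toy trajectory encodings
         np.array([-1.0, 0.5, 2.0, 1.0])]
best = predict(p_l, cands)
```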
Intuitively, the model takes the input of point-cloud, language, and trajectory and outputs whether it is a good match (label $y=1$) or a bad match (label $y=0$). \noindent \textbf{Model.} Given an input of point-cloud, language, and trajectory, $x=((p,l),\tau)$, as shown at the bottom of Figure~\ref{fig:deep}, the goal is to classify it as either $y=0$ or $1$ at the top. The first layer $h^1$ learns a separate set of features for each modality of $x\; (=h^0)$ \cite{ngiam2011multimodal}. The next layer learns the relations between the input $(p,l)$ and the output $\tau$ of the original structured problem, combining two modalities at a time: the left combines point-cloud and trajectory, and the right combines language and trajectory. The third layer $h^3$ learns the relation between these two combinations of modalities, and the final layer $y$ represents the binary label. Every layer $h^i$ uses the rectified linear unit \cite{zeiler2013rectified} as the activation function: $$h^i = a(W^i h^{i-1} + b^i) \text{ where } a(\cdot)=\max(0,\cdot)$$ with weights to be learned $W^i \in \mathds{R}^{M \times N}$, where $M$ and $N$ represent the number of nodes in the $(i-1)$-th and $i$-th layers, respectively. Logistic regression is used in the last layer for predicting the final label $y$. The probability that $x = ((p,l),\tau)$ is a ``good match'' is computed as: $P(Y=1|x;W,b) = 1/(1 + e^{-(Wx+b)})$ \begin{wrapfigure}{r}{0.53\textwidth} \begin{center} \vskip -0.3in \includegraphics[width=0.5\textwidth]{deep_network.png} \end{center} \vskip -0.2in \caption{ \scriptsize \textbf{Our deep learning model} for transferring manipulation trajectories. Our model takes the input $x$ of three different modalities (point-cloud, language, and trajectory) and outputs $y$, whether it is a good match or a bad match. It first learns features separately ($h^1$) for each modality and then \jf{learns the} relation ($h^2$) between the input and output of the original structured problem. 
Finally, the last hidden layer $h^3$ learns the relations among all these modalities. } \label{fig:deep} \vskip -0.14in \end{wrapfigure} \noindent \textbf{Label Noise.} When the data contain many noisy labels (noisy trajectories $\tau$) due to crowd-sourcing, not all crowd-sourced trajectories should be trusted as equally appropriate, as will be shown in Sec.~\ref{sec:experiments}. For every input pair $(p,l)_i$, we have $\mathcal{T}_i=\{\tau_{i,1}, \tau_{i,2}, ..., \tau_{i,n_i} \}$, a set of trajectories submitted by the crowd for $(p,l)_i$. First, the best candidate label $\tau^*_i \in \mathcal{T}_i$ for $(p,l)_i$ is selected as the label with the smallest average trajectory distance (Sec.~\ref{sec:metric}) to the other labels: $$\tau^*_i = \arg\!\min_{\tau \in \mathcal{T}_i} \frac{1}{n_i} \sum_{j=1}^{n_i} \Delta(\tau, \tau_{i,j}) $$ We assume that at least half of the crowd tried to give a reasonable demonstration; thus a demonstration with the smallest average distance to all other demonstrations must be a good demonstration. \jae{Once} we have found the most likely label $\tau^*_i$ for $(p, l)_i$, we give the label 1 (``good match'') to $((p, l)_i, \tau^*_i)$, making it the first positive example for the binary classification problem. Then we find more positive examples by finding other trajectories $\tau' \in \mathcal{T}$ such that $\Delta(\tau^*_i,\tau') < t_g$, where $t_g$ is a threshold determined by the expert. Similarly, negative examples are generated by finding trajectories $\tau' \in \mathcal{T}$ such that $\Delta(\tau^*_i,\tau') > t_w$, where $t_w$ is also determined by the expert, and they are given the label 0 (``bad match''). \noindent \textbf{Pre-training.} We use the stacked sparse de-noising auto-encoder (SSDA) to train the weights $W^i$ and biases $b^i$ for each layer \cite{vincent2008extracting,zeiler2013rectified}. Training occurs layer by layer from bottom to top, trying to reconstruct the previous layer using SSDA. 
To learn the parameters for layer $i$, we build an auto-encoder which takes the corrupted output $\tilde{h}^{i-1}$ (binomial noise with corruption level $p$) of the previous layer as input and minimizes the loss function \cite{zeiler2013rectified} with a max-norm constraint \cite{srivastava2013improving}: { \vskip -.15in \small \begin{align*} & W^* = \arg\!\min_W \lVert \hat{h}^{i-1} - h^{i-1} \rVert^2_2 + \lambda \lVert h^{i} \rVert_1 \\ &\text{where}\quad \hat{h}^{i-1} = f(W^i h^i + b^i) \qquad h^i = f({W^i}^T \tilde{h}^{i-1} + b^i) \qquad \quad \;\; \tilde{h}^{i-1} = h^{i-1} X \\ & \qquad\quad\;\; \lVert W^i \rVert_2 \leq c \qquad \qquad \quad \; \;\;\; X \sim B(1,p) \end{align*} \vskip -.05in } \noindent \textbf{Fine-tuning.} The pre-trained neural network is fine-tuned by minimizing the negative log-likelihood $NLL = - \sum_{i=0}^{|D|} \log(P(Y=y^{i}|x^{i},W,b))$ with the stochastic gradient method using mini-batches. To prevent over-fitting to the training data, we used dropout \cite{hinton2012improving}, which randomly drops a specified percentage of the output of every layer. \noindent \textbf{Inference.} Given the trained neural network, the inference step finds the trajectory $\tau$ that maximizes the output through sampling in the space of trajectories $\mathcal{T}$: $$\arg\!\max_{\tau' \in \mathcal{T}} P(Y=1|x=((p,l),\tau');W,b)$$ Since the space of trajectories $\mathcal{T}$ is infinitely large, based on our idea that we can transfer trajectories across objects, we search only over trajectories that the model has seen in the training phase. \noindent \textbf{Data pre-processing.} As seen in Sec.~\ref{sec:prob_form}, each of the modalities $(p, l, \tau)$ can have any length. Thus, we pre-process each so that it is fixed in length. We represent a point-cloud $p$ of arbitrary length as an occupancy grid, where each cell indicates whether any point lies in the space it represents. 
Because the point-cloud $p$ consists of only the part of an object, which is limited in size, we can represent $p$ using two occupancy grids of size $10 \times 10 \times 10$ with different scales: one with each cell representing $1 \times 1 \times 1$\,cm and the other with each cell representing $2.5 \times 2.5 \times 2.5$\,cm. Each language instruction is represented as a fixed-size bag-of-words representation with stop words removed. Finally, for each trajectory $\tau \in \mathcal{T}$, we first compute its smooth interpolated trajectory $\tau_s \in \mathcal{T}_s$ (Sec.~\ref{sec:trajrep}), and then normalize all trajectories $\mathcal{T}_s$ to the same length while preserving the sequence of gripper statuses. \section{Features for Structural SVM and Latent Structural SVM} \label{sec:ssvm_features} \subsection{Structured SVM} For one of our baselines, we tested the structured support vector machine \cite{tsochantaridis2005large} using hand-coded features and our loss function from Section~\ref{sec:metric}. We solved the SVM optimization problem using the cutting plane method \cite{joachims2009cutting}. The joint feature map used, $\phi(p, l, \tau)$, consists of a set of different factors that capture different relations between the point-cloud $p$, the language $l$, and the trajectory $\tau$: $\phi(p, l, \tau) = [\phi(\tau); \phi(p, \tau); \phi(l, \tau)]$. \textbf{Trajectory Features} $\phi(\tau)$: To represent trajectories of different lengths with a small number of features, we take an unsupervised non-linear dimensionality reduction approach. For every trajectory $\tau$, we first compute its smooth interpolated trajectory $\tau_s$ (Sec.~\ref{sec:trajrep}), and then normalize all trajectories $\mathcal{T}$ to the same length while preserving the sequence of gripper statuses. Finally, the non-linear reduction method Isomap \cite{tenenbaum2000global} is used to project down to 15 dimensions using 5 nearest neighbors. 
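The interpolation-and-normalization pre-processing referenced above (linear interpolation of translations, Slerp for quaternions, resampling to a fixed number of waypoints) can be sketched as follows. Gripper status is omitted for brevity, and all names are illustrative rather than the paper's implementation:

```python
import numpy as np

def slerp(q0, q1, u):
    # Spherical linear interpolation between unit quaternions q0, q1 (Shoemake).
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    d = float(np.dot(q0, q1))
    if d < 0.0:                     # take the short arc
        q1, d = -q1, -d
    if d > 0.9995:                  # nearly parallel: fall back to lerp
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(d)
    return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

def normalize_trajectory(waypoints, m):
    # waypoints: list of (t_xyz (3,), q_xyzw (4,)). Returns m waypoints with
    # translation linearly interpolated and rotation slerped.
    ts = np.array([w[0] for w in waypoints], dtype=float)
    qs = np.array([w[1] for w in waypoints], dtype=float)
    src = np.linspace(0.0, 1.0, len(waypoints))
    out = []
    for s in np.linspace(0.0, 1.0, m):
        i = min(np.searchsorted(src, s, side="right") - 1, len(src) - 2)
        u = (s - src[i]) / (src[i + 1] - src[i])
        t = (1 - u) * ts[i] + u * ts[i + 1]
        out.append((t, slerp(qs[i], qs[i + 1], u)))
    return out

# Two waypoints (identity -> 90 degree rotation about z), resampled to 5.
traj = [(np.zeros(3), np.array([0.0, 0.0, 0.0, 1.0])),
        (np.array([0.1, 0.0, 0.0]),
         np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]))]
fixed = normalize_trajectory(traj, 5)
```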
\textbf{PointCloud-Trajectory Features} $\phi(p,\tau)$: To capture the overall shape when the trajectory is overlaid on the point-cloud of the object part, we again utilize Isomap to perform non-linear dimensionality reduction. Prior to performing Isomap, we use a voxel-based cube representation to jointly represent the point-cloud and the trajectory, where each voxel holds a count. Both the point-cloud and the smooth interpolated trajectory are mapped to the corresponding voxels. \textbf{Language-Trajectory Features} $\phi(l,\tau)$: The tensor product of the language feature vector and the trajectory feature vector is used to capture the relationship between the language and the trajectory: $\phi(l,\tau) = \phi(l) \otimes \phi(\tau)$. \subsection{Structured SVM with Latent Variable} Also as one of our baselines, we used the structured SVM with latent variables $\mathcal{H}$ \cite{yu2009learning} using the joint feature map $\phi(p, l, \tau, h)$, which consists of additional relations that involve the latent inner joint structure $h$: $\phi(p,l,\tau,h) = [\phi(\tau); \phi(p,\tau); \phi(l,\tau); \phi(l,h); \phi(p,\tau,h)]$. \textbf{Language-Joint Features} $\phi(l,h)$: To capture the relationship between the language and the type of joint, which are both discrete, we take the tensor product of the language feature vector and the joint type feature vector: $\phi(l,h) = \phi(l) \otimes \phi(h)$. The language feature vector is a bag-of-words of the language, and the joint type feature vector is a bag of joint types (i.e. a vector of length 3 indicating the existence of each joint type). The tensor product $\phi(l) \otimes \phi(h)$, which is a matrix of all possible products between the two vectors, is reshaped into a vector. \textbf{PointCloud-Trajectory-Joint Features} $\phi(p,\tau,h)$: This captures how well the point-cloud $p$ and trajectory $\tau$ are represented by the inner joint structure $h$. 
Rather than using the whole trajectory, the trajectory is broken down into the part where it actually interacts with the object, $\tau_{inter}$, and the part where it does not, $\tau_{no-inter}$. $\phi(p,\tau,h)$ is a concatenation of three types of features, one for each type of joint. For the ``revolute'' joint, it measures the percentage of $\tau_{inter}$ that is within the threshold defined by the rotation axis, the maximum angular rotation that occurred while staying within the threshold, and the average cosine similarity between the joint rotation axis and the rotation axis computed between trajectory waypoints. For the ``prismatic'' joint, it measures the average cosine similarity between the extension axis and the displacement vectors between waypoints. Finally, for the ``fixed'' joint, it simply checks whether $\tau_{no-inter}$ collides with the background, since it is important to approach the object from the correct angle. \section{Introduction} \vspace*{\sectionReduceBot} Consider the espresso machine in Figure~\ref{fig:robobarista_main} --- even without having seen the machine before, a person can prepare a cup of latte by visually observing the machine and by reading a natural language instruction manual. This is possible because humans have vast prior experience of manipulating differently-shaped objects that share common parts such as `handles' and `knobs'. \jin{In this work, our goal is to enable robots to generalize their manipulation ability to novel objects and tasks} (e.g. 
toaster, sink, water fountain, toilet, soda dispenser). Using a large knowledge base of manipulation demonstrations, we build an algorithm that infers \jin{an appropriate} manipulation trajectory given a point-cloud and natural language instructions. \begin{wrapfigure}{r}{0.6\textwidth} \begin{center} \vskip -.12in \includegraphics[width=0.55\textwidth]{pr2_espresso.jpg} \end{center} \vskip -.2in \caption{ \textbf{First encounter of an espresso machine} by our PR2 robot. Without ever having seen the machine before, given the language instructions and a point-cloud from Kinect sensor, our robot is capable of finding appropriate manipulation trajectories from prior experience using our deep learning model.} \label{fig:robobarista_main} \vspace{-.12in} \end{wrapfigure} The key idea in our work is that many objects designed for humans share many similarly-operated \emph{object parts} such as `handles', `levers', `triggers', and `buttons'; and manipulation motions can be transferred even among completely different objects if we represent motions with respect to \textit{object parts}. For example, even if the robot has never seen the `espresso machine' before, the robot should be able to manipulate it if it has previously seen similarly-operated parts in other objects such as `urinal', `soda dispenser', and `restroom sink' as illustrated in Figure~\ref{fig:espresso_transfer}. Object parts that are operated in similar fashion may not carry the same part name (e.g., `handle') but would rather have some similarity in their shapes \jin{that \jf{allows}} the motion to be transferred between completely different objects. If the sole task for the robot is to manipulate one specific espresso machine or just a few types of `handles', a roboticist could manually program the exact sequence to be executed. However, in human environments, there is a large variety in the types of object and their instances. \jae{Classification} of objects or object parts (e.g. 
`handle') alone does not provide enough information for robots to actually manipulate them. Thus, rather than relying on scene understanding techniques \cite{blaschko2008learning,li2009towards,girshick2011object}, we directly use the 3D point-cloud for manipulation planning using machine learning algorithms. \jin{ Such machine learning algorithms require a large dataset for training. However, collecting such a large dataset of expert demonstrations is very expensive, as it requires the joint physical presence of the robot, an expert, and the object to be manipulated. In this work, we show that we can crowd-source the collection of manipulation demonstrations to the public over the web through our Robobarista platform and still outperform the model trained with expert demonstrations. } \jin{The key challenges in our problem are in designing features and a learning model that integrates three completely different modalities of data (point-cloud, language and trajectory), and in handling a significant amount of noise in crowd-sourced manipulation demonstrations. Deep learning has made an impact in related application areas (e.g., vision \cite{krizhevsky2012imagenet,bengio2013representation}, natural language processing \cite{socher2011semi}). In this work, we present a deep learning model that can handle large noise in labels, with a new architecture that learns relations between the three different modalities.} \jin{Furthermore, in contrast to previous approaches based on learning from demonstration (LfD) that learn a mapping from a state to an action \cite{argall2009survey}, our work complements LfD as we focus on the entire manipulation motion (as opposed to a sequential state-action mapping).} In order to validate our approach, we have collected a large dataset of \emph{116 objects} with \emph{250 natural language instructions} for which there are \emph{1225 crowd-sourced manipulation trajectories} from 71 non-expert users via our Robobarista web platform ({\small \url{http://robobarista.cs.cornell.edu}}). We also present experiments on our robot using our approach. In summary, the key contributions of this work are: \begin{itemize} \item a novel approach to manipulation planning via \textit{part-based transfer} between different objects \jin{that allows manipulation of novel objects}, \item incorporation of \textit{crowd-sourcing} into manipulation planning, \item introduction of a \textit{deep learning model} that handles three modalities with noisy labels from crowd-sourcing, and \item contribution of the first large manipulation dataset and experimental evaluation on this dataset. \end{itemize} \begin{figure*}[tb] \begin{center} \vskip -.06in \includegraphics[width=\textwidth,trim={3mm 0 1mm 0}]{figure2_isrr_v2.png} \end{center} \vskip -.23in \caption{\textbf{Object part and natural language instructions input to manipulation trajectory as output.} Objects such as the espresso machine consist of distinct object parts, each of which requires a distinct manipulation trajectory. For each part of the machine, we can re-use a manipulation trajectory that was used for some other object with similar parts. 
So, for an object part in a point-cloud (each object part colored on the left), we can find a trajectory used to manipulate some other object (labeled on the right) that can be \emph{transferred} (labeled in the center). With this approach, a robot can operate a new and previously unobserved object such as the `espresso machine', by successfully transferring trajectories from other completely different but previously observed objects. Note that the input point-cloud is very noisy and incomplete (black represents missing points).} \label{fig:espresso_transfer} \vskip -.2in \end{figure*} \subsection{Model Formulation} We have defined our problem as a structured prediction task of outputting a trajectory $\tau \in \mathcal{T}$ given a point-cloud $p \in \mathcal{P}$ and a language instruction $l \in \mathcal{L}$. This prediction task has a complex and infinitely large output space, since the trajectories could be of arbitrary length and could follow any arbitrary path. We use a structured support vector machine \cite{tsochantaridis2005large,yu2009learning} with a latent variable $\mathcal{H}$ to learn the discriminant function $F: \mathcal{P} \times \mathcal{L} \times \mathcal{T} \times \mathcal{H} \rightarrow \mathbb{R}$. Learning the discriminant function $F$ results in the following structured support vector machine with latent variable optimization problem: { \vskip -.2in \small \begin{align*} \min_{w, \xi}& \frac{1}{2}||w||^2 + C \sum_{i = 1}^{n} \xi_i \qquad \\ & s.t. \hspace{2.5mm} \forall i: \hspace{2mm} \xi_i \geq 0 \qquad \forall i, \hspace{1.5mm} \forall \tau \in \mathcal{T}_o : \max_{h \in \mathcal{H}} \;\langle w,\phi(p_i, l_i, \tau_i, h) \rangle - \max_{\hat{h} \in \mathcal{H}} \;\langle w,\phi(p_i, l_i, \tau, \hat{h}) \rangle \geq \Delta(\tau_i,\tau) - \xi_i \end{align*} \vskip -.1in } where $p_i$ is the object part point-cloud and $\mathcal{T}_o$ is the set of all trajectories the robot has observed previously, based on our assumption that trajectories can be reused. 
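The margin constraints above determine, for each training example, the slack $\xi_i = \max\big(0, \max_{\tau} [\Delta(\tau_i,\tau) - \max_h \langle w,\phi(\tau_i,h)\rangle + \max_h \langle w,\phi(\tau,h)\rangle]\big)$. A toy sketch of this computation, with a finite latent set and stand-in features and loss (the actual model uses the richer $\phi$ and the DTW-based $\Delta$ defined later):

```python
import numpy as np

def score(w, phi, tau, H):
    # max over latent h of <w, phi(p, l, tau, h)>; (p, l) folded into phi here.
    return max(float(w @ phi(tau, h)) for h in H)

def slack(w, phi, H, tau_true, candidates, delta):
    # Slack xi_i implied by the margin constraints for one training example:
    # xi_i = max(0, max_tau [Delta(tau_i, tau) - score(tau_i) + score(tau)]).
    s_true = score(w, phi, tau_true, H)
    viol = [delta(tau_true, tau) - s_true + score(w, phi, tau, H)
            for tau in candidates]
    return max(0.0, max(viol))

# Tiny example: trajectories and latent h both encoded as 2-vectors.
phi = lambda tau, h: np.concatenate([tau, h])
H = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
w = np.array([0.2, 0.0, 0.5, 0.5])
delta = lambda a, b: float(np.linalg.norm(a - b))   # stand-in for the DTW loss
tau_i = np.array([1.0, 0.0])
cands = [tau_i, np.array([0.0, 0.0])]
xi = slack(w, phi, H, tau_i, cands, delta)          # margin violated here
```

A nonzero `xi` means some candidate trajectory scores too close to the ground truth relative to its loss, which is exactly what the cutting plane method exploits when selecting violated constraints.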
\textbf{Latent Variable.} How an object is operated, given its particular shape, is determined by its internal structure. Rather than trying to explicitly learn the articulated structure of the object \cite{sturm2011probabilistic,burka2013probabilistic}, we treat this internal joint structure as a latent variable $\mathcal{H}$. Our latent variable $\mathcal{H}$ consists of seven variables $(j_t, j_c, j_a) = (j_t, (j_{cx}, j_{cy}, j_{cz}), (j_{ax}, j_{ay}, j_{az}))$, where the joint type $j_t \in \{\text{`revolute', `prismatic', `fixed'} \}$. The center of joint and axis of joint $(j_c, j_a)$ are only used for `revolute' and `prismatic' joints. For the `revolute' joint type, $(j_c, j_a)$ represent the center of rotation and the axis of rotation (a unit vector), respectively. For the `prismatic' joint type, they represent an offset and a unit vector for the direction of extension. Some definitions of joints are borrowed from \citep{sturm2011probabilistic,burka2013probabilistic}, and Figure~\ref{fig:jointex} shows an example of `revolute' and `prismatic' joints. \textbf{Learning.} We used the cutting plane method \cite{yu2009learning} to solve the above optimization problem. The loss function $\Delta$ and the joint feature map $\phi$ are defined in Section~\ref{sec:metric} and Section~\ref{sec:features}, respectively. Finding the cutting plane at each iteration involves finding the trajectory $\tau$ and the hidden variable $h$ that maximize $\max_{h} \langle w,\phi(p_i, \tau_i, h) \rangle$ and $\max_{\tau, \hat{h}} \langle w,\phi(p_i, \tau, \hat{h}) \rangle + \Delta(\tau_i,\tau)$. Our latent variable $h$ consists of continuous variables, except for the joint type. We used Limited-memory BFGS \cite{zhu1997algorithm} to find the value of $h$ (the joint location and the joint direction).
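As an illustration, the latent joint structure $h = (j_t, j_c, j_a)$ can be held in a small container. The following Python sketch is only a possible representation (names hypothetical, not from the authors' code):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LatentJoint:
    """Latent joint structure h = (j_t, j_c, j_a).

    center/axis are only meaningful for 'revolute' joints (center and axis
    of rotation) and 'prismatic' joints (offset and direction of extension).
    """
    joint_type: str                     # 'revolute', 'prismatic', or 'fixed'
    center: Tuple[float, float, float]  # j_c = (j_cx, j_cy, j_cz)
    axis: Tuple[float, float, float]    # j_a = (j_ax, j_ay, j_az), a unit vector

# Example: a hinge rotating about the vertical axis through the origin.
hinge = LatentJoint('revolute', center=(0.0, 0.0, 0.0), axis=(0.0, 0.0, 1.0))
```

During cutting-plane training, the continuous fields (`center`, `axis`) are the quantities optimized by L-BFGS, while `joint_type` is enumerated discretely.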
\textbf{Inference.} Given the discriminant function $F$, the inference step outputs the trajectory $\tau$ that maximizes it: $f(p; \textbf{w}) = \arg\!\max_{\tau \in \mathcal{T}_o} F(p, \tau; \textbf{w})$. Since the space of trajectories $\mathcal{T}$ can be infinitely large, again based on our idea that trajectories can be transferred between objects, we only search through the trajectories seen during the training phase ($\mathcal{T}_o$). \vspace*{\sectionReduceTop} \section{Loss Function for Manipulation Trajectory} \label{sec:metric} \vspace*{\sectionReduceBot} Prior metrics for trajectories consider only their translations (e.g. \cite{koppula2013_anticipatingactivities}), not their rotations \emph{and} gripper status. We propose a new measure, based on dynamic time warping, for evaluating manipulation trajectories. This measure non-linearly warps two trajectories of arbitrary lengths to produce a matching, and the cumulative distance is computed as the sum of the costs of all matched waypoints. The strength of this measure is that weak ordering is maintained among matched waypoints and that every waypoint contributes to the cumulative distance.
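Since inference reduces to scoring every previously observed trajectory, it can be sketched as a simple argmax. Here `F` is only a stand-in for the learned discriminant (with the latent maximization folded in), so this illustrates the search, not the authors' implementation:

```python
def infer_trajectory(p, l, observed_trajectories, F):
    """Return the trajectory in T_o maximizing the discriminant F(p, l, tau)."""
    return max(observed_trajectories, key=lambda tau: F(p, l, tau))

# Toy example: a (hypothetical) discriminant that prefers longer trajectories.
trajectories = [('t1', 2), ('t2', 5), ('t3', 3)]
best = infer_trajectory(None, None, trajectories, lambda p, l, tau: tau[1])
```

The cost of this exhaustive search grows linearly with the number of observed trajectories, which is what keeps inference feasible despite the infinitely large trajectory space.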
For two trajectories of arbitrary lengths, $\tau_A = \{\tau_A^{(i)}\}_{i=1}^{m_A}$ and $\tau_B = \{\tau_B^{(i)}\}_{i=1}^{m_B}$, we define the matrix $D \in \mathbb{R}^{m_A \times m_B}$, where $D(i, j)$ is the cumulative distance of an optimally-warped matching between the trajectories up to indices $i$ and $j$, respectively. The first column and the first row of $D$ are initialized as $D(i,1) = \sum_{k=1}^{i}c(\tau_A^{(k)}, \tau_B^{(1)})\, \forall i \in [1, m_A]$ and $D(1,j) = \sum_{k=1}^{j}c(\tau_A^{(1)}, \tau_B^{(k)})\, \forall j \in [1, m_B]$, where $c$ is a local cost function between two waypoints (discussed later). The rest of $D$ is completed using dynamic programming: $D(i,j) = c(\tau_A^{(i)}, \tau_B^{(j)}) + \min\{D(i-1, j-1), D(i-1,j), D(i, j-1)\}$. Given the constraint that $\tau_A^{(1)}$ is matched to $\tau_B^{(1)}$, this formulation ensures that every waypoint contributes to the final cumulative distance $D(m_A, m_B)$. Also, given a matched pair $(\tau_A^{(i)}, \tau_B^{(j)})$, no waypoint preceding $\tau_A^{(i)}$ is matched to a waypoint succeeding $\tau_B^{(j)}$, encoding weak ordering.
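The initialization and recurrence above translate directly into a dynamic program. A minimal sketch with an arbitrary waypoint cost `c`, using 0-based indices instead of the paper's 1-based ones:

```python
def dtw_cumulative(tau_A, tau_B, c):
    """Cumulative distance D(m_A, m_B) of an optimally-warped matching."""
    m_A, m_B = len(tau_A), len(tau_B)
    D = [[0.0] * m_B for _ in range(m_A)]
    D[0][0] = c(tau_A[0], tau_B[0])
    for i in range(1, m_A):                       # first column
        D[i][0] = D[i - 1][0] + c(tau_A[i], tau_B[0])
    for j in range(1, m_B):                       # first row
        D[0][j] = D[0][j - 1] + c(tau_A[0], tau_B[j])
    for i in range(1, m_A):
        for j in range(1, m_B):
            D[i][j] = c(tau_A[i], tau_B[j]) + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[m_A - 1][m_B - 1]
```

With `c = lambda a, b: abs(a - b)` on scalar "waypoints", identical trajectories give a cumulative distance of 0, and warping lets a shorter trajectory match a longer one at low cost.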
The pairwise cost function $c$ between matched waypoints $\tau_A^{(i)}$ and $\tau_B^{(j)}$ is defined as: { \vskip -.2in \small \begin{align*} c(\tau_A^{(i)}, \tau_B^{(j)};\alpha_T, \alpha_R, \beta, \gamma) = w(\tau_A^{(i)};\gamma)& w(\tau_B^{(j)};\gamma) \bigg(\frac{d_T(\tau_A^{(i)}, \tau_B^{(j)})}{\alpha_T} + \frac{d_R(\tau_A^{(i)}, \tau_B^{(j)})}{\alpha_R}\bigg) \bigg(1 + \beta d_G(\tau_A^{(i)}, \tau_B^{(j)}) \bigg) \\ \text{where} \hspace{5 mm} d_T(\tau_A^{(i)},\tau_B^{(j)}) &= ||(t_x, t_y, t_z)_{A}^{(i)} - (t_x, t_y, t_z)_{B}^{(j)}||_2 \\ d_R(\tau_A^{(i)},\tau_B^{(j)}) &= \text{angle difference between $\tau_{A}^{(i)}$ and $\tau_B^{(j)}$} \\ \qquad d_G(\tau_A^{(i)},\tau_B^{(j)}) &= \mathbbm{1}(g_A^{(i)} \neq g_B^{(j)}) \\ w(\tau^{(i)}; \gamma) &= \exp(-\gamma \cdot ||\tau^{(i)}||_2) \end{align*} \vskip -.05in } \noindent The parameters $\alpha_T, \alpha_R$ scale the translation and rotation errors, respectively, and $\beta$ scales the gripper status error. $\gamma$ weighs the importance of a waypoint based on its distance to the object part. Finally, as trajectories vary in length, we normalize $D(m_A, m_B)$ by the number of waypoint pairs that contribute to the cumulative sum, $|D(m_A, m_B)|_{path^*}$ (i.e. the length of the optimal warping path), giving the final form: \begin{align*} distance(\tau_A, \tau_B) = \frac{D(m_A, m_B)}{|D(m_A, m_B)|_{path^*}} \end{align*} This distance function is used for noise-handling in our model and as the final evaluation metric. \section{Related Work} \vspace*{\sectionReduceBot} \noindent \textbf{Scene Understanding.} There have been great advances in scene understanding \cite{li2009towards,koppula2011semantic,wu2014_hierarchicalrgbd}, in human activity detection \cite{sung_rgbdactivity_2012,Hu2014humanact}, and in features for RGB-D images and point-clouds \cite{socher2012convolutional,lai_icra14}. Similar to our idea of part-based transfer, the deformable part model \cite{girshick2011object} has been effective in object detection.
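The local cost $c$ can be sketched as follows. This is a simplification and partly an assumption: the rotation is reduced to a single scalar angle, the gripper term is treated as a mismatch penalty (matched statuses incur no extra cost), and waypoint positions are taken relative to the object part:

```python
import math

def waypoint_cost(wa, wb, alpha_T=1.0, alpha_R=1.0, beta=1.0, gamma=1.0):
    """Pairwise cost between waypoints wa, wb, each a dict with keys
    'pos' (xyz relative to the object part), 'angle' (scalar orientation,
    a simplification of the full rotation), and 'grip' (open/closed flag)."""
    d_T = math.dist(wa['pos'], wb['pos'])            # translation distance
    d_R = abs(wa['angle'] - wb['angle'])             # simplified angle difference
    d_G = 1.0 if wa['grip'] != wb['grip'] else 0.0   # gripper-status mismatch
    w_a = math.exp(-gamma * math.hypot(*wa['pos']))  # weight by distance to part
    w_b = math.exp(-gamma * math.hypot(*wb['pos']))
    return w_a * w_b * (d_T / alpha_T + d_R / alpha_R) * (1.0 + beta * d_G)
```

Plugging this cost into the dynamic program for $D$ and dividing the cumulative sum by the optimal warping path length yields the DTW-MT distance.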
However, classification of objects, object parts, or human activities alone does not provide enough information for a robot to reliably plan manipulation. Even a simple category such as `kitchen sinks' has much variation among its instances, each differing in how it is operated: pulling the handle upwards, pushing upwards, pushing sideways, and so on. On the other hand, the direct perception approach skips the intermediate object labels and directly perceives affordances based on the shape of the object \cite{gibson1986ecological,kroemer2012kernel}. It focuses on detecting the part known to afford a certain action, such as `pour', given the object, while we focus on predicting the correct motion given the object part. \noindent \textbf{Manipulation Strategy.} For highly specific tasks, many works manually sequence different controllers to accomplish complicated tasks such as baking cookies \cite{bollini2011bakebot} and folding laundry \cite{miller2012geometric}, or focus on learning specific motions such as grasping \cite{kehoe2013cloud} and opening doors \cite{endres2013learning}. Others focus on learning to sequence different movements \cite{sung_synthesizingsequences_2014, misra2014tellme} but assume that perfect controllers such as \textit{grasp} and \textit{pour} exist. For the more general task of manipulating new instances of objects, previous approaches rely on finding articulation \cite{sturm2011probabilistic,pillai2014articulated} or on interaction \cite{katz2013interactive}, but they are limited by the tracking performance of the vision algorithm. Many objects that humans operate daily have small parts such as a `knob', which leads to significant occlusion while manipulation is demonstrated. Another approach, using part-based transfer between objects, has been shown to be successful for grasping \cite{dang2012semantic,detry2013learning}.
We extend this approach and introduce a deep learning model that enables part-based transfer of \textit{trajectories} by automatically learning relevant features. Our focus is on the generalization of manipulation trajectories via part-based transfer using point-clouds, without knowing the objects a priori and without assuming any of the sub-steps (`approach', `grasping', and `manipulation'). \noindent \textbf{Learning from Demonstration (LfD).} The most successful approach for teaching robots tasks, such as helicopter maneuvers \cite{abbeel2010autonomous} or table tennis \cite{mulling2013learning}, has been based on LfD \cite{argall2009survey}. Although LfD allows end users to demonstrate a task by simply guiding the robot arms, it focuses on learning individual actions and either relies separately on high-level task composition \cite{mangin2011unsupervised,daniel2012learning} or is limited to previously seen objects \cite{phillipslearning, pastor2009learning}. We believe that learning a single model for an action like ``turning on'' is impossible because human environments have too many variations. Unlike learning a model from demonstrations, instance-based learning \cite{aha1991instance,forbes2014robot} replicates one of the demonstrations. Similarly, we directly transfer one of the demonstrations, but focus on generalizing manipulation planning to completely new objects, enabling robots to manipulate objects they have never seen before. \noindent \textbf{Deep Learning.} Deep learning has seen great success, especially in the domains of vision and natural language processing (e.g. \cite{krizhevsky2012imagenet,socher2011semi}). In robotics, deep learning has previously been used successfully for detecting grasps on multi-channel input of RGB-D images \cite{lenz2013deep} and for classifying terrain from long-range vision \cite{hadsell2008deep}.
Deep learning can also solve multi-modal problems \cite{ngiam2011multimodal,lenz2013deep} and structured problems \cite{socher2011parsing}. Our work builds on these prior works and extends neural networks to handle three modalities of completely different data types (point-cloud, language, and trajectory) while handling substantial label noise originating from crowd-sourcing. \noindent \textbf{Crowd-sourcing.} Teaching robots how to manipulate different objects has often relied on experts \cite{argall2009survey,abbeel2010autonomous}. Among previous efforts to scale teaching to the crowd \cite{crick2011human,tellex2014asking,jainsaxena2015_learningpreferencesmanipulation}, \citet{forbes2014robot} employs a similar approach to crowd-sourcing but collects multiple instances of similar table-top manipulations with the same object, and others build web-based platforms for crowd-sourcing manipulation \cite{toris2013robotsfor,toris2014robot}. These approaches either depend on the presence of an expert (due to required special software) or require a real robot at a remote location. Our Robobarista platform borrows some components of \cite{alexander2012robot}, but works on any standard web browser with OpenGL support and incorporates real point-clouds of various scenes. \section{Experiments} \label{sec:experiments} \vspace*{\sectionReduceBot} \noindent \textbf{Data.} In order to test our model, we collected a dataset of 116 point-clouds of objects with 249 object parts (examples shown in Figure~\ref{fig:data_ex}). There are also a total of 250 natural language instructions (in 155 manuals).\footnote{Although not necessary for training our model, we also collected trajectories from an expert for evaluation purposes.} Using the crowd-sourcing platform Robobarista, we collected 1225 trajectories for these objects from 71 non-expert users on Amazon Mechanical Turk. After being shown a 20-second instructional video, each user first completes a 2-minute tutorial task.
In each session, the user was asked to complete 10 assignments, each consisting of an object and a manual to be followed. For each object, we took raw RGB-D images with the Microsoft Kinect sensor and stitched them using Kinect Fusion \cite{izadi2011kinectfusion} to form a denser point-cloud that incorporates different viewpoints of the objects. Objects range from kitchen appliances such as `stove', `toaster', and `rice cooker' to `urinal', `soap dispenser', and `sink' in restrooms. The dataset will be made available at {\small \url{http://robobarista.cs.cornell.edu}}. \input{baselines} \input{result_table} \vspace*{\subsectionReduceTop} \subsection{Results and Discussions} \vspace*{\subsectionReduceBot} We evaluated all models on our dataset using \emph{5-fold cross-validation}; the results are in Table~\ref{tab:results}. Rows list the models we tested, including our model and the baselines. Each column shows one of three evaluations. The first two use the dynamic-time-warping distance for manipulation trajectories (DTW-MT) from Sec.~\ref{sec:metric}. The first column shows the averaged DTW-MT for each instruction manual consisting of one or more language instructions. The second column shows the averaged DTW-MT for every test pair $(p,l)$. As DTW-MT values are not intuitive, we added the extra column ``accuracy'', which shows the percentage of transferred trajectories with a DTW-MT value of less than $10$. Through expert surveys, we found that when the DTW-MT of a manipulation trajectory is less than $10$, the robot produces a reasonable trajectory and will very likely be able to accomplish the given task. \noindent \textbf{Can manipulation trajectories be transferred from completely different objects?} Our full model achieved $60.0\%$ accuracy (Table~\ref{tab:results}), outperforming chance as well as the other baseline algorithms we tested on our dataset. Fig.~\ref{fig:success_trans} shows two examples of successful transfers and one unsuccessful transfer by our model.
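The ``accuracy'' figure reported above is simply the fraction of transfers whose DTW-MT falls below the expert-determined threshold of $10$; a one-line sketch:

```python
def accuracy(dtw_mt_values, threshold=10.0):
    """Fraction of transferred trajectories with DTW-MT below the threshold."""
    return sum(d < threshold for d in dtw_mt_values) / len(dtw_mt_values)

# Example: three of these five (hypothetical) transfers count as successful.
acc = accuracy([3.2, 15.1, 7.8, 9.9, 22.0])
```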
In the first example, the trajectory for pulling down on a cereal dispenser is transferred to a coffee dispenser. Because our trajectory representation is based on the principal axis (Sec.~\ref{sec:transfer_adapt}), the transfer succeeds even though the cereal and coffee dispenser handles are located and oriented differently. The second example shows a successful transfer from a DC power supply to a slow cooker, which have ``knobs'' of similar shape. The transfer was successful despite the difference in instructions (``Turn the switch..'' and ``Rotate the knob..'') and object types. The last example in Fig.~\ref{fig:success_trans} shows an unsuccessful transfer. Despite the similarity of the two instructions, the transfer was unsuccessful because the grinder's knob faces the front while the speaker's knob faces upwards. We fixed the $z$-axis along gravity because point-clouds are noisy and gravity matters for some manipulation tasks, but a more reliable method for finding the object coordinate frame and a better 3-D sensor should allow for more accurate transfers. \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{transfer3_isrr.png} \end{center} \vskip -0.2in \caption{ \textbf{Examples of successful and unsuccessful transfers} of manipulation trajectories from left to right using our model. In the first two examples, though the robot has never seen the `coffee dispenser' and `slow cooker' before, it correctly identified that the trajectories of the `cereal dispenser' and `DC power supply', respectively, can be used to manipulate them. } \label{fig:success_trans} \vskip -.1in \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[width=\textwidth]{pr2_1.jpg} \end{center} \vskip -.22in \caption{ \textbf{Examples of transferred trajectories} being executed on the PR2. On the left, the PR2 is able to rotate the `knob' to turn the lamp on.
On the right, using two transferred trajectories, the PR2 is able to hold the cup below the `nozzle' and press the `lever' of the `coffee dispenser'. } \label{fig:robotic_exp} \vskip -.1in \end{figure*} \noindent \textbf{Does it ensure that the object is actually correctly manipulated?} We do not claim that our model can find and execute manipulation trajectories for all objects. However, for a large fraction of objects which the robot has never seen before, our model outperforms the other models in finding correct manipulation trajectories. The contribution of this work is a novel approach to manipulation planning that enables robots to manipulate objects they have never seen before. For some objects, correctly executing a transferred manipulation trajectory may require incorporating visual and force feedback \cite{wieland2009combining,vina2013predicting} so that the execution can adapt exactly to the object, as well as finding a collision-free path \cite{stilman2007task}. \noindent \textbf{Can we crowd-source the teaching of manipulation trajectories?} When we trained our full model with the expert demonstrations, which were collected for evaluation purposes, it performed at $53.1\%$, compared to $60.0\%$ for our model trained with crowd-sourced data. Even with the significant label noise, as shown in the last two examples of Fig.~\ref{fig:data_ex}, we believe that our model with crowd demonstrations performed better because it can handle noise and because deep learning benefits from having a larger amount of data. Also note that all of our crowd users are real non-expert users from Amazon Mechanical Turk. \noindent \textbf{Is segmentation required for the system?} In the vision community, even with state-of-the-art techniques \cite{felzenszwalb2010object,krizhevsky2012imagenet}, detection of `manipulatable' object parts such as `handle' and `lever' in a point-cloud is by itself a challenging problem \cite{lai_icra14}.
Thus, we rely on a human expert to pre-label the parts of the object to be manipulated. The point-cloud of the scene is over-segmented into thousands of supervoxels, from which the expert chooses the part of the object to be manipulated. Even with the expert's input, segmented point-clouds are still extremely noisy because of the poor performance of the sensor on object parts with glossy surfaces. \noindent \textbf{Is intermediate object part labeling necessary?} The \emph{Object Part Classifier} performed at $23.3\%$, even though the multiclass SVM for finding the object part label achieved over $70\%$ accuracy on the five major classes of object parts (`button', `knob', `handle', `nozzle', `lever') among 13 classes. Finding the part label is not sufficient for finding a good manipulation trajectory because of the large variations within each class. Thus, our model, which does not need part labels, outperforms the \emph{Object Part Classifier}. \noindent \textbf{Can features be hand-coded? What kinds of features did the network learn?} For both the SSVM and LSSVM models, we experimented with several state-of-the-art features for many months, and they gave $40.8\%$. The \textit{task similarity} method gave a better result of $53.7\%$, but it requires access to all of the raw training data (all point-clouds and language) at test time, which leads to heavy computation and requires large storage as the size of the training data increases. While it is extremely difficult to find a good set of features for three modalities, our deep learning model, which does not require hand-designed features, learned features at the top layer $h^3$ such as those shown in Fig.~\ref{fig:learned_rep}. The left shows a node that correctly associated the point-cloud (axis-based coloring), trajectory, and language for the motion of turning a knob clockwise.
The right shows a node that correctly associated the modalities for the motion of pulling a handle. Also, as shown for the two other baselines using deep learning, when the modalities were simply concatenated, the model gave $51.9\%$, and when noisy labels were not handled, it gave only $49.7\%$. Both results show that our model can handle noise from crowd-sourcing while learning relations between the three modalities. \vspace*{\subsectionReduceTop} \subsection{Robotic Experiments} \vspace*{\subsectionReduceBot} As the PR2 robot stands in front of the object, it is given a natural language instruction and a segmented point-cloud. Using our algorithm, the manipulation trajectories to be transferred were found for the given point-clouds and language instructions. Given the trajectories, which are defined as sets of waypoints, the robot followed each trajectory using an impedance controller (\texttt{ee\_cart\_imped}) \cite{bollini2011bakebot}. Some examples of successful executions on the PR2 robot are shown in Figure~\ref{fig:robotic_exp} and in the video on the project website: {\small \url{http://robobarista.cs.cornell.edu}} \begin{figure}[tb] \begin{center} \includegraphics[width=0.9\columnwidth]{learned_visual.png} \end{center} \vskip -0.2in \caption{ \textbf{Visualization} of a sample of learned high-level features (two nodes) at the last hidden layer $h^3$. The point-cloud in the picture is given an arbitrary axis-based coloring for visualization purposes. The left shows node $\#1$ at layer $h^3$, which learned to associate (``turn'', ``knob'', ``clockwise'') along with the relevant point-cloud and trajectory. The right shows node $\#51$ at layer $h^3$, which learned to ``pull'' a handle. The visualization is created by selecting a set of words, a point-cloud, and a trajectory that maximize the activation at each layer and passing the highest-activated set of inputs to the higher level.
} \label{fig:learned_rep} \vskip -.2in \end{figure} \section{Robobarista: crowd-sourcing platform} \vspace*{\sectionReduceBot} \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{robobarista_system_isrr.png} \end{center} \vskip -0.2in \caption{ \textbf{Screen-shot of Robobarista,} the crowd-sourcing platform, running in the Chrome browser. We built the Robobarista platform to collect a large number of crowd demonstrations for teaching the robot. } \label{fig:robobarista} \end{figure*} \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{data_figure_isrr.png} \end{center} \vskip -0.2in \caption{ \textbf{Examples from our dataset,} each of which consists of a natural language instruction (top), an object part in point-cloud representation (highlighted), and a manipulation trajectory (below) collected via Robobarista. Objects range from kitchen appliances such as stoves and rice cookers to urinals and sinks in restrooms. As our trajectories are collected from non-experts, they vary in quality, from being likely to complete the manipulation task successfully (left of the dashed line) to being unlikely to do so (right of the dashed line). } \label{fig:data_ex} \vskip -0.1in \end{figure*} In order to collect a large number of manipulation demonstrations from the crowd, we built a crowd-sourcing web platform that we call Robobarista (see Fig.~\ref{fig:robobarista}). It provides a virtual environment where non-expert users can teach robots via a web browser, without expert guidance or physical presence with a robot and a target object. The system simulates a situation where the user encounters a previously unseen target object and a natural language instruction manual for its manipulation. Within the web browser, users are shown a point-cloud in the 3-D viewer on the left and a \textit{manual} on the right. A manual may involve several instructions, such as ``Push down and pull the handle to open the door''.
The user's goal is to demonstrate how to manipulate the object in the scene for each instruction. The user starts by selecting one of the instructions on the right to demonstrate (Fig.~\ref{fig:robobarista}). Once selected, the target object part is highlighted and the trajectory \textit{edit bar} appears below the 3-D viewer. Using the \textit{edit bar}, which works like a video editor, the user can play back and edit the demonstration. The trajectory representation as a set of waypoints (Sec.~\ref{sec:trajrep}) is shown directly on the \textit{edit bar}. The bar shows not only the set of waypoints (red/green) but also the interpolated waypoints (gray). The user can click the `play' button or hover the cursor over the edit bar to examine the current demonstration. The blurred trail of the current (\textit{ghosted}) demonstration is also shown in the 3-D viewer to display its full expected path. Generating a full trajectory from scratch can be difficult for non-experts. Thus, similar to \citet{forbes2014robot}, we provide a trajectory that the system has already seen for another object as the initial trajectory to edit.\footnote{We made sure that it does not initialize with trajectories from other folds, to keep the \textit{5-fold cross-validation} in the experiment section valid.} In order to simulate a realistic manipulation experience, instead of simply showing a static point-cloud, we overlaid CAD models for parts such as the `handle', so that functional parts actually move as the user tries to manipulate the object. A demonstration can be edited by: 1) modifying the position/orientation of a waypoint, 2) adding/removing a waypoint, and 3) opening/closing the gripper. Once a waypoint is selected, the PR2 gripper is shown with six directional arrows and three rings. The arrows are used to modify position, while the rings are used to modify orientation.
To add extra waypoints, the user can hover the cursor over an interpolated (gray) waypoint on the \textit{edit bar} and click the plus(+) button. To remove an existing waypoint, the user can hover over it on the \textit{edit bar} and click the minus(-) button. As modifications occur, the edit bar and the ghosted demonstration are updated with a new interpolation. Finally, to edit the status (open/close) of the gripper, the user can simply click on the gripper. For broader accessibility, all functionality of Robobarista, including the 3-D viewer, is built using JavaScript and WebGL. \section{Conclusion} \vspace*{\sectionReduceBot} In this work, we introduced a novel approach to predicting manipulation trajectories via part-based transfer, which allows robots to successfully manipulate objects they have never seen before. We formulated this as a structured-output problem and presented a deep learning model capable of handling three completely different modalities (point-cloud, language, and trajectory) while dealing with large noise in the manipulation demonstrations. We also designed a crowd-sourcing platform, Robobarista, that allows non-expert users to easily give manipulation demonstrations over the web. Our deep learning model was evaluated against many baselines on a large dataset of 249 object parts with 1225 crowd-sourced demonstrations. In future work, we plan to share the learned model using the knowledge-engine RoboBrain \cite{saxena_robobrain2014}. \vskip -2in \subsubsection*{Acknowledgments} \vspace*{\sectionReduceBot} We thank Joshua Reichler for building the initial prototype of the crowd-sourcing platform. We thank Ian Lenz and Ross Knepper for useful discussions. This research was funded in part by a Microsoft Faculty Fellowship (to Saxena), an NSF Career award (to Saxena) and the Army Research Office. { \renewcommand\refname{\vskip -1.5cm} \scriptsize
\section{Introduction: weighted Hurwitz numbers } A new method for constructing parametric families of 2D Toda $\tau$-functions \cite{Ta, Takeb, UTa} of hypergeometric type \cite{OrS} that serve as generating functions for various types of weighted Hurwitz numbers was developed in \cite{GH1, GH2, H1, H2, HO2}. This was originally inspired by the work of Pandharipande \cite{P} and Okounkov \cite{Ok}, which first used special cases of KP and 2D-Toda $\tau$-functions as generating functions for single and double Hurwitz numbers, in which all branchings other than the ones specified at one or two points are required to be simple, and the weighting for these is uniform. The general case gives infinite parametric families of weighted enumerations of $n$-fold branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of the symmetric group $S_n$ generated by transpositions. They are derived from parametric families of weight generating functions by defining associated symmetric functions of an arbitrary number of indeterminates multiplicatively. Replacing one set of indeterminates in the Cauchy-Littlewood generating function \cite{Mac} by the commuting elements of the group algebra introduced by Jucys \cite{Ju} and Murphy \cite{Mu}, while evaluating the other set at parameter values defining the weightings, provides parametric families of elements of the center $\mathbf{Z}(\mathbf{C}[S_n])$ of the group algebra. Expanding these as sums over products of dual bases of the algebra of symmetric functions, and applying them multiplicatively to the basis of the center of the group algebra consisting of cycle-type sums $\{C_\mu\}$, leads to an identification of both the geometrical and the combinatorial significance of the weighted Hurwitz numbers.
It was shown in \cite{GH1, GH2, HO2} that all previously studied examples of generating functions for Hurwitz numbers \cite{P, Ok, GGN1, GGN2, BEMS, Z, KZ, AMMN, AC1, AC2} may be viewed as special cases of this general construction, and several new examples were introduced, including three forms of quantum Hurwitz numbers \cite{GH2} and their multispecies generalizations \cite{H1}. Other notions of weighted or quantum Hurwitz numbers have also been considered, including those for branched coverings of $\mathbf{R} \mathbf{P}^2$, whose generating functions are BKP $\tau$-functions \cite{NOr1, NOr2}, and Hurwitz numbers enumerating factorizations of Singer cycles \cite{LRS}. In the following, we extend the special class of weighted Hurwitz numbers introduced in \cite{GH1, GH2} by introducing an additional pair $(q,t)$ of deformation parameters in the definition of the weight generating functions. The result is to replace the Cauchy-Littlewood formula, which generates dual bases in the algebra of symmetric functions with respect to the standard scalar product pairing, by the corresponding one for Macdonald polynomials \cite{Mac}. In \autoref{hypergeometric_tau} the general method is developed, and used to derive an infinite parametric family of 2D Toda $\tau$-functions of hypergeometric type depending not only on the previously introduced classical weight-determining parameters, but also on the additional pair $(q,t)$ of quantum deformation parameters entering in the definition of the scalar product. These are shown to be generating functions for infinite parametric families of quantum weighted Hurwitz numbers when expanded in the basis of products of power sum symmetric functions. The combinatorial significance, in terms of quantum weighted paths in the Cayley graph, is derived in \autoref{combinatorial_hurwitz}, and the geometric one, in terms of quantum weighted enumeration of branched covers, in \autoref{geometric_hurwitz}.
\autoref{examples} is devoted to various examples obtained by specializing the parameters and taking limits. It is shown how all previously studied cases of weighted Hurwitz numbers, whether classical or quantum, may be recovered within this more general setting, and a number of new families are added, including those associated to Hall-Littlewood and Jack polynomials. \section{2D Toda $\tau$-functions, Macdonald polynomials and quantum weighted Hurwitz numbers} \label{hypergeometric_tau} \subsection{The generating function for Macdonald polynomials} Following \cite{Mac} (sec. VI. 2), we define a $2$-parameter family of scalar products $(\ , \ )_{(q, t)}$ on the algebra $\Lambda$ of symmetric functions in an infinite number of indeterminates ${\bf x} = (x_1, x_2, \dots )$, such that the power sum symmetric functions are orthogonal: \begin{equation} ( p_\lambda, p_\mu)_{(q, t)} := z_\lambda(q,t) \delta_{\lambda, \mu} \end{equation} where \begin{equation} p_\lambda := \prod_{i=1}^{\ell(\lambda)} p_{\lambda_i} \in \Lambda, \qquad p_j := \sum_{i}x_i^j, \quad j\in \mathbf{N} \end{equation} are the power sum symmetric functions corresponding to the integer partition \begin{equation} \lambda= \lambda_1 \ge \cdots \ge \lambda_{\ell(\lambda)} \end{equation} of length $\ell(\lambda)$. The normalization factor $z_\mu(q,t)$ is defined as \begin{equation} z_\mu(q,t) := z_\mu n_\mu (q, t), \quad z_\mu := \prod_{i\ge 1} i^{m_i(\mu)}(m_i(\mu))! , \end{equation} where $m_i(\mu)$ is the number of parts of $\mu$ equal to $i$ and \begin{equation} n_\mu(q, t) := \prod_{i=1}^{\ell(\mu)} {1 - q^{\mu_i} \over 1- t^{\mu_i}}. \end{equation} The Macdonald polynomials $\{P_\lambda({\bf x}, q, t)\}$ may be defined \cite[Chapt.
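For concreteness, the normalization factor $z_\mu(q,t) = z_\mu\, n_\mu(q,t)$ can be computed directly from a partition; a small sketch using exact rationals:

```python
import math
from collections import Counter
from fractions import Fraction

def z_classical(mu):
    """z_mu = prod_i i^{m_i(mu)} * m_i(mu)! over the distinct parts i of mu."""
    return math.prod(i**m * math.factorial(m) for i, m in Counter(mu).items())

def z_qt(mu, q, t):
    """z_mu(q,t) = z_mu * prod_i (1 - q^{mu_i}) / (1 - t^{mu_i})."""
    n = Fraction(1)
    for part in mu:
        n *= (1 - q**part) / (1 - t**part)
    return z_classical(mu) * n
```

For example, $\mu = (2,1,1)$ has $m_1 = 2$, $m_2 = 1$, so $z_\mu = 1^2\, 2! \cdot 2^1\, 1! = 4$, and $n_\mu(q,t)$ contributes one factor per part; at $q = t$ the factors cancel and $z_\mu(q,t)$ reduces to the classical $z_\mu$.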
VI]{Mac} as the unique basis for $\Lambda$ determined by two conditions: orthogonality with respect to the scalar product $( \, , \, )_{(q, t)}$ \begin{equation} (P_\lambda, P_\mu)_{(q, t)} = 0 \quad \text{ if } \lambda \neq \mu, \end{equation} and lower triangular normalized decomposition (with respect to the dominance partial ordering \cite[Sec. I.1, pg. 7]{Mac}) in the basis $\{ m_\lambda\}$ of monomial symmetric functions \begin{equation} P_\lambda({\bf x}, q,t ) = m_\lambda({\bf x}) + \sum_{\mu < \lambda} K_{\lambda \mu} (q,t) m_\mu ({\bf x}). \end{equation} The generating function \cite{Mac} \begin{equation} \Pi({\bf x}, {\bf y}, q, t) := \prod_{ij} {(t x_i y_j; q)_\infty \over (x_i y_j; q)_\infty} \label{Pi_xy_qt} \end{equation} where \begin{equation} (u; q)_\infty := \prod_{k=0}^\infty (1 - u q^k) \end{equation} is the (infinite) quantum Pochhammer symbol, has the following alternative expansions \cite[Sec.~VI.2]{Mac} in terms of products of symmetric functions in the pair of infinite sequences of indeterminates ${\bf x}= (x_1, x_2, \dots )$, ${\bf y } = (y_1, y_2, \dots )$, \begin{eqnarray} \Pi({\bf x}, {\bf y}, q, t)&\& = \sum_\lambda b_\lambda(q,t) P_\lambda ({\bf x}, q,t) P_{\lambda} ({\bf y}, q,t) \\ &\& = \sum_\lambda g_\lambda ({\bf x}, q,t) \, m_\lambda ({\bf y}) \\ &\& = \sum_\lambda m_\lambda ({\bf x}) \, g_\lambda ({\bf y}, q,t) , \end{eqnarray} where \begin{equation} b_\lambda(q,t) := (P_\lambda, P_\lambda)_{(q, t)}^{-1} \end{equation} and \begin{equation} g_\lambda ({\bf x}, q,t) := \prod_{i=1}^{\ell(\lambda)} g_{\lambda_i} ({\bf x}, q,t), \end{equation} where \begin{equation} g_j({\bf x}, q,t):= b_{(j)}(q,t) P_{(j)} ({\bf x}, q,t) = \sum_{\mu, \, |\mu|=j} z_\mu(q,t)^{-1} p_\mu({\bf x}). \end{equation} The basis $\{ g_\lambda({\bf x}, q, t)\}$ provides the $(q,t)$ analog of the elementary $\{e_\lambda\}$ and complete $\{h_\lambda\}$ symmetric function bases \cite{Mac}, interpolating between them in the case of Hall-Littlewood polynomials ($q=0$). 
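As a purely numerical sketch (not part of the paper; the parameter values are arbitrary), the single-variable specialization of the expansion above can be checked directly: for one indeterminate $c$, the $z^j$ coefficient of $\prod_{k\ge 0}(1-tczq^k)/(1-czq^k)$ should equal $c^j \sum_{|\mu|=j} z_\mu(q,t)^{-1}$.

```python
# Truncated-series check of the g_j expansion in one variable (a sketch).
q, t, c = 0.3, 0.7, 0.5
ORDER = 3          # truncation order in z
K = 200            # truncation of the infinite product over k

def mul(a, b):
    """Multiply two truncated power series in z (lists of coefficients)."""
    out = [0.0] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:
                out[i + j] += ai * bj
    return out

series = [1.0] + [0.0] * (ORDER - 1)
for k in range(K):
    u = c * q**k
    numer = [1.0, -t * u, 0.0]                 # factor (1 - t u z)
    denom = [u**m for m in range(ORDER)]       # factor 1/(1 - u z), truncated
    series = mul(series, mul(numer, denom))

# z_mu(q,t) = z_mu * n_mu(q,t) for mu = (1), (2), (1,1)
z1  = (1 - q) / (1 - t)
z2  = 2 * (1 - q**2) / (1 - t**2)
z11 = 2 * ((1 - q) / (1 - t))**2

g1 = c * (1 / z1)
g2 = c**2 * (1 / z2 + 1 / z11)
assert abs(series[1] - g1) < 1e-9
assert abs(series[2] - g2) < 1e-9
```

The same check can be repeated for any $(q,t)$ with $|q|<1$, where the infinite product converges rapidly.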
\subsection{Quantum weight generating function} We now proceed as in \cite{GH1, GH2} to define parametric families within the center $\mathbf{Z}(\mathbf{C}[S_n])$ of the group algebra $\mathbf{C}[S_n]$ by identifying the indeterminates $(x_1, x_2, \dots )$ with a given set of constants $(c_1, c_2, \dots)$ and the $(y_1, y_2, \dots )$ with $z$ times the commuting Jucys-Murphy elements $\mathcal{J}:= (\mathcal{J}_1, \dots, \mathcal{J}_n)$ of $\mathbf{C}[S_n]$ \cite{Ju, Mu, DG}, defined as: \begin{equation} \mathcal{J}_b\coloneqq \sum_{a=1}^{b-1}(a\,b), \quad b=1, \dots, n, \quad n \in \mathbf{N}^+. \end{equation} We define the {\em quantum weight generating function} as \begin{equation} M(q,t, {\bf c}, z ) := \prod_{i=1}^\infty M(q,t, z c_i) = \sum_{j=0}^\infty g_j({\bf c}, q,t) z^j, \label{M_qtcz} \end{equation} where \begin{equation} M(q, t, z):= {(tz; q)_{\infty} \over (z; q)_{\infty}} = \prod_{k=0}^\infty {1 - t z q^k \over 1- z q^k }. \label{M_qtz} \end{equation} The Jucys-Murphy elements generate a commutative subalgebra of the group algebra $\mathbf{C}[S_n]$, and any symmetric polynomial in them lies in the center $\mathbf{Z}(\mathbf{C}[S_n])$. The resulting central element $ M_n(q,t, {\bf c}, z\mathcal{J}) \in \mathbf{Z}(\mathbf{C}[S_n])$ is \begin{eqnarray} M_n(q,t, {\bf c}, z\mathcal{J}) &\&:= \prod_{a=1}^n M(q,t, {\bf c}, z\mathcal{J}_a) = \Pi({\bf c}, z\mathcal{J}, q, t) \label{Pihat} \\ &\& = \sum_\lambda z^{|\lambda |} g_\lambda ({\bf c}, q,t) m_\lambda (\mathcal{J}) \label{Pihat_combinatorial} \\ &\& = \sum_\lambda z^{|\lambda |}m_\lambda ({\bf c}) g _\lambda (\mathcal{J}, q,t). \label{Pihat_geometrical} \end{eqnarray} \subsection{Bases for $\mathbf{Z}(\mathbf{C}[S_n])$ and the eigenvalues of $M_n(q, t, {\bf c}, z\mathcal{J})$} Proceeding as in \cite{GH1, GH2, HO2}, we make use of two standard bases of $\mathbf{Z}(\mathbf{C}[S_n])$, both labelled by partitions of $n$. 
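The two properties of the Jucys-Murphy elements used here (commutativity, and centrality of symmetric polynomials in them) can be verified directly for $n=3$; the following is a minimal sketch (not from the paper), representing group-algebra elements as dictionaries mapping permutations to coefficients.

```python
from itertools import permutations
from collections import defaultdict

# Permutations of {1,2,3} stored 0-indexed as tuples of images.
def compose(p, r):                 # (p*r)(i) = p(r(i))
    return tuple(p[r[i]] for i in range(len(r)))

def transposition(a, b, n=3):      # the transposition (a b), 1-based labels
    im = list(range(n))
    im[a - 1], im[b - 1] = im[b - 1], im[a - 1]
    return tuple(im)

def alg_mul(x, y):                 # product in the group algebra C[S_3]
    out = defaultdict(int)
    for p, cp in x.items():
        for r, cr in y.items():
            out[compose(p, r)] += cp * cr
    return dict(out)

# J_1 = 0, J_2 = (1 2), J_3 = (1 3) + (2 3)
J2 = {transposition(1, 2): 1}
J3 = {transposition(1, 3): 1, transposition(2, 3): 1}

# The Jucys-Murphy elements commute ...
assert alg_mul(J2, J3) == alg_mul(J3, J2)

# ... and p_1(J) = J_2 + J_3 is the class sum of all transpositions,
# hence central in C[S_3].
p1J = {transposition(1, 2): 1, transposition(1, 3): 1, transposition(2, 3): 1}
for g in permutations(range(3)):
    gd = {tuple(g): 1}
    assert alg_mul(gd, p1J) == alg_mul(p1J, gd)
```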
The first consists of the cycle-type sums $\{C_\mu\}$: \begin{equation} C_\mu = \sum_{h\in \cyc(\mu)} h, \end{equation} where $\cyc(\mu) \subset S_n$ denotes the conjugacy class consisting of elements whose cycle lengths are equal to the parts $\mu_i$ of the partition $\mu$. The second consists of the orthogonal idempotents $\{F_\lambda\}$, corresponding to the irreducible representations of $S_n$, labelled by partitions $\lambda$ of weight $|\lambda|=n$. These are linearly related to the cycle-type sums through the equivalent, under the characteristic map, of the Frobenius character formula \cite{Frob, FH} \begin{equation} F_\lambda = h_\lambda^{-1} \sum_{\mu, \, |\mu|=|\lambda|} \chi_\lambda(\mu) C_\mu, \quad C_\mu = z_\mu^{-1}\sum_{\lambda, \, |\lambda| = |\mu|} \chi_\lambda(\mu)h_\lambda F_\lambda. \label{F_lambda_C_mu} \end{equation} Here $\chi_\lambda(\mu)$ is the character of the irreducible representation of Young symmetry type $\lambda$, evaluated on the class of cycle type $\mu$, and \begin{equation} h_\lambda := \det\left( {1 \over (\lambda_i - i +j)!}\right)^{-1} \end{equation} is the product of the hook lengths of the Young diagram corresponding to the partition $\lambda$. The elements $F_\lambda$ satisfy the orthogonality relations \begin{equation} F_\lambda F_\mu = F_\lambda \ \delta_{\lambda \mu}, \label{F_lambda_orthog} \end{equation} which imply that all elements of $\mathbf{Z}(\mathbf{C}[S_n])$ act diagonally under multiplication in this basis. Eqs.~(\ref{F_lambda_C_mu}) and (\ref{F_lambda_orthog}) imply \begin{equation} C_\mu F_\lambda = {h_\lambda\chi_\lambda(\mu) \over z_\mu} F_\lambda, \label{C_mu_F_lambda} \end{equation} which means that the eigenvalue of $C_\mu$ on the basis element $F_\lambda$ is the {\it central character} \begin{equation} \phi_\lambda(\mu):={h_\lambda\chi_\lambda(\mu) \over z_\mu} . 
\label{central_char} \end{equation} It is a basic property \cite{Ju, Mu, DG} that the eigenvalues of any central element $G(\mathcal{J}) \in \mathbf{Z}(\mathbf{C}[S_n])$ formed from a symmetric function $G\in \Lambda$ by identifying the indeterminates with the Jucys-Murphy elements are given by evaluating $G$ on the content $\{j-i\}_{(i,j) \in \lambda}$ of the partition $\lambda$: \begin{equation} G(\mathcal{J})F_\lambda = G(\{j-i\}) F_\lambda, \quad (i,j) \in \lambda. \end{equation} In particular, if $G({\bf x})$ is formed from a product of the same expression in each of the indeterminates ${\bf x} = (x_1, x_2, \dots)$, \begin{equation} G({\bf x}) = \prod_{i} g(x_i), \end{equation} the eigenvalues of $G(\mathcal{J})$ are given by the content product formula \begin{equation} G(\mathcal{J})F_\lambda = \prod_{(i,j) \in \lambda} g(j-i)F_\lambda. \label{content_product_eigenvalue} \end{equation} It follows that the eigenvalues $r_\lambda^{M(q, t, {\bf c}, z)}$ of $M_n( q, t, {\bf c}, z\mathcal{J}) $ are given by the content product formula obtained from evaluation of the generating function $M(q,t,{\bf c}, z)$ at $\{z(j-i)\}$: \begin{eqnarray} M_n(q,t, {\bf c}, z\mathcal{J}) F_\lambda &\&= r_\lambda^{M(q,t, {\bf c}, z)}F_\lambda, \label{Pi_hat_r_lambda}\\ r_\lambda^{M(q, t, {\bf c}, z)} &\&=\prod_{(i, j)\in \lambda} M(q,t, {\bf c}, z(j-i)) =\prod_{(i,j)\in \lambda}\prod_{k=1}^\infty {(tz(j-i) c_k; q)_\infty \over (z(j-i)c_k; q)_\infty }. 
\label{r_lambda_q_p} \end{eqnarray} More generally, for an arbitrary integer $N \in \mathbf{Z}$, we define \begin{equation} r_\lambda^{M(q,t, {\bf c}, z)}(N) := r_0^{M(q,t, {\bf c}, z)}(N)\prod_{(i,j)\in \lambda} M(q,t, {\bf c}, z(N+j-i)), \label{r_lambda_G} \end{equation} where \begin{eqnarray} r^{M(q,t, {\bf c}, z)}_0(N) &\&:= \prod_{j=1}^{N-1} M(q,t, {\bf c}, (N-j)z), \quad r_0^{M(q,t, {\bf c}, z)}(0) := 1, \cr r^{M(q,t, {\bf c}, z)}_0(-N) &\&:= \prod_{j=1}^{N} M^{-1}(q,t, {\bf c}, (j-N)z), \quad N\geq 1, \label{content_product_G_N} \end{eqnarray} and hence \begin{equation} r_\lambda^{M(q,t, {\bf c}, z)} = r_\lambda^{M(q,t, {\bf c}, z)} (0) . \end{equation} \subsection{The 2D Toda $\tau$-function $\tau^{M(q,t, {\bf c}, z)}(N, {\bf t}, {\bf s})$ as generating function for double quantum Hurwitz numbers $F^d_{M(q,t, {\bf c})} (\mu, \nu)$, $H^d_{M(q,t, {\bf c})} (\mu, \nu)$} \label{double_quantum_hurwitz} The general theory \cite{Ta, Takeb, UTa, OrS, HO1} implies that the following diagonal double Schur function expansion \begin{equation} \tau^{M(q,t, {\bf c}, z)}(N, {\bf t}, {\bf s}) := \sum_{\lambda} r_\lambda^{M(q,t, {\bf c}, z)}(N) s_\lambda({\bf t}) s_\lambda({\bf s}), \label{tau_G_double_schur} \end{equation} defines a 2D Toda $\tau$-function of hypergeometric type, where \begin{equation} {\bf t} = (t_1, t_2, \dots), \quad {\bf s} = (s_1, s_2, \dots) \end{equation} are the 2D Toda flow variables, which may be identified in this notation in terms of the power sums \begin{equation} t_i = \frac{p_i}{i}, \quad s_i = \frac{p'_i}{i} \end{equation} in two independent sets of variables. We now apply the procedure developed in \cite{GH2} for deriving both the geometrical and combinatorial versions of weighted Hurwitz numbers associated to a weight generating function $G(z)$. 
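Before proceeding, the eigenvalue relation (\ref{C_mu_F_lambda}) and the content-sum consequence of (\ref{content_product_eigenvalue}) can be checked directly for $n=3$. The sketch below (hypothetical helper code, not from the paper) builds the idempotents $F_\lambda$ from the character table of $S_3$ and verifies that $C_{(2,1)} = p_1(\mathcal{J})$ acts on $F_\lambda$ with eigenvalue $\sum_{(i,j)\in\lambda}(j-i)$.

```python
from fractions import Fraction
from itertools import permutations
from collections import defaultdict

def compose(p, r):
    return tuple(p[r[i]] for i in range(3))

def cycle_type(p):
    seen, parts = set(), []
    for s in range(3):
        if s not in seen:
            length, x = 0, s
            while x not in seen:
                seen.add(x); x = p[x]; length += 1
            parts.append(length)
    return tuple(sorted(parts, reverse=True))

def alg_mul(x, y):
    out = defaultdict(Fraction)
    for p, cp in x.items():
        for r, cr in y.items():
            out[compose(p, r)] += cp * cr
    return {k: v for k, v in out.items() if v}

S3 = list(permutations(range(3)))
C = defaultdict(dict)                       # cycle-type sums C_mu
for p in S3:
    C[cycle_type(p)][p] = Fraction(1)

# Character table of S_3: chi[lambda][mu]; hook products h_lambda; z_mu.
chi = {(3,):    {(1, 1, 1): 1, (2, 1):  1, (3,):  1},
       (2, 1):  {(1, 1, 1): 2, (2, 1):  0, (3,): -1},
       (1, 1, 1): {(1, 1, 1): 1, (2, 1): -1, (3,):  1}}
h = {(3,): 6, (2, 1): 3, (1, 1, 1): 6}
z = {(1, 1, 1): 6, (2, 1): 2, (3,): 3}
contents = {(3,): [0, 1, 2], (2, 1): [0, 1, -1], (1, 1, 1): [0, -1, -2]}

for lam in chi:
    # F_lambda = h_lambda^{-1} sum_mu chi_lambda(mu) C_mu
    F = defaultdict(Fraction)
    for mu in z:
        for p, c in C[mu].items():
            F[p] += Fraction(chi[lam][mu], h[lam]) * c
    F = dict(F)
    phi = Fraction(h[lam] * chi[lam][(2, 1)], z[(2, 1)])   # central character
    lhs = alg_mul(C[(2, 1)], F)
    rhs = {p: phi * c for p, c in F.items() if phi * c}
    assert lhs == rhs                    # C_(2,1) F_lambda = phi F_lambda
    assert phi == sum(contents[lam])     # eigenvalue = sum of contents
```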
Recall that the pure Hurwitz numbers $H(\mu^{(1)}, \dots, \mu^{(k)})$ may be viewed either as the number of $n$-sheeted branched coverings of the Riemann sphere having $k$ branch points with ramification profiles given by the partitions $\{\mu^{(i)}\}_{i=1,\dots, k}$ of weights $|\mu^{(i)}| =n$, weighted by the inverse of the order of the automorphism group or, equivalently, as the number of ways in which the identity element in $S_n$ can be expressed as a product of elements belonging to the conjugacy classes $\{\cyc(\mu^{(i)})\}$. A convenient way to express the latter is through the formula \begin{equation} H(\mu^{(1)}, \dots, \mu^{(k)}) = \frac{1}{n!} [\mathrm{Id}] \prod_{i=1}^k C_{\mu^{(i)}}, \label{combin_hurwitz_id} \end{equation} where $[\mathrm{Id}] $ means taking the component of the identity element within the cycle sum basis $\{C_\mu\}$ of $\mathbf{Z}(\mathbf{C}[S_n])$ or, more generally, \begin{equation} \prod_{i=1}^k C_{\mu^{(i)}} = \sum_{\nu, \, |\nu| = |\mu^{(i)}|} H(\mu^{(1)}, \dots , \mu^{(k)} ,\nu)\, z_\nu C_\nu, \label{combin_hurwitz} \end{equation} which is equivalent to the Frobenius-Schur formula (see \cite[Appendix~A]{LZ}), as shown in \cite[Sec.~5.2]{GH2}: \begin{equation} H(\mu^{(1)}, \dots, \mu^{(k)}) =\sum_{\lambda} h_\lambda^{k-2} \prod_{i=1}^k \frac{\chi_\lambda(\mu^{(i)})}{z_{\mu^{(i)}}}. \label{Frob_Schur} \end{equation} Following \cite{GH2, HO2}, we now consider two notions of quantum weighted Hurwitz numbers associated to the generating function (\ref{M_qtcz}): combinatorial and geometrical. 
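Both descriptions can be compared in the smallest nontrivial case. The brute-force sketch below (an illustration, not from the paper; the assumption is $n=3$ with two simple branch points) counts factorizations of the identity in $S_3$ into two transpositions and compares with (\ref{Frob_Schur}).

```python
from fractions import Fraction
from itertools import permutations, product

def compose(p, r):
    return tuple(p[r[i]] for i in range(3))

def cycle_type(p):
    seen, parts = set(), []
    for s in range(3):
        if s not in seen:
            l, x = 0, s
            while x not in seen:
                seen.add(x); x = p[x]; l += 1
            parts.append(l)
    return tuple(sorted(parts, reverse=True))

S3 = list(permutations(range(3)))
transpositions = [p for p in S3 if cycle_type(p) == (2, 1)]
identity = (0, 1, 2)

# H((2,1),(2,1)) = (1/n!) [Id] C_(2,1)^2: count pairs multiplying to Id.
count = sum(1 for a, b in product(transpositions, repeat=2)
            if compose(a, b) == identity)
H_comb = Fraction(count, 6)

# Frobenius-Schur: sum_lambda h_lambda^{k-2} prod_i chi_lambda(mu_i)/z_mu_i,
# here k = 2, mu_1 = mu_2 = (2,1), z_(2,1) = 2.
chi = {(3,): 1, (2, 1): 0, (1, 1, 1): -1}   # chi_lambda evaluated on (2,1)
H_frob = sum(Fraction(chi[lam], 2) ** 2 for lam in chi)

assert H_comb == H_frob == Fraction(1, 2)
```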
\subsubsection{Combinatorial quantum weighted Hurwitz numbers \cite{GH2}} \begin{definition}{\em Signature of paths \cite{GH2}.} For every $d$-step path in the Cayley graph of $S_n$ generated by transpositions $(a,b)$, $a< b$, starting at the conjugacy class $\cyc(\nu)$ and ending at the class $\cyc(\mu)$, define its {\em signature} $\lambda$ as the partition of weight $|\lambda|=d$ whose parts are the numbers of times each particular second element $b$ appears amongst the sequence of transpositions $(a_1b_1) \cdots (a_d b_d)$ forming the path from an element $h\in \cyc(\nu)$ to $(a_1 b_1) \cdots (a_d b_d)h \in \cyc(\mu)$. \end{definition} We recall the following Lemma from \cite[Lemma 2.3]{GH2}. \begin{lemma} \label{generating_weighted_paths} Multiplication by $m_\lambda(\mathcal{J})$ defines an endomorphism of $\mathbf{Z}(\mathbf{C}[S_n])$ which, expressed in the $\{C_\mu\}$ basis, is given by \begin{equation} m_\lambda(\mathcal{J}) C_\mu = \sum_{\nu, \, \abs{\nu}=\abs{\mu}} m^\lambda_{\mu \nu} \frac{z_\nu}{\abs{\nu}!} C_\nu, \end{equation} where \begin{equation} \tilde{m}^\lambda_{\mu \nu}: = {|\lambda|! \over \prod_{i=1}^{\ell(\lambda)} \lambda_i !}m^\lambda_{\mu \nu} \end{equation} is the total number of $\abs{\lambda}$-step paths in the Cayley graph of $S_n$ from $\cyc(\nu)$ to $\cyc(\mu)$ with signature $\lambda$. \end{lemma} Combining this with (\ref{Pihat_combinatorial}) gives \begin{equation} M_n(q,t, {\bf c}, z\mathcal{J}) \, C_\mu = \sum_{d=0}^\infty z^d\sum_{\nu, |\nu|=|\mu|=n} F^d_{M(q,t,{\bf c})} (\mu, \nu) z_\nu C_\nu, \label{Pi_Cmu_F_Gd} \end{equation} where \begin{equation} F^d_{M(q,t,{\bf c})}(\mu, \nu) \coloneqq {1 \over n!} \sum_{\lambda, \ \abs{\lambda} =d} g_\lambda({\bf c},q,t) m^\lambda_{\mu \nu} \label{Fd_Mqt_def} \end{equation} is the quantum weighted combinatorial Hurwitz number for such paths, and \begin{equation} M(q,t,{\bf c}) := M(q,t,{\bf c}, z)\vert_{z=1}. 
\end{equation} (Note that, whereas the infinite product (\ref{M_qtcz}) defining the generating function $M(q,t, {\bf c}, z)$ need not, in general, represent a convergent power series in $z$, it is, in fact, convergent for all values of $z$ if $|q|<1$ and $|c_i|<1$ for all $i$.) Combining this with the results of the previous section leads to our first main result: the 2D Toda $\tau$-function defined in (\ref{tau_G_double_schur}) for $N=0$ is the generating function for the quantum weighted combinatorial Hurwitz numbers (\ref{Fd_Mqt_def}). \begin{theorem} \label{combinatorial_hurwitz} Expanding $ \tau^{M(q,t, {\bf c}, z)}({\bf t}, {\bf s}) :=\tau^{M(q,t, {\bf c}, z)}(0, {\bf t}, {\bf s})$ in the basis consisting of products of power sum symmetric functions, the coefficients are the combinatorial quantum Hurwitz numbers (\ref{Fd_Mqt_def}): \begin{equation} \tau^{M(q,t, {\bf c}, z)} ({\bf t}, {\bf s}) = \sum_{d=0}^\infty \sum_{\substack{\mu, \nu \\ \abs{\mu} = \abs{\nu}}} z^d F^d_{M(q,t,{\bf c})}(\mu, \nu) p_\mu({\bf t}) p_\nu({\bf s}). \label{tau_Mqt_F} \end{equation} \end{theorem} \begin{proof} Combining (\ref{Pi_Cmu_F_Gd}) with (\ref{F_lambda_C_mu}), and using eq.~(\ref{Pi_hat_r_lambda}), gives \begin{equation} \sum_{d=0}^\infty z^d \sum_{\nu, |\nu|=|\mu|} F^d_{M(q,t,{\bf c})}(\mu, \nu) \chi_\lambda(\nu) = { \chi_\lambda(\mu) \over z_\mu} r_\lambda^{M(q,t, {\bf c}, z)}. \label{r_lambda_chi_lambda} \end{equation} Substituting the Frobenius character formula \begin{equation} s_\lambda({\bf t})= \sum_{\mu, \, |\mu| =|\lambda|}z_\mu^{-1} \chi_\lambda(\mu) p_\mu({\bf t} ) , \quad s_\lambda({\bf s} )= \sum_{\nu, \, |\nu| =|\lambda|}z_\nu^{-1} \chi_\lambda(\nu) p_\nu({\bf s} ) , \end{equation} and (\ref{r_lambda_chi_lambda}) into (\ref{tau_G_double_schur}), for $N=0$, and using the orthogonality of characters, we obtain (\ref{tau_Mqt_F}). 
\end{proof} \subsubsection{Enumerative geometrical quantum weighted Hurwitz numbers} We recall the two types of weighting factors appearing in the definition of quantum Hurwitz numbers in ref.~\cite{GH2}: \begin{eqnarray} W_{E(q)} (\mu^{(1)}, \dots, \mu^{(k)}) &\& \coloneqq {1\over \abs{\aut(\lambda)}}\sum_{\sigma\in S_k} \sum_{0 \le i_1 < \cdots < i_k}^\infty q^{i_1 \ell^*(\mu^{(\sigma(1))})} \cdots q^{i_k \ell^*(\mu^{(\sigma(k))})} \cr &\&={1\over \abs{\aut(\lambda)}}\sum_{\sigma\in S_k} \frac{q^{(k-1) \ell^*(\mu^{(\sigma(1))})} \cdots q^{\ell^*(\mu^{(\sigma(k-1))})}}{ (1- q^{\ell^*(\mu^{(\sigma(1))})}) \cdots (1- q^{\ell^*(\mu^{(\sigma(1))})} \cdots q^{\ell^*(\mu^{(\sigma(k))})})}, \cr &\& \label{W_E_q} \end{eqnarray} \begin{eqnarray} W_{H(q)} (\nu^{(1)}, \dots, \nu^{(\tilde{k})}) &\& \coloneqq {(-1)^{\ell^*(\tilde{\lambda})}\over \abs{\aut(\tilde{\lambda})}} \sum_{\sigma\in S_{\tilde{k}}} \sum_{0 \le i_1 \le \cdots \le i_{\tilde{k}}}^\infty q^{i_1 \ell^*(\nu^{(\sigma(1))})} \cdots q^{i_{\tilde{k}} \ell^*(\nu^{(\sigma({\tilde{k}}))})} \cr &\&= {(-1)^{\ell^*(\tilde{\lambda})}\over \abs{\aut(\tilde{\lambda})}}\sum_{\sigma\in S_{\tilde{k}}} \frac{1}{ (1- q^{\ell^*(\nu^{(\sigma(1))})}) \cdots (1- q^{\ell^*(\nu^{(\sigma(1))})} \cdots q^{\ell^*(\nu^{(\sigma({\tilde{k}}))})})}, \cr &\& \label{W_H_q} \end{eqnarray} where $\lambda$ is the partition with parts $(\ell^*(\mu^{(1)}), \dots, \ell^*(\mu^{(k)}))$, $\tilde{\lambda}$ is the one with parts $(\ell^*(\nu^{(1)}), \dots, \ell^*(\nu^{({\tilde{k}})}))$, and $\abs{\aut(\lambda)}$ is the order of the automorphism group of $\lambda$: \begin{equation} \abs{\aut(\lambda)}= \prod_{i=1}^{\ell(\lambda)} m_i(\lambda)!. \end{equation} We denote the product of these two weights \begin{equation} W_q (\mu^{(1)}, \dots, \mu^{(k)}; \nu^{(1)}, \dots, \nu^{({\tilde{k}})}) := W_{E(q)} (\mu^{(1)}, \dots, \mu^{(k)})W_{H(q)} (\nu^{(1)}, \dots, \nu^{({\tilde{k}})}) . 
\end{equation} Recall also the definition of the Pochhammer symbol $(u)_\lambda $ associated with a partition $\lambda$ \begin{equation} (u)_\lambda := \prod_{i=1}^{\ell(\lambda)}\prod_{j=1}^{\lambda_i}(u+j-i) \end{equation} and the following Lemma (cf.~\cite{OrS}), which follows from the Frobenius character formula. \begin{lemma} \label{Poch_Frob} The Pochhammer symbol may be expressed as \begin{equation} (u)_\lambda =s_\lambda({\bf t}(u))\, h_\lambda = u^{|\lambda|}\left(1+ h_\lambda\sideset{}{'} \sum_{\mu, \, \abs{\mu}=\abs{\lambda}}\frac{\chi_\lambda(\mu)}{z_\mu} u^{-\ell^*(\mu)} \right), \label{pochhammer_frobenius} \end{equation} where \begin{equation} {\bf t}(u) := (u, \frac{u}{2}, \frac{u}{3}, \dots ), \label{t_u} \end{equation} and $\sideset{}{'}\sum_{\mu, \, |\mu|=|\lambda|}$ denotes the sum over all partitions other than the cycle type of the identity element $(1)^{\abs{\lambda}}$. \end{lemma} It is useful to know how any given symmetric combination of the Jucys-Murphy elements may be represented in the basis of cycle-type sums (see e.g. \cite{Las}). The following result shows how to do this for the symmetric functions $g_j(\mathcal{J}, q, t)$. 
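The first equality of Lemma \ref{Poch_Frob} can be verified by hand in a small case. The sketch below (not from the paper) checks, for $\lambda = (2,1)$, that $\prod_{(i,j)\in\lambda}(u+j-i)$ agrees with $h_\lambda \sum_\mu z_\mu^{-1}\chi_\lambda(\mu)\, u^{\ell(\mu)}$, using the $S_3$ character table; both sides equal $u^3 - u$.

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two polynomials in u given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# contents j - i of lambda = (2,1): boxes (1,1), (1,2), (2,1)
contents = [0, 1, -1]
poch = [Fraction(1)]
for c in contents:
    poch = poly_mul(poch, [Fraction(c), Fraction(1)])   # factor (u + c)

# h_(2,1) = 3; characters of S_3 on classes (1^3), (2,1), (3); z_mu; ell(mu)
h = 3
chi = {(1, 1, 1): 2, (2, 1): 0, (3,): -1}
z = {(1, 1, 1): 6, (2, 1): 2, (3,): 3}
ell = {(1, 1, 1): 3, (2, 1): 2, (3,): 1}

rhs = [Fraction(0)] * 4
for mu in chi:
    rhs[ell[mu]] += Fraction(h * chi[mu], z[mu])        # coefficient of u^{ell(mu)}

assert poch == rhs                                      # both equal u^3 - u
```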
\begin{theorem} \label{gj_cycle_sums} \begin{equation} g_j(\mathcal{J}, q, t) =\sum_{e=0}^j t^e \sum_{k, \, \tilde{k}=0}^{e, \, j-e} {\hskip-30 pt} \sideset{}{'}\sum_{\substack{\{\mu^{(u)}, \nu^{(v)} \}\\ 1\le u\le k, 1\le v \le {\tilde{k}} \\ |\mu^{(u)}|=|\nu^{(v)}|=n \\ \sum_{u=1}^k \ell^*(\mu^{(u)})=e \\ \sum_{u=1}^k \ell^*(\mu^{(u)})+ \sum_{v=1}^{\tilde{k}} \ell^*(\nu^{(v)} )=j }} {\hskip-40 pt} W_q(\mu^{(1)}, \dots, \mu^{(k)}; \nu^{(1)}, \dots, \nu^{({\tilde{k}})}) \prod_{u=1}^k C_{\mu^{(u)}} \prod_{v=1}^{\tilde{k}} C_{\nu^{(v)}}, \label{gj_cycle_expansion} \end{equation} where, by (\ref{combin_hurwitz}), \begin{equation} \prod_{u=1}^k C_{\mu^{(u)}} \prod_{v=1}^{\tilde{k}} C_{\nu^{(v)}} =\sum_{\nu, \, |\nu|=n} H(\mu^{(1)}, \dots , \mu^{(k)}, \nu^{(1)}, \dots , \nu^{({\tilde{k}})}, \nu) \, z_\nu C_\nu. \end{equation} \end{theorem} \begin{remark}\small \rm Note that the sums appearing in (\ref{gj_cycle_expansion}) are all finite, because the partitions corresponding to the identity class of $S_n$ are excluded and the constraints \begin{equation} \sum_{u=1}^k \ell^*(\mu^{(u)})=e, \quad \sum_{u=1}^k \ell^*(\mu^{(u)})+ \sum_{v=1}^{\tilde{k}} \ell^*(\nu^{(v)} )=j \end{equation} imply that the number of partitions $k + \tilde{k}$ is finite. 
\end{remark} \begin{proof} We start with the expansion \begin{equation} \prod_{a=1}^n \prod_{k=0}^\infty {1- t z \mathcal{J}_a q^k \over 1 - z \mathcal{J}_aq^k} =\sum_{j=0}^\infty g_j(\mathcal{J}, q, t) z^j. \label{Pi_z_JJ} \end{equation} Applying the LHS of (\ref{Pi_z_JJ}) to $F_\lambda$ and using (\ref{content_product_eigenvalue}) gives \begin{eqnarray} \prod_{a=1}^n \prod_{k=0}^\infty {1- t z \mathcal{J}_a q^k \over 1 - z \mathcal{J}_a q^k} F_\lambda &\& = \prod_{(i,j)\in \lambda} \prod_{k=0}^\infty {1 - tz (j-i) q^k \over 1 - z (j-i) q^k} F_\lambda = \prod_{k=0}^\infty t^{|\lambda|}\, {(-{1\over tz q^k})_\lambda \over (-{1\over z q^k})_\lambda}\, F_\lambda \\ &\& {} \cr &\&= \prod_{k=0}^\infty {1+ h_\lambda\sideset{}{'} \sum_{\mu, \, \abs{\mu} =\abs{\lambda}} \frac{\chi_\lambda(\mu)}{z_\mu} (-tzq^k)^{\ell^*(\mu)} \over 1+ h_\lambda\sideset{}{'} \sum_{\nu, \, \abs{\nu}=\abs{\lambda}}\frac{\chi_\lambda(\nu)}{z_\nu} (-z q^k)^{\ell^*(\nu)} }\, F_\lambda, \label{Pi_z_eigenvalue} \end{eqnarray} where Lemma \ref{Poch_Frob} has been used in both the numerator and denominator of (\ref{Pi_z_eigenvalue}). From the relation (\ref{C_mu_F_lambda}) and the fact that $\{F_\lambda\}$ is a basis for the center $\mathbf{Z}(\mathbf{C}[S_n])$, eq.~(\ref{Pi_z_eigenvalue}), together with (\ref{Pi_z_JJ}), is equivalent to the identity \begin{equation} \sum_{j=0}^\infty g_j(\mathcal{J}, q, t) z^j = \prod_{k=0}^\infty {1 + \sideset{}{'}\sum_{\mu, \, |\mu|=n} C_\mu (-tzq^k)^{\ell^*(\mu)} \over 1 + \sideset{}{'}\sum_{\nu, \, |\nu|=n} C_\nu (-zq^k)^{\ell^*(\nu)}}. \label{ratio_cycle_sum_id} \end{equation} Expanding (\ref{ratio_cycle_sum_id}) as a power series in $z$ and $t$, and summing the resulting geometric series in $q$ to obtain (\ref{W_E_q}) and (\ref{W_H_q}), as detailed in \cite{GH2}, gives the result (\ref{gj_cycle_expansion}). 
\end{proof} Now let $\{\{\mu^{(i, u_i)}\}_{u_i =1, \dots , k_i}, \{\nu^{(i, v_i)}\}_{v_i= 1, \dots, \tilde{k}_i}, \mu, \nu \}_{i=1, \dots, l}$ denote the branching profiles of an $n$-sheeted covering of the Riemann sphere with two specified branch points of ramification profile types $(\mu, \nu)$, at $(0, \infty)$, and the rest divided into two classes, I and II, denoted $\{\mu^{(i,u_i)}\}_{u_i =1, \dots , k_i}$ and $\{\nu^{(i, v_i)}\}_{v_i= 1, \dots, \tilde{k}_i}$, respectively. These are further subdivided into $l$ species, or ``colours'', labelled by $i=1, \dots, l$, the elements within each colour group distinguished by the labels $(u_i =1, \dots , k_i)$ and $(v_i =1, \dots , \tilde{k}_i)$. To such a grouping, we assign a partition $\lambda$ of length \begin{equation} \ell(\lambda) = l \end{equation} and weight \begin{equation} d: = |\lambda| = \sum_{i =1}^l \left (\sum_{u_i =1}^{k_i}\ell^*(\mu^{(i, u_i)}) + \sum_{v_i =1}^{\tilde{k}_i}\ell^*(\nu^{(i, v_i)}) \right) = \sum_{i=1}^l d_i, \end{equation} whose parts $(\lambda_1\ge \cdots \ge \lambda_l > 0)$ are equal to the total colengths \begin{equation} d_i := \sum_{u_i =1}^{k_i} \ell^*(\mu^{(i, u_i)}) + \sum_{v_i=1}^{\tilde{k}_i}\ell^*(\nu^{(i, v_i)}), \quad i=1, \dots, l, \end{equation} in weakly decreasing order. By the Riemann-Hurwitz formula, the genus $g$ of the covering curve is given by \begin{equation} 2-2g = \ell(\mu) +\ell(\nu) - d. 
\end{equation} We now assign a weight $W_q (\{\mu^{(i,u_i)}, \nu^{(i, v_i)}\}, {\bf c} ) $ to each such covering, consisting of the product of all the weights $W_{E(q)}(\{\mu^{(i, u_i)}\}_{u_i = 1, \dots, k_i})$, $W_{H(q)}(\{\nu^{(i, v_i)}\}_{v_i = 1, \dots, \, \tilde{k}_i})$ for the subsets of different colour and class, and the weight $m_\lambda({\bf c})$ given by the monomial symmetric function evaluated at the parameters ${\bf c}$: \begin{eqnarray} W_q (\{\mu^{(i,u_i)}, \nu^{(i, v_i)}\}, {\bf c} ) &\&:= W_q (\{\mu^{(i,u_i)}, \nu^{(i, v_i)}\} ) m_\lambda({\bf c}),\\ W_q (\{\mu^{(i,u_i)}, \nu^{(i, v_i)}\} ) &\&:= \prod_{i=1}^lW_{E(q)}(\{\mu^{(i,u_i)}\}_{u_i = 1, \dots, \, k_i}) W_{H(q)}(\{\nu^{(i, v_i)}\}_{v_i = 1, \dots, \, \tilde{k}_i}). \end{eqnarray} Using these weights, for every pair $(d,e)$ of non-negative integers and every pair $(\mu, \nu)$ of partitions of $n$, we define the geometrical quantum weighted Hurwitz numbers $H^{(d,e)}_{({\bf c}, q)}(\mu, \nu) $ as the sum \begin{equation} H^{(d,e)}_{({\bf c}, q)}(\mu, \nu) := z_\nu \sum_{l=0}^{d} {\hskip -35 pt} \sideset{}{'} \sum_{\substack{\{\mu^{(i, u_i)}, \nu^{(i, v_i)}\} , \ k_i\ge 1, \ \tilde{k}_i \ge 1\\ \sum_{i=1}^l \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i,u_i)}) = e, \\ \sum_{i=1}^l\left( \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i, u_i)} ) + \sum_{v_i =1}^{\tilde{k}_i}\ell^*(\nu^{(i, v_i)})\right) =d}} {\hskip-50 pt}W_q (\{\mu^{(i, u_i)}, \nu^{(i, v_i)}\}, {\bf c}) \ H(\{\mu^{(i, u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i =1, \dots , l}} , \{\nu^{(i, v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i =1, \dots , l}}, \mu, \nu). 
\label{Hde_c_q_mu_nu} \end{equation} \begin{theorem} \label{geometric_hurwitz} The combinatorial Hurwitz numbers $F^d_{M(q,t,{\bf c})} (\mu, \nu) $ are polynomials in $t$ of degree $d$ whose coefficients are equal to the geometrical quantum weighted Hurwitz numbers $H^{(d,e)}_{({\bf c}, q)}(\mu, \nu)$: \begin{equation} F^d_{M(q,t,{\bf c})} (\mu, \nu) = \sum_{e=0}^d H^{(d,e)}_{({\bf c}, q)}(\mu, \nu) t^e. \label{Fd_G_Hde_G} \end{equation} Hence $\tau^{M(q,t,{\bf c}, z)} ({\bf t}, {\bf s})$, when expanded in the basis of products of power sum symmetric functions and as a power series in $z$ and $t$, is the generating function for the $H^{(d,e)}_{({\bf c}, q)}(\mu, \nu)$'s: \begin{equation} \tau^{M(q,t,{\bf c}, z)} ({\bf t}, {\bf s}) = \sum_{d=0}^\infty \sum_{e=0}^d \sum_{\substack{\mu, \nu \\ \abs{\mu} = \abs{\nu}}} z^d t^e H^{(d,e)}_{({\bf c}, q)}(\mu, \nu) p_\mu({\bf t}) p_\nu({\bf s}). \label{tau_G_H} \end{equation} \end{theorem} \begin{proof} Substitution of (\ref{gj_cycle_expansion}) into \begin{equation} g_\lambda (\mathcal{J}, q, t) = \prod_{i=1}^{\ell(\lambda)}g_{\lambda_i}(\mathcal{J}, q, t) \end{equation} gives \begin{equation} g_\lambda (\mathcal{J}, q, t) =\sum_{e=0}^{|\lambda|} t^e {\hskip -30 pt} \sideset{}{'}\sum_{\substack{\{\mu^{(i, u_i)}, \nu^{(i, v_i)}\} \\ \sum_{i=1}^{\ell(\lambda)} \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i, u_i)}) = e, \\ \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i, u_i)}) + \sum_{v_i =1}^{\tilde{k}_i}\ell^*(\nu^{(i, v_i)}) = \lambda_{i}}} {\hskip - 40 pt} W_q (\{\mu^{(i, u_i)}, \nu^{(i, v_i)}\} ) \prod_{i=1}^{\ell(\lambda)} \left(\prod_{u_i=1}^{k_i} C_{\mu^{(i, u_i)}}\prod_{v_i=1}^{\tilde{k}_i }C_{\nu^{(i, v_i)}} \right). \end{equation} Combining this with (\ref{Pihat_geometrical}) gives \begin{equation} M_n(q,t, {\bf c}, z\mathcal{J}) C_\mu = \sum_{d=0}^\infty \sum_{e=0}^d z^d t^e \sum_{\substack{\nu\\ |\nu|=|\mu| = n}} H^{(d,e)}_{({\bf c}, q)} (\mu, \nu) C_\nu, \label{Pi_Cmu_H_Gd} \end{equation} where $H^{(d,e)}_{({\bf c}, q)} (\mu, \nu)$ is defined by (\ref{Hde_c_q_mu_nu}). 
Comparing with (\ref{Pi_Cmu_F_Gd}) gives the result (\ref{Fd_G_Hde_G}), and hence (\ref{tau_G_H}). \end{proof} \section{Specializations, limits and examples} \label{examples} By making specific choices for the parameters $\{(c_1, c_2, \dots ), q, t\}$ defining the weight generating function $M(q,t, {\bf c}, z)$, specialized versions of the above quantum weighted Hurwitz numbers result. Taking the limits $(z,t) \rightarrow (0, \infty)$, with $tz$ fixed, gives the quantum deformation of the path weighting by elementary symmetric functions considered in \cite{GH2}. The limit $t\rightarrow 0$ gives the dual case, weighted by the quantum deformation of the path weighting by complete symmetric functions. Other specializations, involving only particular values for the pair $(q,t)$ or their limits, reduce the Macdonald polynomials either to Schur polynomials ($q=t$), to Hall-Littlewood polynomials ($q=0$), or to Jack polynomials ($q=t^\alpha$, $t\rightarrow 1$). In this way we can recover all previously studied versions of weighted Hurwitz numbers, as well as several new examples of interest. \subsection{Classically weighted Hurwitz numbers $(q=t)$} By setting $t=q$ in (\ref{Pihat}) we recover the case of Schur functions and the general classically weighted families of Hurwitz numbers studied in \cite{GH2}. \subsection{The case $c_i= -\delta_{i,1}$ (quantum monotonic paths)} This gives the quantum deformation of the classical case (corresponding to $q=0$) in which the weight generating function is ${1+w \over 1-z}$, with $w = -tz$. If $w=0$, the latter becomes the signed counting problem for branched covers with fixed genus or, equivalently, weakly monotonic paths in the Cayley graph generated by transpositions \cite{GH1, GH2}. When $z=0$, it gives the Hurwitz numbers for Belyi curves (having three branch points, with two of them fixed) of fixed genus or, equivalently, strongly monotonic paths in the Cayley graph generated by transpositions \cite{GH1, GH2}. 
When $q\neq 0,\, t=1$, this is the particular case of the multispecies quantum Hurwitz numbers $F^d_{Q(q,q)}(\mu, \nu)= H^d_{Q(q,q)}(\mu, \nu)$ developed in detail in \cite{GH2}, when there are only two species involved, one of the first class, the other of the second. \subsection{Elementary quantum weighting ($(z,t) \rightarrow (0, \infty)$, $-tz$ fixed $\rightarrow z$)} \label{E_c_q} For this case, the weight generating function is \begin{equation} E(q, {\bf c}, z) := \prod_{k=0}^\infty \prod_{i=1}^\infty(1 +zq^k c_i) = \prod_{i=1}^\infty (-zc_i; q)_{\infty} =: \sum_{j=0}^\infty e_j(q,{\bf c}) z^j, \end{equation} where $e_j(q, {\bf c})$ is the quantum deformation of the elementary symmetric function $e_j({\bf c})$ (the classical limit being $q \rightarrow 0$). Setting $c_i = \delta_{i1}$ reproduces the generating functions for the special quantum weighted Hurwitz numbers denoted $H^d_{E(q)}(\mu, \nu) = F^d_{E(q)}(\mu, \nu)$ that were studied in \cite{GH2}. In the general case, the corresponding element of the center of the group algebra is: \begin{equation} E_n(q, {\bf c}, z\mathcal{J}) := \prod_{a=1}^n E(q, {\bf c}, z\mathcal{J}_a) =\sum_{\lambda} z^{|\lambda|} e_\lambda(q, {\bf c}) m_\lambda(\mathcal{J}) = \sum_{\lambda} z^{|\lambda|} m_\lambda({\bf c})\, e_\lambda(q, \mathcal{J}), \end{equation} where \begin{equation} e_\lambda(q, {\bf c}) := \prod_{i=1}^{\ell(\lambda)} e_{\lambda_i} (q,{\bf c}). 
\end{equation} Applying $E_n(q, {\bf c}, z\mathcal{J}) \in\mathbf{Z}(\mathbf{C}[S_n])$ to the orthogonal idempotents $\{F_\lambda\}$ and the cycle-type sums $\{C_\mu\}$, it follows that the corresponding hypergeometric $2D$ Toda $\tau$-function is \begin{eqnarray} \tau^{E(q, {\bf c}, z)}({\bf t}, {\bf s}) &\&= \sum_\lambda r_\lambda^{E(q,{\bf c}, z)} s_\lambda({\bf t}) s_\lambda({\bf s}) \\ &\&= \sum_{d=0}^\infty z^d \sum_{\substack{\mu, \nu \\ |\mu| = |\nu|}} F^d_{E(q, {\bf c})}(\mu, \nu) p_\mu({\bf t}) p_\nu({\bf s}), \end{eqnarray} where the content product coefficient $r_\lambda^{E(q,{\bf c}, z)}$ is \begin{equation} r_\lambda^{E(q, {\bf c}, z)} := \prod_{(i,j) \in \lambda} \prod_{k=1}^\infty (-z(j-i)c_k; q)_\infty \end{equation} and \begin{equation} F^d_{E(q, {\bf c})}(\mu, \nu) := \sum_{|\lambda|=d}e_\lambda(q, {\bf c}) m^\lambda_{\mu \nu} \label{F_dE_qc} \end{equation} is the weighted number of paths in the Cayley graph of $S_n$ generated by transpositions, starting at the conjugacy class $\cyc(\mu)$ and ending at $\cyc(\nu)$, with the weight $e_\lambda(q, {\bf c})$ for a path of signature $\lambda$. Now consider $n$-fold branched coverings of $\mathbf{C} \mathbf{P}^1$ with a fixed pair of branch points at $(0, \infty)$ with ramification profiles $(\mu, \nu)$ and a further $ \sum_{i=1}^l k_i $ branch points $\{\mu^{(i,u_i)}\}_{u_i = 1, \dots, \, k_i}$ of $l$ different species (or ``colours''), labelled by $i=1, \dots , l$, with nontrivial ramification profiles. 
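As an aside, with $c_i = \delta_{i1}$ the coefficients $e_j(q) := e_j(q,{\bf c})\vert_{c_i=\delta_{i1}}$ admit the classical closed form of Euler, $e_j(q) = q^{j(j-1)/2}/\prod_{i=1}^j(1-q^i)$ (a known identity, not stated in the paper). The following numerical sketch checks it against the truncated product:

```python
# Series expansion of prod_{k>=0} (1 + z q^k) versus Euler's closed form.
q = 0.4
ORDER, K = 5, 200          # truncation order in z; truncation of product in k

series = [1.0] + [0.0] * (ORDER - 1)
for k in range(K):
    factor = q ** k
    new = series[:]
    for j in range(ORDER - 1, 0, -1):
        new[j] += factor * series[j - 1]       # multiply by (1 + z q^k)
    series = new

for j in range(ORDER):
    denom = 1.0
    for i in range(1, j + 1):
        denom *= 1 - q ** i
    ej = q ** (j * (j - 1) // 2) / denom
    assert abs(series[j] - ej) < 1e-9
```

In the classical limit $q \rightarrow 0$ these coefficients reduce, as they should, to $e_j$ of a single variable: $1$ for $j \le 1$ and $0$ otherwise.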
The weight $W_{E^l(q)} (\{\mu^{(i,u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i=1, \dots , l}}, {\bf c} )$ for such a covering consists of the product of all the weights $W_{E(q)}(\{\mu^{(i, u_i)}\}_{u_i = 1, \dots, k_i})$ for the subsets of different colour with the weight $m_\lambda({\bf c})$ given by the monomial symmetric function evaluated at the parameters ${\bf c}$: \begin{eqnarray} W_{E^l(q)} (\{\mu^{(i,u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i=1, \dots , l}}, {\bf c} ) &\&:= W_{E^l(q)} (\{\mu^{(i,u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i=1, \dots , l}} ) \, m_\lambda({\bf c}),\\ W_{E^l(q)}(\{\mu^{(i,u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i=1, \dots , l}} ) &\&:= \prod_{i=1}^lW_{E(q)}(\{\mu^{(i,u_i)}\}_{u_i = 1, \dots, \, k_i}). \end{eqnarray} We then have \begin{equation} F^d_{E(q,{\bf c})}(\mu, \nu) =H^d_{E(q, {\bf c})} (\mu, \nu), \end{equation} where \begin{equation} H^d_{E(q, {\bf c})}(\mu, \nu) := z_\nu \sum_{l=0}^{d} {\hskip -10pt} \sideset{}{'} \sum_{\substack{\{\mu^{(i, u_i)}\} , \ k_i\ge 1, \ \\ \sum_{i=1}^l \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i,u_i)}) = d }} {\hskip-20 pt}W_{E^l(q)} (\{\mu^{(i, u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i=1, \dots , l}}, {\bf c}) \ H(\{\mu^{(i, u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i =1, \dots , l}} , \mu, \nu) \label{Eq_d_c} \end{equation} is the geometrical elementary quantum weighted Hurwitz number. \subsection{Complete quantum weighting ($t=0$)} \label{H_c_q} This is the dual of the preceding case, with weight generating function \begin{equation} H(q, {\bf c}, z) := \prod_{k=0}^\infty \prod_{i=1}^\infty(1 - zq^k c_i)^{-1} = \prod_{i=1}^\infty (zc_i; q)^{-1}_{\infty} =: \sum_{j=0}^\infty h_j(q,{\bf c}) z^j, \end{equation} where $h_j(q, {\bf c})$ is the quantum deformation of the complete symmetric function $h_j({\bf c})$. Setting $c_i = \delta_{i1}$ reproduces the generating functions for the quantum weighted Hurwitz numbers $H^d_{H(q)} (\mu, \nu)= F^d_{H(q)}(\mu, \nu)$ studied in \cite{GH2}. 
The corresponding element of the center of the group algebra in the general case is: \begin{eqnarray} H_n(q, {\bf c}, z\mathcal{J}) := \prod_{a=1}^n H(q, {\bf c}, z\mathcal{J}_a) \ =\sum_{\lambda}z^{|\lambda|} h_\lambda(q, {\bf c}) m_\lambda(\mathcal{J}) = \sum_{\lambda} z^{|\lambda|} m_\lambda({\bf c}) h_\lambda(q, \mathcal{J}), \end{eqnarray} where \begin{equation} h_\lambda(q, {\bf c}) := \prod_{i=1}^{\ell(\lambda)} h_{\lambda_i} (q, {\bf c}). \end{equation} The hypergeometric $2D$ Toda $\tau$-function for this case is \begin{eqnarray} \tau^{H(q,{\bf c}, z)}({\bf t}, {\bf s}) &\&= \sum_\lambda r_\lambda^{H(q, {\bf c}, z)} s_\lambda({\bf t}) s_\lambda({\bf s}) \\ &\&= \sum_{d=0}^\infty z^d \sum_{\substack{\mu, \nu \\ |\mu|=|\nu|=n}} F^d_{H(q,{\bf c})}(\mu, \nu) p_\mu({\bf t}) p_\nu({\bf s}), \end{eqnarray} where \begin{equation} r_\lambda^{H(q,{\bf c}, z)} := \prod_{(ij) \in \lambda} \prod_{k=1}^\infty (z(j-i)c_k; q)^{-1}_\infty \end{equation} and \begin{equation} F^d_{H(q,{\bf c})}(\mu, \nu) := \sum_{|\lambda|=d}h_\lambda(q, {\bf c}) m^\lambda_{\mu \nu} \label{F_dH_qc} \end{equation} is the weighted number of paths in the Cayley graph of $S_n$ generated by transpositions, starting at the conjugacy class $\cyc(\mu)$ and ending at $\cyc(\nu)$, with weight $h_\lambda(q, {\bf c})$ for a path of signature $\lambda$. Consider again $n$-fold branched coverings of $\mathbf{C} \mathbf{P}^1$, with a fixed pair of branch points at $(0, \infty)$ with ramification profiles $(\mu, \nu)$ and a further $ \sum_{i=1}^l \tilde{k}_i $ branch points $\{\nu^{(i,v_i)}\}_{v_i = 1, \dots, \, \tilde{k}_i}$, again of $l$ different species (or ``colours''), labelled by $i=1, \dots , l$, with nontrivial ramification profiles.
As in the preceding case, the weight $W_{H^l(q)} (\{\nu^{(i,v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i=1, \dots , l}}, {\bf c} )$ for such a covering consists of the product of all the weights $W_{H(q)}(\{\nu^{(i, v_i)}\}_{v_i = 1, \dots, \tilde{k}_i})$ for the subsets of different colour, multiplied by the weight $m_\lambda({\bf c})$ again given by the monomial symmetric functions evaluated at the parameters ${\bf c}$ \begin{eqnarray} W_{H^l(q)} (\{\nu^{(i,v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i=1, \dots , l}}, {\bf c} ) &\&:= W_{H^l(q)} (\{\nu^{(i,v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i=1, \dots , l}} ) \, m_\lambda({\bf c})\\ W_{H^l(q)}(\{\nu^{(i,v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i=1, \dots , l}} ) &\&:= \prod_{i=1}^lW_{H(q)}(\{\nu^{(i,v_i)}\}_{v_i = 1, \dots, \, \tilde{k}_i}). \end{eqnarray} We again have the equality \begin{equation} F^d_{H(q,{\bf c})}(\mu, \nu) =H^d_{H(q, {\bf c})} (\mu, \nu), \end{equation} where \begin{equation} H^d_{H(q, {\bf c})}(\mu, \nu) := z_\nu \sum_{l=0}^{d} {\hskip -10pt} \sideset{}{'} \sum_{\substack{\{\nu^{(i, v_i)}\} , \ \tilde{k}_i\ge 1, \ \\ \sum_{i=1}^l \sum_{v_i =1}^{\tilde{k}_i}\ell^*(\nu^{(i,v_i)}) = d }} {\hskip-20 pt}W_{H^l(q)} (\{\nu^{(i, v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i=1, \dots , l}}, {\bf c}) \ H(\{\nu^{(i, v_i)}\}_{\substack{v_i = 1, \dots, \tilde{k}_i \\ i =1, \dots , l}} , \mu, \nu) \label{Hq_d_c} \end{equation} is the corresponding geometrically defined complete quantum weighted Hurwitz number. \subsection{Hall-Littlewood polynomials ($q =0$) } Setting $q=0$ in eq.~(\ref{Pi_xy_qt}), the generating function reduces to the one for Hall-Littlewood polynomials \cite[Sec.
III.2]{Mac} $P_\lambda({\bf x}, t)$, which satisfy the orthogonality relations \begin{equation} (P_\lambda, P_\mu)_t= \delta_{\lambda \mu} (b_\lambda(t))^{-1}, \quad b_\lambda (t) := \prod_{i\ge 1} \prod_{k=1}^{m_i(\lambda)} (1 - t^k) \end{equation} with respect to the scalar product $(\ , \ )_t$ defined by \begin{equation} (p_\lambda, p_\mu)_t = \delta_{\lambda \mu} z_\lambda n_\lambda (t), \quad n_\lambda(t) := \prod_{i=1}^{\ell(\lambda)} {1\over 1 - t^{\lambda_i}}. \end{equation} Following \cite{Mac}, we define \begin{equation} q_\lambda({\bf x}, t) := (1-t)^{\ell(\lambda)} \prod_{i=1}^{\ell(\lambda)} P_{(\lambda_i)}({\bf x}, t) \label{q_lambda} \end{equation} and obtain the following expansion \begin{equation} L(t, {\bf x}, {\bf y}):= \prod_{i, j =1}^\infty {1 - t x_i y_j \over 1- x_i y_j} = \sum_{\lambda} q_\lambda({\bf x}, t) m_\lambda({\bf y}) = \sum_{\lambda} q_\lambda({\bf y}, t) m_\lambda({\bf x}). \label{Lt_yx_t} \end{equation} Substituting ${\bf c}=(c_1, c_2, \dots )$ for ${\bf x}$, and $(\mathcal{J}_1, \dots , \mathcal{J}_n)$ for ${\bf y}$, we have \begin{equation} L(t, {\bf c}, z\mathcal{J}):= \prod_{i=1}^\infty \prod_{a=1}^n {1 - t c_i z \mathcal{J}_a \over 1- c_i z \mathcal{J}_a} = \sum_{\lambda} z^{|\lambda|}q_\lambda({\bf c}, t) m_\lambda({\bf \mathcal{J}}) = \sum_{\lambda} z^{|\lambda|} q_\lambda({\bf \mathcal{J}}, t) m_\lambda({\bf c}).
\label{HL_generating_element} \end{equation} Applying $L(t, {\bf c}, z\mathcal{J}) \in \mathbf{Z}(\mathbf{C}[S_n])$ to the orthogonal idempotents $\{F_\lambda\}$ and the cycle-type sums $\{C_\mu\}$ as above, the corresponding hypergeometric $2D$ Toda $\tau$-function becomes \begin{eqnarray} \tau^{L(t,{\bf c}, z)}({\bf t}, {\bf s}) &\&= \sum_\lambda r_\lambda^{L(t,{\bf c}, z)} s_\lambda({\bf t}) s_\lambda({\bf s}) \\ &\&= \sum_{d=0}^\infty z^d \sum_{\substack{\mu, \nu \\ |\mu|=|\nu|=n}} F^d_{L(t,{\bf c})}(\mu, \nu) p_\mu({\bf t}) p_\nu({\bf s}), \end{eqnarray} where \begin{equation} r_\lambda^{L(t,{\bf c}, z)} := \prod_{(ij) \in \lambda} \prod_{k=1}^\infty { 1 - t z (j-i) c_k \over 1 - z (j-i) c_k} = \prod_{k=1}^\infty (-t)^{|\lambda|} {(- 1/(tz c_k))_\lambda \over (- 1/(z c_k))_\lambda } \end{equation} and \begin{equation} F^d_{L(t,{\bf c})}(\mu, \nu) := \sum_{|\lambda|=d}q_\lambda({\bf c}, t) m^\lambda_{\mu \nu} \end{equation} is again the weighted number of paths in the Cayley graph of $S_n$ generated by transpositions, with weight $q_\lambda({\bf c},t)$ for a path of signature $\lambda$. We also have \begin{equation} F^d_{L(t,{\bf c})}(\mu, \nu) = \sum_{e=0}^d H^{(d,e)}_{L({\bf c})} (\mu, \nu) t^e \end{equation} where \begin{equation} H^{(d,e)}_{L({\bf c})}(\mu, \nu) := z_\nu \sum_{l=0}^{d} {\hskip -35 pt} \sideset{}{'} \sum_{\substack{\{\mu^{(i, u_i)}, \nu^{(i, v_i)}\} , \ k_i\ge 1, \ \tilde{k}_i \ge 1\\ \sum_{i=1}^l \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i,u_i)}) = e, \\ \sum_{i=1}^l\left( \sum_{u_i =1}^{k_i}\ell^*(\mu^{(i, u_i)} ) + \sum_{v_ i =1}^{\tilde{k}_ i}\ell^*(\nu^{( i, v_ i)})\right) =d}} {\hskip-50 pt} (-1)^{K+d-e} H(\{\mu^{(i, u_i)}\}_{\substack{u_i = 1, \dots, k_i \\ i =1, \dots , l}} , \{\nu^{( i, v_ i)}\}_{\substack{v_ i = 1, \dots, \tilde{k}_ i \\ i =1, \dots , l}}, \mu, \nu), \label{H_de_c} \end{equation} with \begin{equation} K:= \sum_{i=1}^l (k_i +\tilde{k}_i) \end{equation} the total number of branch points.
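As an illustration of the Hall-Littlewood kernel expansion (\ref{Lt_yx_t}) (a minimal numerical sketch, not part of the original text), consider the simplest one-variable case, where $q_{(n)}(x, t) = (1-t)x^n$ for $n \ge 1$ and $m_{(n)}(y) = y^n$:

```python
# One-variable check of the Hall-Littlewood kernel expansion:
# (1 - t*x*y)/(1 - x*y) = 1 + sum_{n>=1} q_{(n)}(x, t) * m_{(n)}(y),
# with q_{(n)}(x, t) = (1 - t) * x**n for a single variable x.
x, y, t = 0.2, 0.3, 0.5
lhs = (1 - t * x * y) / (1 - x * y)
rhs = 1.0 + sum((1 - t) * (x * y) ** n for n in range(1, 200))
```

The geometric tail beyond $n = 200$ is far below double precision for these values.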
$H^{(d,e)}_{L({\bf c})}(\mu, \nu)$ is the weighted generalization of the multispecies hybrid signed Hurwitz numbers studied in \cite{HO2}. As in the general Macdonald case, $H^{(d,e)}_{L({\bf c})}(\mu, \nu)$ is the weighted number of $n$-fold branched coverings of $\mathbf{C} \mathbf{P}^1$ with a fixed pair of branch points with ramification profiles $(\mu, \nu)$, and $K$ additional branch points divided into two classes I and II, denoted $\{\mu^{(i,u_i)}\}_{u_i =1, \dots , k_i}$ and $\{\nu^{(i, v_ i)}\}_{v_ i= 1, \dots, \tilde{k}_i}$, respectively, which are further subdivided into $l$ species, or ``colours'', labelled by $i=1, \dots, l$, the elements within each colour group distinguished by the labels $(u_i =1, \dots , k_i)$ and $(v_ i =1, \dots , \tilde{k}_ i)$. To such a grouping, we again assign a partition $\lambda$ of length \begin{equation} \ell(\lambda) = l \end{equation} and weight \begin{equation} d := |\lambda| = \sum_{i =1}^l \left (\sum_{u_i =1}^{k_i}\ell^*(\mu^{(i, u_i)}) + \sum_{v_i =1}^{\tilde{k}_i}\ell^*(\nu^{(i, v_i)}) \right) = \sum_{i=1}^l d_i, \end{equation} whose parts $(\lambda_1\ge \cdots \ge \lambda_l > 0)$ are equal to the total colengths \begin{equation} d_i := \sum_{u_i =1}^{k_i} \ell^*(\mu^{(i, u_i)}) + \sum_{v_i=1}^{\tilde{k}_i}\ell^*(\nu^{(i, v_i)}), \quad i=1, \dots, l \end{equation} in weakly decreasing order. \subsection{Jack polynomials ($ q= t^\alpha, \, t \rightarrow 1$)} \label{jack} Setting $q=t^\alpha$ and taking the limit $t \rightarrow 1$, we obtain the Jack polynomials \cite[Sec. VI.10]{Mac} $P^{(\alpha)}_\lambda$ as the limiting case of the Macdonald polynomials.
These satisfy the orthogonality relations \begin{equation} \langle P^{(\alpha)}_\lambda, P^{(\alpha)}_\mu\rangle_\alpha= \delta_{\lambda \mu} z_\lambda (b_\lambda^{(\alpha)})^{-1}, \quad b_\lambda^{(\alpha)}:= \prod_{i=1}^{\ell(\lambda)} \prod_{j=1}^{\lambda_i} { \alpha(\lambda_i -j) +\lambda'_j -i +1 \over \alpha(\lambda_i -j) +\lambda'_j -i +\alpha} \end{equation} with respect to the scalar product $\langle \ , \ \rangle_\alpha$ defined by \begin{equation} \langle p_\lambda, p_\mu\rangle_\alpha = \delta_{\lambda \mu} z_\lambda \alpha^{\ell(\lambda)}. \end{equation} This corresponds to the family of weight generating functions \begin{equation} J(\alpha, {\bf c}, z) :=\prod_{i=1}^\infty (1 - z c_i)^{-1/\alpha} \end{equation} and the corresponding family of central elements \begin{eqnarray} J(\alpha, {\bf c}, z \mathcal{J}) := \prod_{i=1}^\infty \prod_{a=1}^n(1 - z c_i \mathcal{J}_a)^{-1/\alpha} = \sum_{\lambda} z^{|\lambda|} g^{\alpha}_\lambda(\mathcal{J}) m_\lambda({\bf c}) = \sum_{\lambda} z^{|\lambda|} g^{\alpha}_\lambda({\bf c}) m_\lambda({\mathcal{J}}) , \end{eqnarray} where the symmetric functions $g^{\alpha}_\lambda({\bf x})$ are the analogs of the $e_\lambda({\bf x})$ or $h_\lambda ({\bf x})$ bases formed from products of elementary or complete symmetric functions in the case of Schur functions ($\alpha=1$), \begin{equation} g^{\alpha}_\lambda({\bf x}) = \alpha^{\ell(\lambda)}\prod_{i=1}^{\ell(\lambda)} P^{(\alpha)}_{(\lambda_i)}({\bf x}). \end{equation} The content product coefficients entering in the double Schur function expansion of the associated hypergeometric $2D$ Toda $\tau$-functions \begin{equation} \tau^{J(\alpha, {\bf c}, z)}({\bf t}, {\bf s}) = \sum_{\lambda} r_\lambda^{J(\alpha, {\bf c}, z)} s_\lambda({\bf t}) s_\lambda({\bf s}) \end{equation} in this case are \begin{equation} r_\lambda^{J(\alpha, {\bf c}, z)} = \prod_{(ij)\in \lambda} \prod_{k=1}^\infty (1- z (j-i) c_k)^{-1/\alpha} = \prod_{k=1}^\infty (1-zc_k)^{|\lambda|\over \alpha} (-1/(z c_k))_\lambda^{-1/\alpha}. \end{equation} The expansion in the basis of products of power sum symmetric functions is therefore \begin{equation} \tau^{J(\alpha, {\bf c}, z)} ({\bf t}, {\bf s}) = \sum_{d=0}^\infty \sum_{\substack{\mu, \nu \\ \abs{\mu} = \abs{\nu}=n}} z^d F^d_{J(\alpha, {\bf c})}(\mu, \nu) p_\mu({\bf t}) p_\nu({\bf s}) \label{tau_GJ_F} \end{equation} where \begin{equation} F^d_{J(\alpha, {\bf c})} (\mu, \nu) = \sum_{|\lambda|=d} g^{\alpha}_\lambda({\bf c})m^\lambda_{\mu \nu} \end{equation} is the combinatorial Hurwitz number giving the weighted number of $d$-step paths of signature $\lambda$ in the Cayley graph of $S_n$, starting in the conjugacy class $\cyc(\mu)$ and ending in $\cyc(\nu)$, with weight $g^{\alpha}_\lambda({\bf c})$. We again have \begin{equation} F^d_{J(\alpha, {\bf c})} (\mu, \nu) = H^d_{J(\alpha, {\bf c})} (\mu, \nu) , \end{equation} where the weighted geometrical Hurwitz number is \begin{equation} H^d_{J(\alpha, {\bf c})} (\mu, \nu) := \sum_{k=0}^\infty \left({-{1\over \alpha} \atop k}\right) \sum_{\substack{\mu^{(1)}, \dots , \mu^{(k)} \\ |\mu^{(i)}|=n \\ \sum_{i=1}^k \ell^*(\mu^{(i)})=d }} m_\lambda({\bf c}) H(\mu^{(1)}, \dots , \mu^{(k)}, \mu, \nu), \end{equation} with the sum over partitions $\lambda$ of length $k$ and weight $d$ whose parts are $\{\ell^*(\mu^{(1)}), \dots, \ell^*(\mu^{(k)})\}$. \bigskip \bigskip \noindent {\small {\it Acknowledgements.} This work extends the approach to the construction of parametric families of $\tau$-functions as generating functions for weighted Hurwitz numbers initiated jointly with M. Guay-Paquet and extended to the multispecies case with A. Yu. Orlov. The author is indebted to both for helpful discussions that helped clarify many of the ideas and methods underlying this approach. } \bigskip \newcommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{arXiv:{#1}}}
\section{Introduction} \subsection{Sagittarius A*} Sagittarius A* (Sgr A*) is the name given to the bright radio source of our Galactic Center. It was discovered in 1974 by \citet{balick74} using the Green Bank 35 km radio link interferometer of the National Radio Astronomy Observatory. Stellar motion around the non-thermal radio source shows that Sgr A* is highly compact (smaller than 0.01 pc, i.e. $3\times 10^{11}$ km) and that the stars orbit around a point mass of $(4.3\pm0.5)\times10^6\,M_\odot$ \citep{eisenhauer05, melia07}. The stellar orbits provide the strongest evidence yet for a supermassive black hole located in the center of our Galaxy at a distance of $8.3\pm0.35~{\rm kpc}$ \citep{reid93, schodel02, ghez08, gillessen09}. Super-massive black holes (SMBHs) of millions to billions of solar masses are believed to exist in the centre of most galaxies. Sgr A*, in our own galaxy, is the closest and best-studied SMBH, making it the perfect source to test our understanding of galactic nuclei in general. But among all the galactic nuclei observed so far, Sgr A* has the peculiarity of being very faint at all wavelengths. Even though it may have been more active in the past \citep{revnivtsev04, zubovas12, ponti12}, today Sgr A* is one of the most under-luminous SMBHs we know, with $L_{\rm bol}\simeq 10^{-9} L_{\rm Edd}$ \citep{narayan98}, and it is accreting at a very low rate. The accretion rate has been constrained by polarisation measurements using Faraday rotation \citep{aitken00, bower03, marrone07}, and is estimated to be in the range $2\times10^{-9}<\dot{M}<2\times10^{-7} M_{\odot}~{\rm yr^{-1}}$. Theoretical work suggests that Sgr A* is most likely accreting at the lower end of this interval \citep{moscibrodzka09, drappeau13}.
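For orientation, the quoted mass and luminosity ratio can be tied together with a back-of-the-envelope estimate (a numerical sketch using the standard hydrogen Eddington luminosity; only the values quoted above are assumed):

```python
# Eddington luminosity for the quoted mass, and the implied bolometric luminosity
L_EDD_PER_MSUN = 1.26e38      # erg/s per solar mass (standard hydrogen value)
M = 4.3e6                     # Sgr A* mass in solar masses (quoted above)
L_edd = L_EDD_PER_MSUN * M    # ~5.4e44 erg/s
L_bol = 1e-9 * L_edd          # L_bol ~ 1e-9 L_Edd, i.e. ~5e35 erg/s
```

This also makes the later statement that the quiescent X-ray luminosity of a few $10^{33}$ erg s$^{-1}$ lies about $10^{11}$ below Eddington mutually consistent.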
\citet{dibi12} have shown that for accretion rates at or above $1 \times 10^{-8}$ solar masses per year, cooling losses become important in the modelling of the accretion flow and the resulting spectrum. This result means that Sgr A* is the only known black hole source where cooling can still be treated separately as a first approximation. Along the same lines, \citet{yuan04} had shown that flare events such as those observed in the Galactic Centre could not be detected if the accretion rate increased by a factor of 10 from its actual value, because synchrotron self-Compton emission from thermal electrons would increase substantially. This would explain why Sgr A* is the only source known to exhibit such flaring activity. Also, \citet{yusefzadeh09} report observational evidence for IR flaring activity inversely proportional to the flux density. \subsection{Multiwavelength observations} The faint emission from Sgr A* has been observed at different wavelengths, giving us a broad band spectrum of this object from the radio to the X-ray (see reviews by \citealt{ melia01, genzel10, morris12}, and references therein). From a few GHz up to 100 GHz, the radio spectrum extends as a rough power law $F_{\nu } \propto \nu ^{\alpha }$ with 0.25 $\leq \alpha \leq$ 0.33. Above 100 GHz there is evidence for a millimeter/sub-millimeter (sub-mm) excess over the power law, extending almost to 1000 GHz. The nature of this excess was discussed by \citet{serabyn97} and \cite{falcke98}, who excluded the possibility of dust emission. The size of the photosphere is predicted to be smallest around a wavelength of 1.3 mm, where the excess is observed. The black hole horizon, or its shadow \citep{falcke00,dexter10}, could be detected for the very first time in the near future with new very long baseline interferometry facilities such as the ``Event Horizon Telescope'' \citep{doeleman08, doeleman09, fish11}.
Important progress has been achieved in the mid-infrared (MIR) to near infrared \citep[NIR; e.g.,][]{genzel03, ghez05, schodel11} and sub-mm domains. But in the optical and ultra-violet wavelengths, the Galactic Center is heavily obscured by gas and dust, with 30 magnitudes of visual extinction. This obscuring medium becomes partially transparent to X-rays at energies above 2 keV. Indeed, Sgr A* has a quiescent X-ray luminosity of a few $10^{33} \rm erg \ s^{-1}$ (\citealt{baganoff03}), which is about $10^{11}$ times lower than the Eddington luminosity. Sgr A* is quite variable and we observe fast activity (bursts or flares) in the infrared and X-ray bands. A few times a day, Sgr A* experiences rapid increases in the NIR flux \citep{hornstein02, genzel03, ghez04, eckart06, eckart08, yusefzadeh08, doddseden11, haubois12}, where brighter flares \citep[$>10$ mJy;][]{doddseden11} are often associated with simultaneous X-ray flares \citep[e.g.][]{baganoff01,goldwurm03,porquet03,belanger05,porquet08, nowak12, neilsen13}. The typical timescale for such events is a few thousand seconds, suggesting a common localized origin of the flares. The radio and sub-mm emissions are more stable, i.e. they show less variability than the X-ray and NIR (see for instance \citealt{marrone08} for the sub-mm flares, and \citealt{yusefzadeh10} for a study of the IR -- sub-mm anti-correlation). The flat radio emission is most likely synchrotron emission originating from an outflow of Sgr A*: the lower the frequency, the further away in the outflow it originates. The radio emission, as well as the quiescent X-ray emission, thus comes from extended regions around Sgr A*, while the sub-mm, MIR, and flaring X-ray emissions originate from a region very close to the SMBH. This second region is the one we are interested in and explore in this paper. The MIR and NIR emission has been observed by the VLT and Keck (e.g. \citealt{doddseden11,schodel11,bremer11,witzel12}).
The X-ray variability has been observed by XMM-Newton and the Chandra X-ray observatory (e.g. \citealt{baganoff01}), and also by \textit{Swift} \citep{degenaar13}. Many new X-ray flares have been detected recently thanks to the Chandra 2012 Sgr A* X-ray Visionary Project\footnote{http://www.sgra-star.com/}. From this 3-Ms campaign, 39 X-ray flares are reported, lasting from a few hundred seconds to approximately 8 ks, and ranging in 2--10 keV luminosity from $10^{34} \rm erg \ s^{-1}$ to $2 \times 10^{35} \rm erg \ s^{-1}$ \citep{nowak12, neilsen13}. The new telescope NuSTAR \citep{harrison13} has recently released new X-ray flare data that we use in our study (Barri\`ere et al., submitted). These show that the 3--80 keV emission is compatible with a pure power-law spectrum. \subsection{Flare models} The fast variability indicates that the origin of the flares is as close as a few gravitational radii from the SMBH. However, the nature of the physical processes responsible for the flares is still an open question. Different mechanisms have been proposed, such as events of magnetic reconnection or other acceleration processes (e.g., \citealt{markoff01, yuan03, liu04, liu06}), infall of gas clumps or disruptions of small bodies \citep{cadez06, tagger06, zubovas12b}, and adiabatic expansion of hot plasma or hot spot models \citep{yusefzadeh08, broderick06}. By modelling the physical conditions around Sgr A* and fitting the observational data, we also aim at giving a possible interpretation of the phenomenon. Several studies have been devoted to the modelling of Sgr A* flares. Some models include a precise description of the flow geometry. For instance, the emission from Sgr A* was interpreted in the framework of radiatively inefficient accretion flows \citep{yuan03}. In this model, the matter properties (density, temperature, etc.) are computed through hydrodynamical equations including radiative losses.
These properties depend on the distance to the black hole and each ring contributes differently to the overall spectrum. The outer parts of the accretion flow contribute significantly to the X-ray luminosity in the quiescent state (\citealt{quataert02, baganoff03}), with only about ten per cent of the quiescent X-ray flux coming from the central part we are modelling (\citealt{wang13, neilsen13}). However, even in models where the geometry is treated accurately, most of the emission originates from the very central parts of the accretion flow, both in the quiescent sub-mm and NIR bands, and in the flaring sub-mm to X-ray bands. Moreover, the typical time scale of a flare (a few thousand seconds, depending on the flare) is of the order of the orbital period at the innermost stable orbit of Sgr A*, pointing again to a flaring region of only a few gravitational radii ($r_G= GM/c^2$). Therefore, most attempts to model the sub-mm to X-ray spectrum of Sgr A* in the quiescent and flaring states (excluding the radio emission) implicitly assume that the emission originates from a single homogeneous, isotropic zone characterized by only a few parameters, such as the average electron temperature and density, and the magnetic field intensity (e.g. \citealt{ doddseden10, liu06}). Here we use the same approach. Most models agree on the fact that the small emitting region is weakly magnetized ($\lesssim$ a few hundred Gauss) and tenuous ($\lesssim 10^8$ particles/cm$^3$), which now appear as standard values (\citealt{dibi12,moscibrodzka09,dexter09}). These values are supported by observations that constrain the accretion rate, via Faraday rotation, to a maximum of $\dot{M} \sim 10^{-7}$ solar masses per year. As a simple check, taking this upper accretion rate limit and a ``typical'' bulk velocity at one or two gravitational radii of 10$\%$ of the speed of light (as simulated in GRMHD models of Sgr A*), we get $n_{\rm max} \simeq \dot{M}/(4 \pi R^2 v m_p) \sim 10^8 \ {\rm particles/cm^3}$.
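The order-of-magnitude estimate above can be reproduced numerically (a sketch; $\dot{M}$, $R$, and $v$ take the values quoted in the text, and dividing by the proton mass converts the mass flux into a number density):

```python
# Order-of-magnitude density from mass conservation: n ~ Mdot / (4 pi R^2 v m_p)
import math

MSUN = 1.989e33   # solar mass, g
YEAR = 3.156e7    # s
M_P = 1.673e-24   # proton mass, g
C = 2.998e10      # speed of light, cm/s

mdot = 1e-7 * MSUN / YEAR      # upper accretion-rate limit, g/s
R = 1.3e12                     # ~2 gravitational radii of Sgr A*, cm
v = 0.1 * C                    # assumed bulk inflow speed
n_max = mdot / (4 * math.pi * R**2 * v * M_P)   # particles per cm^3, ~1e8
```

With these inputs the estimate comes out at a few $10^7$ cm$^{-3}$, consistent with the quoted $\sim 10^8$ particles/cm$^3$ upper bound.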
Whatever the details of the accretion flow and the radiative processes responsible for the emission, the emitted spectrum depends drastically on the particle distribution. For the sake of simplicity, all models so far have assumed pre-determined particle distributions (Maxwellian, power-law, broken power-law, or combinations of them), which are described by a few parameters. The precise shape of the particle distribution depends on the radiative and acceleration processes and can deviate significantly from the assumed one. The present work aims at dealing more precisely with particle distributions. Fitting arbitrary distributions to the data is not possible with the current coverage and sensitivity of the instruments. Rather, the shape of the particle distribution can be computed self-consistently with a Boltzmann equation assuming a physics described by a few parameters. Such an approach is common in the modelling of the high energy emission from X-ray binaries and other AGN (e.g. \citealt{mcconnell02, rogers06, belmont08}) but has not been applied to Sgr A* yet. In this paper, we present spectra obtained by solving simultaneously an equation for particles and an equation for photons to produce self-consistent particle distributions and spectra. These spectra are compared to broad band data of Sgr A* to put constraints on the flare properties. This paper is organised as follows: In Section 2 we present the microphysics and numerical method. In Section 3 we present the results and solutions for the quiescent and flaring spectra of Sgr A*. We study the plasma behaviour in two kinds of models, namely systems where matter is trapped in the emission region, and systems where matter flows in and out of the emission region. In Section 4 we end with our conclusions and outlook. \section{Method} The goal of this study is to model the plasma around Sgr A*, and in particular the particle distributions and the resulting spectra.
In the following, we will denote by $\nu$ the photon frequency, by $\gamma$ the particle Lorentz factor, and by $p=(\gamma^2-1)^{1/2}$ the particle momentum. We use the {\sc belm} code \citep{belmont08}. This numerical tool solves simultaneously coupled kinetic equations for leptons and photons in a magnetized, uniform, isotropic medium of typical size $R$. In all models presented here, this size is set to $R=2 \ r_G = 1.3 \times 10^{12}$ cm, based on the size derived from the flare variability time scale (where $r_G=GM/c^2$ is the gravitational radius). The implemented microphysics includes radiative processes such as self-absorbed synchrotron radiation, Compton scattering, self-absorbed bremsstrahlung radiation, pair production/annihilation, and Coulomb collisions, as well as prescriptions for particle heating/acceleration. \subsection{Radiative processes} Synchrotron radiation is produced by charged particles spiraling around magnetic field lines. It depends on the magnetic field $B$, whose intensity is described by the magnetic compactness: \begin{equation} l_B=\frac{\sigma_T R}{m_e c^2}\frac{B^2}{8\pi} \label{lb} \end{equation} where $m_e$ is the electron mass, $c$ is the speed of light, $\sigma_T$ is the Thomson cross section, and $R$ is the size of the emission region. Synchrotron emission at frequency $\nu$ from a single electron with momentum $p$ is characterised by the emissivity $j_s(p, \nu)$ in erg s$^{-1}$ Hz$^{-1}$ \citep{ghisellini88,ghisellini98, Katarzynski06b}. Synchrotron emission typically produces soft photons and cools the high energy emitting particles.
The cooling time of relativistic particles emitting at frequency $\nu$ is: \begin{equation} t_{\rm synch}=1.29\times 10^{12} \times \nu ^{-1/2} \times B^{-3/2} \ (s) \label{synch} \end{equation} For typical values of $B$, we have $$t = 1.3 \ (\nu/10^{18}\rm Hz)^{-1/2} (B/100 \rm G)^{-3/2} \ s $$ This corresponds to very short time scales, and only particles emitting at frequencies lower than $10^{12}$ Hz cool on time scales comparable to or longer than the duration of a typical flare (1000 s). These particles are not observed to contribute much to the total emission. Moreover, in our study we do not model the emission below $10^{12}$ Hz, which is extended radio emission from the outflow. Low energy particles can also absorb photons through the synchrotron process. Such absorption is described by the absorption cross section $\sigma_s(p,\nu)$ \citep{crusius86,ghisellini98}. The joint effect of high energy particle cooling and low energy particle heating tends to thermalize the particle distributions. It is called the {\it synchrotron boiler effect} \citep{ghisellini88}. Photons of the emission region can also be scattered by Compton interactions. The scattering of isotropic photons of energy $h\nu _0$ by isotropic particles of energy $E_0=\gamma_0 m_e c^2$ is characterized by the resulting distribution of scattered photons $\sigma_c(p_0, \nu_0\rightarrow \nu)$. The {\sc belm} code uses the exact Klein-Nishina cross section \citep{jones68,belmont09}. In the case of Sgr A*, soft photons are up-scattered to high energies by high energy particles. This also cools the scattering particles. From equation (37) of \citet{piran04}, the typical inverse Compton cooling time is: \begin{equation} t_{\rm comp} = 3.1 \times 10^{10} \times \nu^{-1/4} \times B^{-7/4} \ (s) \label{compt} \end{equation} Again, this time scale is much shorter than the flare duration. The effect of self-absorbed bremsstrahlung radiation is also computed.
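Both cooling-time scalings can be evaluated at the representative values used in the text (a numerical sketch at $\nu = 10^{18}$ Hz and $B = 100$ G):

```python
# Synchrotron and inverse-Compton cooling times at nu = 1e18 Hz, B = 100 G
nu, B = 1e18, 100.0
t_synch = 1.29e12 * nu ** -0.5 * B ** -1.5    # ~1.3 s, matching the text
t_comp = 3.1e10 * nu ** -0.25 * B ** -1.75    # ~3e2 s, still << flare duration
```

Both times are well below the ~1000 s flare duration, as stated above.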
However, for the innermost regions of the accretion flow, bremsstrahlung emission is a negligible component of the resulting spectra and it will not be discussed in this paper. Photon-photon pair production and pair annihilation are also implemented in the code. However, we aim at modelling the emission from Sgr A* only below 100 keV, where these processes are negligible. They were disabled in order to reduce the computation time. Rather than computing the path of photons out of the emitting region with Monte Carlo simulations, photons produced in-situ are assumed to escape with a probability representative of the geometry. This probability depends on the photon energy. For instance, high energy photons are barely Compton scattered and can escape freely even when the optical depth is large, whereas low energy photons can be scattered so much that they remain trapped in the system much longer. We use the escape rate from \citet{lightman87, coppi00}, which reproduces well the results of Monte Carlo simulations in a spherical geometry. At low energy, synchrotron and bremsstrahlung can absorb photons before they escape. This modifies the escape probability in this energy range. We include the corresponding modifications to the escape probability derived from \citet{sobolev57} \citep[see also][]{poutanen09}. \subsection{Particle acceleration and heating} The particle distribution depends on the above mentioned radiative processes and on several additional processes. Coulomb collisions tend to thermalize the particle distributions. The {\sc belm} code includes Coulomb cross sections derived from \citet{nayakshin98}. However, the very low density inferred for Sgr A* makes this process very inefficient. In all the results shown in this paper, real Coulomb collisions are completely negligible. In order to account for the observed high energy radiation, particles need to be heated/accelerated to high energy.
Solving for the particle distribution thus also requires addressing the physics of particle acceleration/heating. Many processes have been proposed to account for high energy particles (viscosity, reconnection, shocks, first and second order Fermi processes, etc.). However, the precise process at work in Sgr A* is still unknown. Moreover, the physics of these processes is not constrained well enough to allow a precise modelling of their effect on the particle distribution. Only stochastic acceleration by MHD waves can be implemented easily in a Boltzmann equation for the particle distribution (see \citealt{liu06} for an application to Sgr A*). However, if particle escape is slow, it forms a quasi-Maxwellian distribution and does not reproduce hard non-thermal distributions such as the one observed by NuSTAR. If particle escape is efficient, it can produce power-laws only if the acceleration rate has the same energy dependence as the radiative cooling, which is very unlikely \citep{katarzynski06}. Therefore we use very general, ad-hoc prescriptions, inspired by what is done for the corona of accreting black holes (such as {\sc Eqpair}, \citealt{coppi00}; or {\sc belm}). We use two different channels to provide energy to the particles. \begin{itemize} \item We mimic thermal processes by computing Coulomb collisions with a virtual population of hot protons (with temperature $k_BT_p = 40 $ MeV). Real collisions are very inefficient and do not provide significant heating, whatever the proton temperature. Rather, this prescription aims at reproducing the effect of anomalous processes (such as viscosity) on the lepton distribution. Therefore, the heating efficiency is renormalised by an arbitrary constant so as to inject a power $L_{\rm th}$ (erg s$^{-1}$) into the emitting region. In the following, this free parameter will be described by the compactness parameter $l_{\rm th} = \sigma_TL_{\rm th}/(Rm_ec^3)$.
Such a prescription not only heats the global distribution of particles; it also thermalises it. As the efficiency of the virtual collisions is enhanced, the efficiency of the associated thermalisation is also enhanced to an anomalous level. Anomalous heating is a common feature of accreting systems, so such heating is not surprising even though its origin is debatable. \item We model non-thermal processes by constantly injecting particles with a power-law distribution $N(\gamma)\propto \gamma^{-s}$. This distribution is characterised by 4 parameters: the slope $s$, the minimal and maximal energies $\gamma_{\rm min}$ and $\gamma_{\rm max}$ respectively, and the normalisation. As we want this process to keep the number of particles constant, the re-injected particles are taken from the lepton population itself, with a uniform probability, independent of their energy. In the following, the minimal energy of the power-law will be set to $\gamma_{\rm min} = 50$, so that particles are accelerated from the bulk of the distribution. Indeed, the thermal peak of the spectrum (around $10^{12}$ Hz) implies an electron temperature around $10^{11}$ K. The maximal energy of accelerated particles will be set to $\gamma_{\rm max}= 4.6 \times 10^5$, large enough to reproduce the NuSTAR data. The high energy cutoff has not been confirmed by NuSTAR observations (Barri\`ere et al., submitted), and the possible physical processes responsible for the non-thermal component (turbulent acceleration, reconnection, weak shocks) can accelerate electrons to very high energies. The normalisation is computed so that the non-thermal process injects into the region a power $L_{\rm nth}$, described by the free parameter $l_{\rm nth} =\sigma_TL_{\rm nth}/(Rm_ec^3)$. The slope is also a free parameter of the model. \end{itemize} Such prescriptions compete with all other processes to produce complex distributions of particles.
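The choice $\gamma_{\rm min} = 50$ can be tied to the quoted thermal peak with a short numerical check (a sketch using standard constants; the mean Lorentz factor of a relativistic Maxwellian, $\approx 3\theta$, is the assumed bridge):

```python
# Dimensionless temperature of a 1e11 K plasma and the mean thermal Lorentz factor
K_B = 1.381e-16     # Boltzmann constant, erg/K
ME_C2 = 8.187e-7    # electron rest energy, erg
theta = K_B * 1e11 / ME_C2   # ~17
gamma_mean = 3 * theta       # ~50: gamma_min = 50 sits at the top of the bulk
```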
\subsection{Modelling the particle dynamics} Although observational evidence clearly indicates that the sub-mm to X-ray emission originates from a very small region close to the black hole, the dynamics of the particles is very uncertain. Free, relativistic particles can travel through the emitting region in a light-crossing time: \begin{equation} t = R/c = 43 \ s \end{equation} which is much shorter than the flare duration. However, the medium is magnetised, so that particles are not free to move on straight trajectories; instead, they are bound to the magnetic field lines. For magnetic field intensities of 1--100 G, even X-ray emitting particles have gyro-radii orders of magnitude smaller than the emitting region. Therefore, if the medium is turbulent and the magnetic field tangled, even the highest-energy particles can be considered as trapped in the main flow. The detailed structure of the accretion flow, and in particular the accretion velocity, are not known. Therefore we investigate two extreme scenarios. \subsubsection{The closed system approximation} On the one hand, we consider that the accretion velocity is very small. In that case, particles remain for a very long time in the emitting region, and radiative and acceleration processes set up steady particle distributions and spectra before particles escape from the system. This model is characterised by five free parameters: the lepton density $n_e$, related to the Thomson optical depth $\tau$ by $\tau = n_e \sigma_T R$; the magnetic field compactness $l_B$ defined by Eq. \ref{lb}; the powers of the thermal heating and non-thermal acceleration, characterised by the compactness parameters $l_{\rm th}$ and $l_{\rm nth}$ respectively; and the slope of the non-thermal heating process $s$. In this model without particle escape, the particle distribution results from the balance between thermal heating, non-thermal acceleration, and radiative cooling.
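The confinement argument above can be checked numerically. As a sketch (assuming cgs constants, the highest particle energy $\gamma_{\rm max} = 4.6\times10^5$ used in this paper, and a representative field of 50 G within the 1--100 G range):

```python
C    = 2.998e10    # speed of light, cm/s
E    = 4.803e-10   # electron charge, esu
MEC2 = 8.187e-7    # electron rest energy, erg

R = 43.0 * C       # emitting-region size from t = R/c = 43 s, cm

# Larmor radius of the most energetic electrons in a 50 G field
gamma_max, B = 4.6e5, 50.0
r_L = gamma_max * MEC2 / (E * B)   # cm

print(f"R = {R:.2e} cm, r_L = {r_L:.2e} cm, R/r_L = {R/r_L:.1e}")
```

Even for the highest-energy electrons, the gyro-radius is nearly five orders of magnitude smaller than the region, supporting the trapping assumption.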
At high energy, thermal heating is inefficient, and the particle distribution results from the balance between radiative cooling and non-thermal acceleration. To a good approximation, the Compton and synchrotron processes have the simple cooling laws shown in Eqs. \ref{compt} and \ref{synch}. As acceleration tends to produce an electron power-law distribution of index $s$, the steady distribution is also a power law, with index $s'= s +1$. When synchrotron radiation is the dominant process, this produces a synchrotron spectrum of spectral index $\alpha = s' /2$. At lower energy, the physics and the shape of the particle distribution are more complex. We solve this model numerically for the different parameter sets presented in Figures \ref{Quiescent00}, \ref{Flare00c}, and \ref{Flare00s}. \subsubsection{The open configuration} On the other hand, we also consider the extreme case where matter flows in and out of the emitting region with an accretion velocity approaching the speed of light. In that case, particles escape the emitting region on time scales comparable to the light-crossing time, i.e. comparable also to the radiative time scales. Escape can therefore compete efficiently with radiation and acceleration processes. This model will be referred to as the {\it open configuration}. The distribution $\dot{N}_{\rm inj}(\gamma)$ of the matter entering the emitting region needs to be specified. We assume that non-thermal acceleration occurs only in the emitting region, so that particles entering this region have a thermal distribution described by only two parameters: its temperature $\theta_{\rm inj} = k_BT_{\rm inj}/(m_ec^2)$ and its normalisation. The former is set to $\theta_{\rm inj}= 13$ in all models.
The latter is described by the injection compactness: \begin{equation} l_{\rm inj} =\frac{4\pi}{3} \frac{R^2 \sigma _T}{c}\int \gamma \dot{N}_{\rm inj} d\gamma \approx 3 \theta_{\rm inj} \frac{R^2 \sigma _T}{c} \dot{n}_{\rm inj} \label{ParticleInjCompactness} \end{equation} where $\dot{n}_{\rm inj}$ is the total number of particles injected into the emitting region per unit time, and the last equality holds for thermal distributions with relativistic temperature ($\theta_{\rm inj} \gg 1$). Once in the emitting region, particles are assumed to escape on a typical time scale $t_{\rm esc}=R/c$, which corresponds to an escape probability $p_{esc} = R/(c \ t_{\rm esc}) = 1$. This model is described by four free parameters: the magnetic field compactness $l_B$, the non-thermal compactness $l_{\rm nth}$, the slope of the power law $s$, and the injection compactness $l_{\rm inj}$. The particle density is no longer a free parameter and results from the balance between injection and escape. When the injected particles have a relativistic temperature, the steady-state optical depth is: \begin{equation} \tau_{T}=\frac{1}{4\pi}\frac{l_{\rm inj}}{\theta_{\rm inj}} \label{Tau} \end{equation} In this model, the steady particle distribution results from the balance between injection and non-thermal acceleration on one side, and escape and radiative cooling on the other. In this configuration, the shape of the steady-state distribution is more complex than in the closed system. At high energy, particles cool before they escape: as in the closed system, the leptons form a power-law distribution of index $s' = s +1$ and emit a power-law synchrotron spectrum of spectral index $\alpha = s' /2$. At low energy, particles escape before they cool and the steady-state electron distribution is a power law of slope $s'=s$. This produces a synchrotron spectrum of spectral index $\alpha = s/2$.
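As a quick numerical check (our own illustration, assuming $\sigma_T = 6.65\times10^{-25}$ cm$^2$, $R = c\times43$ s from the light-crossing time above, and the quiescent open-configuration values $l_{\rm inj} = 4.64\times10^{-4}$, $\theta_{\rm inj} = 13$ of Table 1), Eq. \ref{Tau} recovers the density quoted in Table 1:

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section, cm^2
C = 2.998e10           # cm/s
R = 43.0 * C           # region size from t = R/c = 43 s, cm

l_inj, theta_inj = 4.64e-4, 13.0   # quiescent open-configuration values

tau = l_inj / (4.0 * math.pi * theta_inj)   # steady-state Thomson depth
n_e = tau / (SIGMA_T * R)                   # from tau = n_e sigma_T R

print(f"tau = {tau:.2e}, n_e = {n_e:.2e} cm^-3")
```

This yields $\tau \approx 2.8\times10^{-6}$ and $n_e \approx 3.3\times10^{6}$ cm$^{-3}$, matching the density listed for the quiescent open model.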
The particle distribution and photon spectrum thus exhibit a break, whose energy depends on the relative efficiency of the cooling and escape. As long as synchrotron radiation is the dominant cooling process, the break in the photon spectrum is at: \begin{multline} \nu _{break}=2.97 \times 10^{14} \left( \dfrac{s-1}{3.6 -1}\right)^{-2}\left(\dfrac{t_{\rm esc}}{R/c}\right)^{-2}\\ \left(\dfrac{R}{3 \times 10^{12}\ {\rm cm}}\right)^{-2} \left(\dfrac{B}{50\ {\rm G}}\right)^{-3} \ \rm Hz \label{CoolingBreak} \end{multline} Such a model was for instance proposed by \citet{doddseden10} to explain the flaring X-ray emission without violating the NIR upper limits; they used a broken power-law distribution. However, depending on the parameters, processes other than synchrotron emission can contribute to the physics. Also, a self-consistent cooling break is not sharp and extends over a significant frequency range. Here we extend their conclusion by solving self-consistently for the particle distribution. \begin{table*} \centering \begin{minipage}{170mm} \caption{\textit{Description of simulations}}\footnotetext[0]{\textit{The first column gives the Figure's number. The second one gives the state of the spectrum we are trying to reproduce (in quiescence or during a flare) and the configuration of the model (closed or open region). $B$ is the magnetic field, which is a free input parameter. $n_e$ is the density, which is also an input parameter in the closed configuration cases (Figures \ref{Quiescent00}, \ref{Flare00c}, and \ref{Flare00s}) but an output of the simulations in the other cases. $l_{\rm th}$ is the prescription for thermal heating, and $l_{\rm nth}$ is the prescription for non-thermal heating/acceleration. $l_{\rm inj}$ is the injection of particles for the open configuration cases. $s$ is the spectral index used in the non-thermal prescription for acceleration. $p_{esc}$ gives the probability for particles to escape from the system.
$\epsilon_k/\epsilon_b$ is the ratio of kinetic to magnetic energy ($8\pi \int m_ec^2(\gamma - 1)N(\gamma)\,d\gamma / B^2$), computed after the different models have run. $\rm T_e$ is the temperature of the thermal part of the spectrum. Because our lepton distributions are calculated self-consistently, we never have a perfect thermal distribution; this temperature corresponds to the closest match between the real distribution and a pure Maxwellian (plotted as dotted lines in all our distributions for comparison), and is of course an output of our models as well. Finally, the last column gives the total luminosity of the spectrum. In the closed configuration case, this luminosity is the direct result of the thermal plus non-thermal compactness parameters, while in the open configuration it results from the balance between injection, escape, and cooling. We recall that in all our models $\rm R=2 r_G$, $\gamma_{min} = 50$, $\gamma_{max}=4.6\times 10^5$, and $\theta_{inj}=13$.} } \begin{tabular}{@{}lcccccccccccc@{}} \hline Model & spectrum & configuration & B & $n_e$ & $l_{\rm th}$ & $l_{\rm nth}$ & $l_{\rm inj}$ & $s$ & $p_{esc}$ & $\epsilon _k/\epsilon _b$ & $\rm T_e$&$L_{bol}$ \\ & & &\tiny{Gauss} & \tiny{$\rm cm^{-3}$} & \tiny{$\times 10^{-5}$} & \tiny{$\times 10^{-5}$} & \tiny{$\times 10^{-4}$} & & & & \tiny{$\times 10^{10}$ K} & \tiny{$\times 10^{36}\ \rm erg \ s^{-1}$} \\ \hline Figure \ref{Quiescent00} & \tiny{Quiescent} & \tiny{Closed} & 154.3 & $4.6\times 10^6$ & 2 & 1 & 0 & 3.60 & 0 & 0.15 & 11.4 & $1.4$\\ Figure \ref{Flare00c} & \tiny{Flare} & \tiny{Closed} & 48.8 & $4.6\times 10^6$ & 7 & 3 & 0 & 2.40 & 0 & 6.32 & 35.0 &$4.8$\\ Figure \ref{Flare00s} & \tiny{Flare} & \tiny{Closed} & 48.8 & $4.6\times 10^6$ & 5 & 4 & 0 & 2.20 & 0 & 5.79 & 33.4 &$4.3$\\ Figure \ref{Quiescent33} & \tiny{Quiescent} & \tiny{Open} & 175.3 & $3.3\times 10^6$ & 0 & 1 & 4.64 & 3.60 & 1 & 0.08 & 7.84 &$1.0$\\ Figure \ref{Flare33s_J21} & \tiny{Flare} & \tiny{Open} & 175.3 & $3.3\times 10^6$ & 0 & 9.8 & 4.64 & 2.28 & 1 & 0.09 & 7.90 &$3.8$\\ Figure \ref{Flare33s_O17} & \tiny{Flare} & \tiny{Open} & 175.3 & $3.3\times 10^6$ & 0 & 11.4 & 4.64 & 2.13 & 1 & 0.09 & 7.87 & $4.8$\\ Figure \ref{Flare33c} & \tiny{Flare} & \tiny{Open} & 34.5 & $1.4\times 10^8$ & 0 & 100 & 200 & 2.60 & 1 & 97.16 & 7.79 &$5.8$\\ Figure \ref{Quiescent33_x3} & \tiny{Quiescent} & \tiny{Open} & 175.3 & $9.9\times 10^6$ & 0 & 1 & 14 & 3.60 & 1 & 0.24 & 7.86 & $3.8$ \\ \hline \end{tabular} \end{minipage} \end{table*} \section{Sgr A* resulting spectra} The flare duration varies, but the typical time is about 3000 seconds. The radiative time scales and the thermalisation time are much smaller than the flare duration. The particle and photon distributions are therefore in quasi-steady state at each moment of the flare, and the flare evolution is directly governed by the evolution of the model parameters, namely the acceleration processes described here by the parameters $l_{\rm nth}$ and $l_{\rm th}$. Therefore we will mostly present and discuss the results from steady-state simulations: we aim at reproducing separately the quiescent and flaring spectra by changing the value of only a few parameters. Doing so, we can describe the flaring state as a transition governed by the increase or decrease of a few physical parameters. What drives these changes is subject to interpretation, but knowing what needs to be modified should give us a good first insight into the physical processes at work. Table 1 summarises the characteristics of our different models. \subsection{Data} Sgr A*'s SED is made of different observations that are variable and in most cases have not been obtained simultaneously in different wavelengths. We have chosen a set of data representative of the overall variation of the spectrum.
In all figures, the black radio points are from \citet{falcke98} and \citet{zhao03}, and the red radio points are recent ALMA observations (Brinkerink, Falcke et al., submitted). The black far-IR upper limits are from \citet{serabyn97} and \citet{hornstein02}, the green MIR data are from \citet{schodel11}, and the pink NIR lower point (in the quiescent spectra) and the cyan NIR upper point (in the flare spectra) are from \citet{ghez04,genzel03} and \citet{doddseden11}. The green ``bowtie'' is from \citet{bremer11} and is one of the few slopes that has been observed so far in the IR. Several flare observations seem to be consistent with this value of the NIR spectral index around $-0.6 \pm 0.2$ \citep{ghez05,gillessen06,hornstein07,bremer11}. The black ``bowtie'' in the quiescent X-ray is from \citet{baganoff01,baganoff03} and is an upper limit for the central emission because it is contaminated by thermal bremsstrahlung from the outer accreting matter. The orange ``bowtie'' is a Chandra flare from \citet{nowak12}. Finally, the blue data points (dark and light blue) are two flares observed with NuSTAR on July 21st and October 17th 2012 respectively \citep{barriere14}. Even though the X-ray flares may seem different in shape and slope, they are both acceptably fit by an absorbed power law, and their photon indices are not significantly different ($2.23^{+0.24}_{-0.22}$ and $2.04^{+0.22}_{-0.20}$ for the July 21st and October 17th flares, respectively). \citet{barriere14} investigated the presence of a cutoff in the October 17th flare, but found that it is not required by the data. The other wiggles in this spectrum (one may see a ``V'' shape in the low-energy part) are not significant either. One needs to keep in mind that the error bars show the 1-sigma confidence range, so an acceptable fit need not pass through all of them; statistically, it may miss about 3 points out of 9.
Three X-ray flares are shown on the first flaring spectra (Figures \ref{Flare00c} and \ref{Flare00s}), but we then consider only the modelling of the NuSTAR flares, which extend to higher energies. For a better reproduction of the spectrum, we need to move to the ``open configuration'', where we present some possible spectra for the July and October flares. \subsection{Sgr A* spectra from a closed region} \begin{figure} \includegraphics[scale=0.35]{QuiescentFit00_bis} \includegraphics[scale=0.35]{QuiescentLepton00_bis} \caption{\textit{Quiescent spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) in the closed system configuration. On the spectrum, the black data points are taken from \citet{yuan03}, with the X-ray ``bowtie'' corresponding to an upper limit for the quiescent state of Sgr A*. The data points are described in detail in Section 3.1. The blue component of the spectrum corresponds to the synchrotron process, while the red one corresponds to the Compton process. The bremsstrahlung contribution is not visible on the scale of this plot. On the electron distribution, the solid line is the calculated distribution from which the spectrum is computed, while the dotted lines indicate pure Maxwellian plus power-law components for comparison.}} \label{Quiescent00} \end{figure} \begin{figure} \includegraphics[scale=0.35]{Quiescent00Fix_bis} \caption{\textit{Quiescent spectrum of Sgr A* resulting from the ``standard'' distribution consisting of a simple Maxwellian plus power law (dotted line in the bottom panel of Figure \ref{Quiescent00}).}} \label{Fix} \end{figure} Figure \ref{Quiescent00} shows a spectrum for the quiescent state of Sgr A*, together with the emitting steady-state lepton distribution.
For this first spectrum, we consider a density of $4.6\times 10^{6}$ particles per cubic centimetre (which corresponds to $\tau=4 \times 10^{-6}$), and we keep this density throughout the study of the closed region, i.e. we consider that the number of particles is the same in the quiescent and flaring states. In quiescence, the magnetic field is 154.3 Gauss, and the plasma is magnetically dominated with $\epsilon _k/\epsilon _b=0.15$. The thermal heating is twice the non-thermal one, the two corresponding to $9.6\times 10^{35}$ erg $\rm s^{-1}$ and $4.8\times 10^{35}$ erg $\rm s^{-1}$ respectively, so that the total emission reaches $1.4 \times 10^{36}$ erg $\rm s^{-1}$ in quiescence. We can see in the resulting spectrum in Figure \ref{Quiescent00} that the non-thermal component is not dominant, not only because of the low value of $l_{\rm nth}$ but also because of the steep injected slope $s=3.6$. Nevertheless, this non-thermal component is important in order to reproduce the lower NIR data point. The thermal part contributes mainly to the sub-millimetre bump. In this case, we can notice how the thermal part of the lepton distribution differs from the standard Maxwellian shape in the bottom panel of Figure \ref{Quiescent00}. Indeed, for particle energies around $p=10^2\,m_ec$, the difference between the calculated distribution and the standard shape (dotted line) can reach almost two orders of magnitude. The steady-state particle distribution is sharper than a pure Maxwellian: above $\gamma = 100$, synchrotron cooling overcomes the anomalous thermalisation, and the distribution cuts off more sharply than a pure thermal one. This also produces a sharper sub-mm bump, as illustrated in the resulting spectrum (top panel of Figure \ref{Quiescent00}). In Figure \ref{Fix} we plot the spectral shape resulting from the dotted-line distribution of Figure \ref{Quiescent00}: the quiescent spectrum would be much wider, reaching the far-IR upper limits.
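The quoted numbers are internally consistent, as the following sketch shows (our own check, assuming $\sigma_T = 6.65\times10^{-25}$ cm$^2$ and $R = c\times43$ s):

```python
SIGMA_T = 6.6524e-25   # Thomson cross section, cm^2
C = 2.998e10           # cm/s
R = 43.0 * C           # cm, from t = R/c = 43 s

n_e = 4.6e6                 # quiescent closed-model density, cm^-3
tau = n_e * SIGMA_T * R     # Thomson optical depth
L_tot = 9.6e35 + 4.8e35     # thermal + non-thermal power, erg/s

print(f"tau = {tau:.1e}, L_tot = {L_tot:.1e} erg/s")
```

This recovers $\tau \approx 4\times10^{-6}$ and a total power of $\approx 1.4\times10^{36}$ erg s$^{-1}$, as stated above.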
The novelty of our work is illustrated by the difference between Figures \ref{Quiescent00} and \ref{Fix}, which results from the careful and detailed calculation of the lepton distribution. Starting from conditions similar to the quiescent state of Figure \ref{Quiescent00}, Figure \ref{Flare00c} shows a spectrum for the flaring state of Sgr A*, together with the lepton distribution. The spectrum is dominated by synchrotron self-Compton emission (red line), even though the non-thermal synchrotron (blue line) has a non-negligible contribution to the total spectrum. The emitting region is the same as in the quiescent state, with the same density of particles. However, the magnetic field has dropped from 154.3 to 48.8 Gauss, the plasma being now kinetically dominated with $\epsilon _k/\epsilon _b=6.3$. This dramatic change could be interpreted as being due to magnetic reconnection, a physical process that could be at the origin of the flaring event: when the magnetic field is rearranged, some magnetic energy is converted into kinetic energy, thermal energy, and particle acceleration. In this way we have a decrease of the magnetic field strength and an increase of the two parameters $l_{\rm th}$ and $l_{\rm nth}$ representing the thermal and non-thermal acceleration respectively. In this case, to obtain the Compton-dominated spectrum in the flaring state (Figure \ref{Flare00c}), the thermal heating increases from $9.6\times 10^{35}$ erg $\rm s^{-1}$ to $3.4\times 10^{36}$ erg $\rm s^{-1}$, and the non-thermal one from $4.8\times 10^{35}$ erg $\rm s^{-1}$ to $1.4\times 10^{36}$ erg $\rm s^{-1}$. The non-thermal component also has a much flatter distribution in the flaring state, meaning that the high energies are more populated, while $s$ is steeper during quiescence. With this model, during the flare, the total luminosity reaches $4.8 \times 10^{36}$ erg $\rm s^{-1}$.
\begin{figure} \includegraphics[scale=0.35]{FlareFit00c_bis} \includegraphics[scale=0.35]{FlareLepton00c} \caption{\textit{Flare spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) in the closed system configuration. The data points are described in Section 3.1. The blue curve corresponds to the synchrotron process, while the red one corresponds to the Compton processes. The bremsstrahlung contribution is too small to be visible on this scale. On the lepton distribution, the full line corresponds to the actual calculated distribution, while the dotted line is a standard Maxwellian plus power law shown for comparison. In all our models (except for Figure \ref{Fix}), the spectra result from the calculated particle distribution (full line), while the theoretical fixed distribution (dotted line) is shown only as an illustration.}} \label{Flare00c} \end{figure} Figure \ref{Flare00s} shows another potential flare model, together with its lepton distribution. This spectrum is similar to that of Figure \ref{Flare00c}, except that the non-thermal synchrotron component is more important than the Compton one. The radius of the emitting region is the same, as well as the density of particles, and the magnetic field magnitude is also 48.8 Gauss. The difference comes from the balance between thermal and non-thermal heating. For this non-thermal synchrotron dominated spectrum, the non-thermal contribution is more important than previously, with a power of $1.9\times 10^{36}$ erg $\rm s^{-1}$ and a slope of 2.2. The total luminosity is similar to the previous case, with L=$4.3 \times 10^{36}$ erg $\rm s^{-1}$. \begin{figure} \includegraphics[scale=0.35]{FlareFit00s_bis} \includegraphics[scale=0.35]{FlareLepton00s} \caption{\textit{Flare spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) in the closed system configuration.
The data points on the spectrum are described in Section 3.1. The blue curve corresponds to the synchrotron processes, while the red one corresponds to the Compton processes. It is the same as Figure \ref{Flare00c}, but for the case of a dominant synchrotron component with respect to the Compton one. On the electron distribution, the full line is the actual distribution from which the spectrum is computed, while the dotted lines are pure Maxwellian plus power-law components shown for comparison.}} \label{Flare00s} \end{figure} The quiescent state is very well reproduced by this closed-region configuration model: the sub-millimetre bump is clearly fitted by synchrotron emission, which extends to the lower part of the variable MIR and NIR data, and we do not violate the X-ray limit represented by the black ``bowtie''. According to the new results from \citet{wang13} and \citet{neilsen13}, the inner region is dominated by non-thermal emission from combined weak flares that can contribute up to 10$\%$ of the quiescent X-ray emission; our X-ray luminosity could thus even be too low (lower than 10$\%$ of the observed flux). The flaring spectra are somewhat more marginal because the sub-millimetre contribution is too high compared to the two black data points that are upper limits and end up below the spectra. On the other hand, the sub-mm part of the spectrum is also variable at the 20$\%$ level, and considering that we do not have perfectly simultaneous data, our models still provide a close enough interpretation, in the sense that we remain within 20$\%$ of the actual data point values. In the MIR, the flaring spectra shown in Figures \ref{Flare00c} and \ref{Flare00s} are in the right range of luminosity, and we can also reproduce the X-ray flare fluxes. The NuSTAR (blue) and Chandra (orange) flare slopes are respected, while the trend of one of the MIR flares (green ``bowtie'') is not well reproduced at all.
We have to keep in mind that our data are not simultaneous and that slopes in the MIR have been observed only a few times; still, in this case our slope seems to be in contradiction with this observation. Comparing models with observations, the ``$\alpha$'' prescription from \citet{shakura73} is still used to parametrise turbulence and to quantify the angular momentum loss mechanism and the process whereby gravitational binding energy is converted into radiation. The best physical interpretation of this $\alpha$ parameter is given by the mechanism of the magneto-rotational instability (MRI). For instance, \citet{hawley95} performed three-dimensional magneto-hydrodynamic numerical simulations of an accretion disk to study the nonlinear development of the MRI; they obtained a time-averaged $\alpha$ of 0.6 for the vertical-field runs. In their study of advection-dominated accretion and the black hole event horizon, \citet{narayan08} argued that for an advection-dominated accretion flow, the theoretically expected value of $\alpha$ is 0.1--0.3. We can estimate $\alpha$ for the different cases studied here. Under Keplerian assumptions, which are obviously very simplistic close to the black hole but allow us to check roughly that the orders of magnitude are not inconsistent with the first-order $\alpha$ approximation, the viscous heating $Q$ is related to the ``$\alpha$'' parameter by: \begin{equation} Q=\frac{3}{2}\alpha P \left(\frac{GM}{R^3}\right)^{1/2} \end{equation} where $P$ is the pressure. Using the dimensionless quantities of our model, the viscosity parameter is derived from the following formula: \begin{equation} \alpha =\frac{1}{2}\frac{l_{\rm th} \ r^{1/2}}{\tau \ \Theta _e} \end{equation} with $r=R/r_G=2$, $\tau = n_e \sigma _T R$, and $\Theta _e=kT_e/m_ec^2$. We find that the viscosity parameters $\alpha$ resulting from the models of Figures \ref{Quiescent00}, \ref{Flare00c}, and \ref{Flare00s} are 0.18, 0.18 and 0.16 respectively.
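For the quiescent closed model, this estimate can be reproduced directly (a sketch with cgs constants and the Table 1 values $l_{\rm th}=2\times10^{-5}$, $n_e=4.6\times10^{6}$ cm$^{-3}$, $T_e=11.4\times10^{10}$ K; small rounding differences with respect to the quoted 0.18 are expected):

```python
K_B     = 1.3807e-16   # Boltzmann constant, erg/K
MEC2    = 8.187e-7     # electron rest energy, erg
SIGMA_T = 6.6524e-25   # cm^2
C = 2.998e10
R = 43.0 * C           # cm, from t = R/c = 43 s

l_th, n_e, T_e, r = 2e-5, 4.6e6, 11.4e10, 2.0   # quiescent closed model

tau     = n_e * SIGMA_T * R          # Thomson optical depth
theta_e = K_B * T_e / MEC2           # dimensionless electron temperature
alpha   = 0.5 * l_th * r**0.5 / (tau * theta_e)
print(f"alpha = {alpha:.3f}")
```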
These values are in good agreement with the theoretical predictions described above, which illustrates that anomalous thermal heating is common in the context of accretion disks. \subsection{Plasma with escape and thermal injection} For comparison, within the open configuration we want to reproduce the quiescent spectrum under the assumption that particles flow in and out of the emitting region. Figure \ref{Quiescent33} shows such a spectrum, which is similar to the simulated spectrum in Figure \ref{Quiescent00}. This spectrum is a realistic and acceptable solution for the quiescent state of Sgr A*. The magnetic field magnitude is 175 Gauss, with a resulting plasma density of $3.3\times 10^{6}\ \rm cm^{-3}$, and the plasma is again magnetically dominated with $\epsilon _k/\epsilon _b=0.08$. We note that in this case the thermal component does not really differ from the pure Maxwellian distribution (see bottom panel of Figure \ref{Quiescent33}): in this range, radiative cooling is negligible, so that the balance between the thermal particle injection and the uniform particle escape produces a steady-state distribution that is almost a pure Maxwellian. The total luminosity of this spectrum is L = $1.0 \times 10^{36}$ erg $\rm s^{-1}$, also comparable to the previous quiescent fit. From this quiescent spectrum, we next investigate the changes necessary to move to the flaring spectrum. \begin{figure} \includegraphics[scale=0.35]{QuiescentFit33_bis} \includegraphics[scale=0.35]{QuiescentLepton33_bis} \caption{\textit{Quiescent spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) with thermal injection and escape (open configuration).
The data points are the same as in the previous quiescent spectrum of Figure \ref{Quiescent00}.}} \label{Quiescent33} \end{figure} \begin{figure} \includegraphics[scale=0.35]{FlareFit33s_J21} \includegraphics[scale=0.35]{FlareLepton33s_J21} \caption{\textit{Flare spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) in the open configuration. The data points are the same as in the previous spectrum of Figure \ref{Flare00c}. The electron distribution also shows, as dotted lines, the pure Maxwellian and power-law curves for comparison.}} \label{Flare33s_J21} \end{figure} \begin{figure} \includegraphics[scale=0.35]{FlareFit33s_O17} \includegraphics[scale=0.35]{FlareLepton33s_O17} \caption{\textit{Flare spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) with thermal injection and escape. The data points are the same as in the previous flare spectra of Figures \ref{Flare00c}, \ref{Flare00s}, and \ref{Flare33s_J21}. The calculated electron distribution (full line) is shown together with the theoretical one (dotted line) for comparison.}} \label{Flare33s_O17} \end{figure} Figure \ref{Flare33s_J21} shows a spectrum for a flaring state of Sgr A*, together with the lepton distribution. The flaring spectrum is dominated by non-thermal synchrotron and reproduces the NuSTAR July flare as well as an IR flare with a slope closer to the usually observed one (flat to slightly rising in the power spectrum). As expected, cooling breaks are observed in the lepton distribution and in the photon spectrum; they are not sharp but span at least one order of magnitude in frequency. The emitting region is the same as in the quiescent state, and the magnetic field stays the same as well. The amount of injected particles is the same as in quiescence, leading to a constant density.
The only change needed to move from the quiescent to the flare spectrum concerns the non-thermal component: the heating parameter $l_{\rm nth}$ increases by almost one order of magnitude, and the slope becomes flatter (from 3.6 to 2.3 during the flare), meaning that more particles populate the higher-energy part of the electron distribution. Some physical process must therefore accelerate the particles more efficiently in the flaring state and create a harder non-thermal distribution. As a consequence, the total luminosity increases, reaching $3.8 \times 10^{36} \rm erg \ s^{-1}$. We can do the same exercise to reproduce the October NuSTAR flare. This is shown in Figure \ref{Flare33s_O17} for our best-case scenario, which is very similar to the model of Figure \ref{Flare33s_J21}, with non-thermal synchrotron emission and a cooling break responsible for the flare emission. This is not surprising since, as explained in Section 3.1, the two flares are not significantly different and both are well fitted by a power law. As for the July flare, the trigger of the event lies in the non-thermal component of the lepton distribution, which increases by a bit more than an order of magnitude, with a prescribed slope of 2.1, slightly flatter than for the July flare. Besides that, the size of the emitting region, the magnetic field, and the density are the same as for the other flare and the same as in quiescence. The October flare model thus has a slightly higher non-thermal power with a slightly flatter prescription for the acceleration; the total luminosity of this flare spectrum is $4.8 \times 10^{36} \rm erg \ s^{-1}$. \begin{figure} \includegraphics[scale=0.35]{FlareFit33c} \includegraphics[scale=0.35]{FlareLepton33c} \caption{\textit{Flare spectrum of Sgr A* (top panel) and the associated lepton distribution (bottom panel) with thermal injection and escape.
The data points are the same as in the previous flare spectra of Figures \ref{Flare00c}, \ref{Flare00s}, \ref{Flare33s_J21}, and \ref{Flare33s_O17}. The calculated electron distribution (full line) is shown together with the theoretical one (dotted line) for comparison.}} \label{Flare33c} \end{figure} We investigated an alternative scenario in which the X-ray flares would be produced by synchrotron self-Compton (SSC) emission; however, we found that models invoking pure SSC as the emission mechanism have physical parameters that are hardly compatible with what we know of the central region density. Moreover, it leads to a more complex scenario where the large-scale magnetic field and the density of the medium need to be modified during the flare event. Nevertheless, inverse Compton emission could still be a non-negligible component of the overall spectrum, especially at high energies, assuming a weaker magnetic field during the flare and a higher-density medium. Figure \ref{Flare33c} gives an illustration of a ``power-law''-shaped X-ray emission that would be a combination of synchrotron and SSC. In this case, the density has increased from $3 \times 10^{6}$ to $1 \times 10^{8}$ $\rm cm^{-3}$ and the magnetic field has dropped from 175 to 35 Gauss, moving to a kinetically dominated flow with $\epsilon_k/\epsilon_b=97$. We think that the best model for the flaring state of Sgr A* is the one produced by non-thermal synchrotron emission with a cooling break, as seen in Figures \ref{Flare33s_J21} and \ref{Flare33s_O17}, because the trends of the multi-wavelength data are reproduced and only a few parameters need to be adjusted to move from the quiescent to the flaring state. This is especially true if we consider that the green ``bowtie'' is a typical IR slope.
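As a consistency check of this preferred scenario, Eq. \ref{CoolingBreak} can be evaluated with the open-configuration flare parameters of Table 1 ($s = 2.28$, $B = 175.3$ G, $t_{\rm esc} = R/c$) and $R = c\times43$ s; this is our own sketch, not an output of the simulations:

```python
R = 43.0 * 2.998e10   # emitting-region size, cm
s, t_esc_over_Rc, B = 2.28, 1.0, 175.3   # July flare values (Table 1)

# Eq. (CoolingBreak) of the text
nu_break = (2.97e14
            * ((s - 1.0) / 2.6) ** -2
            * t_esc_over_Rc ** -2
            * (R / 3e12) ** -2
            * (B / 50.0) ** -3)   # Hz
print(f"nu_break = {nu_break:.1e} Hz")
```

The break falls near $1.5\times10^{14}$ Hz, i.e. in the NIR band, consistent with the cooling breaks visible in the lepton distributions and spectra of Figures \ref{Flare33s_J21} and \ref{Flare33s_O17}.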
The non-thermal synchrotron emission is a simple and elegant explanation of the flaring events observed by Chandra and NuSTAR, because the overall state of the medium does not change dramatically (for instance, the density and the magnetic field are kept constant). The acceleration of the electrons, leading to the stronger and flatter non-thermal lepton distribution, is the only modification, and this could be triggered by plasma instabilities that are not modelled in detail here. Even though magnetic reconnection could also be the initial trigger, it can occur on very small scales and does not necessarily lead to a drop of the global magnetic field magnitude. A possible sudden increase of the density (as in the model of Figure \ref{Flare33c}) could be interpreted as an accretion rate fluctuation; however, such fluctuations of more than an order of magnitude are most likely not happening every day in the Galactic Center and would be difficult to interpret. The NuSTAR data being consistent with a power-law shape up to high energies also points in favour of the synchrotron scenarios of the models in Figures \ref{Flare33s_J21} and \ref{Flare33s_O17}. The sub-millimetre part of the spectrum is remarkably stable: comparing the quiescent state of Figure \ref{Quiescent33} with the sub-millimetre part of the spectrum in Figure \ref{Flare33s_J21}, we find exactly the same contribution around $10^{12}$ Hz. This is mainly due to the fact that the magnetic field is kept constant and the injected population is also constant, which in turn gives a constant density. We have to keep in mind that the data are not simultaneous; nevertheless, it is an interesting exercise to try to model observations at several wavelengths at the same time. In the future, simultaneous observations will be very important for this kind of multi-wavelength study. \begin{figure} \includegraphics[scale=0.26]{LightCurve} \caption{\textit{NuSTAR X-ray light curve of the flare event of the 21st of July 2012.
The black data points are the unabsorbed flux between 3 and 79 keV. When the detection was not significant, we only plotted three-sigma upper limits (arrows). See \citet{barriere14} for the observed light curve. The red curve represents the X-ray light curve computed from our quiescent spectrum model in Figure \ref{Quiescent33} to the flare spectrum model in Figure \ref{Flare33s_J21}.}} \label{LightCurve} \end{figure} We then looked at the time evolution between Figures \ref{Quiescent33} and \ref{Flare33s_J21} to reproduce the X-ray light curve of the July NuSTAR flare. Using our self-consistent calculations, we can also model the time-dependent particle evolution in order to reproduce the flare light curves. This approach has also been considered by \citet{doddseden10} in the one-zone cell approximation, but for a given power-law distribution. The cooling time-scales being very short compared to the flare duration, the time evolution is entirely governed by the physics of the acceleration processes, which are not clearly defined. Figure \ref{LightCurve} shows the reproduction of the X-ray light curve between 3 and 79 keV for the same parameter settings as in Figures \ref{Quiescent33} and \ref{Flare33s_J21}. During the flare, the non-thermal parameter $\rm l_{nth}$ evolves linearly with time from the quiescent value $10^{-5}$ to a maximum value, such that the averaged value over the flare duration is $9.8 \times 10^{-5}$, as in our flare spectrum \ref{Flare33s_J21}. It reaches its maximum at the peak and immediately decreases back with a linear dependence. The slope of the accelerated particles is set to 2.28 during the flare event, as in our flare spectrum \ref{Flare33s_J21}, and to 3.60 in quiescence. We note that before the flare event Sgr A* is not detected by the X-ray satellite NuSTAR because it is too faint and embedded in the diffuse/unresolved emission; in this case we simply plotted 3 $\sigma$ upper limits. 
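The linear rise and decay of $\rm l_{nth}$ can be sketched numerically. The symmetric triangular shape below is our assumption for illustration; the text only fixes the quiescent value, the linearity, and the flare-averaged value:

```python
import numpy as np

# Sketch (assumed symmetric triangular ramp): l_nth rises linearly from its
# quiescent value to a peak and falls back, such that its average over the
# flare equals the value used for the flare spectrum model.
l_quiet = 1e-5                     # quiescent value (from the text)
l_mean = 9.8e-5                    # flare-averaged value (from the text)
l_peak = 2.0 * l_mean - l_quiet    # peak of a symmetric triangle

t = np.linspace(0.0, 1.0, 2001)    # flare duration normalised to 1
l_nth = np.where(t <= 0.5,
                 l_quiet + (l_peak - l_quiet) * (t / 0.5),
                 l_peak - (l_peak - l_quiet) * ((t - 0.5) / 0.5))

# trapezoidal average over the flare duration
avg = np.sum(0.5 * (l_nth[1:] + l_nth[:-1]) * np.diff(t))
print(l_peak)   # 1.86e-4
print(avg)      # 9.8e-5 by construction
```

For any other ramp shape (e.g. an asymmetric rise/decay), only the relation between peak and time average changes.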
\section{Conclusion and Outlook} We are able to reproduce the quiescent spectrum of Sgr A* in two different scenarios: either the accretion process is very slow and the same particles remain in the emitting region for a long period of time, or the accretion process is very efficient, with an accretion velocity close to the speed of light, so that particles only remain in the emitting region on short time scales comparable to the radiative time scales. To model the flaring state, however, we favour the second scenario, which allows a better interpretation of the sub-millimetre and infra-red parts of the spectrum. The flaring state spectrum is best reproduced by a plasma with the same low magnetic field as in quiescence and the same amount of injected particles. More efficient non-thermal heating processes are responsible for the flaring event, producing a flatter non-thermal distribution of electrons. Apart from this change, all other parameters stay the same when moving from the quiescent to the flaring spectrum (Figures \ref{Quiescent33} and \ref{Flare33s_J21}). Our conclusions are in good agreement with \citet{doddseden10}, who also favoured non-thermal synchrotron processes and a cooling break in order to explain the observed IR and X-ray flares. However, in our study we do not make the hypothesis of magnetic reconnection as the power source of the flares, and our conclusions do not favour this particular hypothesis. In our best-case scenario (Figures \ref{Quiescent33} and \ref{Flare33s_J21} or \ref{Flare33s_O17}), the magnetic field is not required to drop significantly. An important drop in the magnetic field amplitude would also have important consequences on the sub-millimetre and thermal parts of the spectrum that we model here; other parameters would then have to be carefully adjusted in order to keep the sub-mm shape within reasonable values, so we think other acceleration mechanisms are more likely to be at work. 
Reconnection could also occur in very localised regions, with particles diffusing away from the reconnection sites and radiating in a field that has not reconnected, so we would not notice any significant global drop of the magnetic field amplitude. \begin{figure} \includegraphics[scale=0.35]{Quiescent_x3} \includegraphics[scale=0.35]{QuiescentLepton33_x3} \caption{\textit{Quiescent spectrum of Sgr A* with the same conditions as in Figure \ref{Quiescent33}, but assuming an increase of the density by a factor of three. All the observed data (of the quiescent and flaring states) have been kept on the figure. The associated lepton distribution is shown in the bottom panel.}} \label{Quiescent33_x3} \end{figure} In our study, we end up with a plasma density of $3.3\times 10^{6}$ particles per cubic centimetre, which is a reasonable value according to observations and theoretical work. But what would happen to the quiescent spectrum (Figure \ref{Quiescent00} or \ref{Quiescent33}) if the density increased by a factor of three, as is expected to happen now that the cloud G2 is falling into the Galactic Center? As reported by \citet{gillessen12}, a dense gas cloud of approximately three times the mass of the Earth is falling into the accretion zone of Sgr A*, but nothing noticeable has been observed from Sgr A* yet. Figure \ref{Quiescent33_x3} represents such a prediction: it has exactly the same settings as the model described in Figure \ref{Quiescent33}, but the density is three times higher (we inject more particles). The model predicts a flux increase in the sub-mm bump ($10^{12}$ -- $10^{13}$ Hz); however, the current emission is not well constrained in this band. If the source stays in quiescence, we do not expect a particular increase in the IR, and we have some emission in the ultra-violet, due to the first Compton component, that is unfortunately not detectable. 
Even in the X-rays, if the density increase by a factor of three does not trigger a flare event, we do not expect a significant increase from the quiescent X-ray level. Overall, it could well be that we will not detect any striking changes. \section*{Acknowledgments} We acknowledge support from The European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number ITN 215212 Black Hole Universe. We also acknowledge support from the ``Nederlandse Onderzoekschool Voor Astronomie'' NOVA Network-3 under NOVA budget number R.2320.0086.\\ SM and SD gratefully acknowledge support from a Netherlands Organization for Scientific Research (NWO) Vidi Fellowship.\\ RB and JM acknowledge financial support from both the French National Research Agency (CHAOS project ANR-12-BS05-0009) and the Programme National Hautes Energies.
\section*{Introduction} Over the past few decades the particle production in relativistic heavy-ion collisions has been successfully described by statistical hadronization models which postulate a chemical freeze-out surface common to all hadronic species. This means the yield of particles emitted from the interaction region can be reproduced by assuming an equilibrated system with fixed thermodynamic parameters. At this time in the dynamical evolution of the expanding fireball, the production of particles through inelastic collisions ceases and freeze-out of the hadro-chemistry is achieved. The resulting phase space surface is generally characterized (at sufficiently high collision energies) by only two free parameters, namely the chemical freeze-out temperature and the associated baryo-chemical potential ($T_{ch}$ and $\mu_{B,ch}$). These types of static models have proven to be extremely successful over a wide range of collision energies, which led to the definition of a generally accepted chemical freeze-out curve in the $T-\mu_{B}$ plane~\cite{Cleymans:2005xv}. It was also shown that this curve is close to the QCD transition curve as defined by lattice QCD~\cite{Tctrilogy}. Since the QCD transition is an analytic crossover for small $\mu_{B}$~\cite{Aoki:2006we}, the lattice curve corresponds to the steepest gradient in the temperature dependence of some characteristic observables which identify the deconfinement and chiral restoration transition~\cite{Tctrilogy,Bazavov:2011nk}. In the past, statistical model calculations were used only to describe the multiplicity of particles, either by calculating the expected yields (which adds the interaction volume as an additional free parameter to the calculation) or by calculating particle ratios (in which case the volume cancels out, assuming a common, flavor independent freeze-out volume)~\cite{Cleymans:2005xv,BraunMunzinger:2003zd,Becattini:2003wp,Becattini:2005xt,Andronic:2011yq,Sagun:2014sya}. 
Recently, it has been shown that the moments of net-particle multiplicity distributions from the experiment can be related to susceptibilities of conserved charges calculated on the lattice~\cite{Karsch:2012wm,Bazavov:2012vg,Borsanyi:2013hza,Borsanyi:2014ewa}. This allows the direct determination of chemical freeze-out parameters in the thermally equilibrated grand-canonical ensemble approach on the lattice without having to rely on statistical models. The continued application of statistical models in the context of these lattice QCD-to-experiment comparisons is based on the fact that conserved charges on the lattice can only be directly related to particle distribution moment measurements by studying and controlling the limitations of the experiment to measure conserved charges. Thus, by modeling detector effects, such as acceptance and reconstruction efficiency, in a statistical model calculation, the latter de facto provides the necessary link between experiment and lattice QCD. It was shown that statistical Hadron Resonance Gas (HRG) models reproduce the equilibrium lattice QCD results for the lowest order susceptibilities and their ratios in the hadronic phase reasonably well~\cite{Borsanyi:2013hza,Borsanyi:2014ewa,Mukherjee:2013lsa}. An added advantage of the HRG model calculations is the fact that a conserved charge, such as strangeness, can be broken down into its contributions from each identified particle species, which is not achievable on the lattice. We have shown previously that the calculations of the lowest moments of particle multiplicity distributions in a statistical model are in agreement with recent STAR data~\cite{Alba:2014eba}. One can therefore ask whether the chemical freeze-out parameters obtained from our analysis are comparable to parameters from an analysis of the particle multiplicities. 
For this purpose, we study here the sensitivity of higher-order moment ratios to the freeze-out parameters in comparison to that of multiplicity ratios. Depending on the observable and the experimentally achievable accuracy in the corresponding measurement, it might be advantageous to consider fluctuations rather than yields for pinning down the chemical freeze-out conditions. This requires that the measured fluctuations are those of an equilibrated hadronic medium~\cite{Kitazawa:2013bta,Sakaida:2014pya}. From the experimental data the central moments of the (net-)particle distributions can be constructed. The corresponding cumulants are given by: {\footnotesize \begin{eqnarray} c_1&=&\langle N\rangle~~~~~~~~~~~~~~~~~~c_2=\langle\left(\delta N\right)^2\rangle \nonumber \\ && \nonumber \\ c_3&=&\langle\left(\delta N\right)^3\rangle~~~~~~~~~~~~~~~~c_4=\langle\left(\delta N\right)^4\rangle - 3\langle\left(\delta N\right)^2\rangle^2 \nonumber \end{eqnarray}} where $\delta N=N-\langle N\rangle$ is the fluctuation of the (net-)particle multiplicity around its mean. The cumulants $c_i$ relate directly to the susceptibilities $\chi_i$, which are quantities that can be calculated for thermodynamic systems, e.g.~in lattice QCD. Susceptibilities are defined as derivatives of the pressure with respect to the corresponding chemical potential. 
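As a minimal numerical sketch (ours, purely for illustration, not part of the analysis), the cumulant definitions above can be estimated directly from event samples; for Poisson-distributed multiplicities all cumulants equal the mean, a common uncorrelated baseline:

```python
import numpy as np

# Sample estimates of the cumulants c1..c4 defined in the text,
# built from the central moments of a (net-)particle multiplicity sample.
def cumulants(N):
    N = np.asarray(N, dtype=float)
    d = N - N.mean()                      # delta N = N - <N>
    c1 = N.mean()
    c2 = np.mean(d ** 2)
    c3 = np.mean(d ** 3)
    c4 = np.mean(d ** 4) - 3.0 * np.mean(d ** 2) ** 2
    return c1, c2, c3, c4

# For Poisson multiplicities every cumulant equals the mean.
rng = np.random.default_rng(0)
vals = cumulants(rng.poisson(5.0, 2_000_000))
print(vals)   # all four close to 5
```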
In order to quantify certain features of distributions beyond the mean ($M=\chi_1$) and the variance ($\sigma^2=\chi_2$) one often looks at the skewness $S$ and the kurtosis $\kappa$ defined as: {\footnotesize \begin{eqnarray} \mathrm{ mean:}~M=\chi_1~&&~\mathrm{ variance:}~\sigma^2=\chi_2 \nonumber \\ && \nonumber \\ \mathrm{ skewness:}~S=\chi_3/\chi_{2}^{3/2}~&&~\mathrm{kurtosis:}~\kappa=\chi_4/\chi_{2}^{2} \nonumber \end{eqnarray}} Then, one can relate the thermodynamic susceptibilities calculated on the lattice or in an HRG model to quantities obtained experimentally through the following volume-independent ratios: {\footnotesize \begin{eqnarray} S\sigma=\chi_3/\chi_{2}~~~~~~&&~~~~~~\kappa\sigma^2=\chi_4/\chi_{2} \nonumber\\ && \nonumber\\ M/\sigma^2=\chi_1/\chi_{2}~~~~~~&&~~~~~~S\sigma^3/M=\chi_3/\chi_{1}. \nonumber \end{eqnarray}} \section*{Details of the Hadron Resonance Gas model} Detailed measurements of conserved charge fluctuations have been conducted at several collision energies at RHIC in order to search for non-statistical fluctuations that could signal the existence of a critical point in the QCD phase diagram~\cite{Adamczyk:2013dal,charge,McDonald:2012ts}. A series of studies using variations of the standard statistical hadronization models to determine the baseline for these critical point searches can be found in the literature~\cite{Begun:2006jf,Karsch:2010ck,Fu,Garg:2013ata,Nahrgang:2014fza}. The results presented in this paper are obtained using an HRG model in partial chemical equilibrium, which means that all contributions from the strong decay of hadronic resonances are taken into account. The list of all included resonant states is based on the PDG tables~\cite{PDG12}, up to a mass of 2 GeV/c$^{2}$. Further details of this HRG model can be found in~\cite{Alba:2014eba}, where our group compared the net-charge and net-proton distribution measurements from the STAR collaboration to the HRG model results. 
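As a point of reference (our own baseline illustration, not a result from the references above), the volume-independent ratios defined earlier take a simple closed form for a Skellam distribution, i.e. the difference of two independent Poisson variables, which is the standard uncorrelated baseline for net-particle fluctuations:

```python
# Skellam baseline: net-N = N_plus - N_minus, with N_plus and N_minus
# independent Poisson variables of means b and bbar. All odd cumulants
# equal b - bbar and all even cumulants equal b + bbar.
def skellam_chis(b, bbar):
    return (b - bbar, b + bbar, b - bbar, b + bbar)  # chi1..chi4

b, bbar = 4.0, 3.0
chi1, chi2, chi3, chi4 = skellam_chis(b, bbar)
print(chi3 / chi2)   # S*sigma       = 1/7
print(chi4 / chi2)   # kappa*sigma^2 = 1
print(chi1 / chi2)   # M/sigma^2     = 1/7
print(chi3 / chi1)   # S*sigma^3/M   = 1
```

Deviations of the measured ratios from these Skellam values quantify the correlations (quantum statistics, resonance decays, conservation laws, criticality) that the analysis is after.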
We evaluated ratios of the lower moments of these multiplicity distributions as functions of the temperature $T$, baryo-chemical potential $\mu_B$ and the chemical potentials $\mu_Q$ and $\mu_S$. The relation among these thermodynamic variables is obtained by imposing certain initial conditions occurring at the collision, namely the conservation of the total net-strangeness $n_S=0$ and the proper ratio of protons to baryons in the colliding nuclei $n_Q=\tfrac{Z}{A}n_B$. We obtained lower freeze-out temperatures ($\approx$146 MeV) than the statistical hadronization model fits to all available particle multiplicities ($\approx$166 MeV). We attributed this discrepancy in part to the fact that net-proton and net-charge are mainly sensitive to the light quark susceptibilities, whereas a full fit to all particle multiplicities contains a significant contribution from (multi-)strange particles. A possible separation of light and strange quark transitions was suggested by high precision lattice QCD simulations of the kurtosis~\cite{Bellwied:2013cta}. If true, then strange particle multiplicities and moments might yield a higher freeze-out temperature, as already pointed out in~\cite{Alba:2013haa,Bluhm:2014lva}. A comparison of our freeze-out temperature obtained from the net-proton and net-charge distribution to the one from measured particle yields (see Ref.~\cite{Alba:2014eba}) indeed indicates a discrepancy between the temperatures necessary to describe the strange and multi-strange baryon yields and the lower-mass, mostly light-quark states ($\pi,\,K,\,p$). In this fit to the particle multiplicities, though, pions and kaons show very little sensitivity to the chosen freeze-out temperature at fixed chemical potential. 
Since pions, kaons and protons are presently the only particles for which the experiments can determine higher order fluctuations, it is interesting to ask whether, for these hadronic final states, the moments of the net distributions are more sensitive to the chemical freeze-out conditions than the basic particle ratios. Our initial study has also shown that the highest cumulants might be prone to non-statistical effects such as volume fluctuations or chiral criticality, even at collision energies in the crossover region~\cite{Friman:2011pf}. Critical behavior might be captured in lattice QCD, but statistical models are by definition insensitive to these dynamical fluctuations. Therefore we initially focus our study on the lowest moments necessary to determine freeze-out parameters, namely $\chi_2$ and $\chi_1$. Whether $\chi_3$ already contains a small contribution from critical fluctuations is an open issue, which is presently debated in the literature. For very high collision energies, well separated from a potential critical point, these effects should be small. Thus, for the highest RHIC energies and all LHC energies it should be safe to also use $\chi_3$ or even $\chi_4$ for a potential determination of the chemical freeze-out parameters, and we show results also for moment ratios that include the higher moments. \section*{Results} Based on the above mentioned tension in the extracted chemical freeze-out temperatures using the STAR net-proton and net-charge fluctuation results compared to the particle ratios, in particular at small baryo-chemical potentials~\cite{Alba:2014eba}, we perform the comparative study presented here only for the highest RHIC energy. Since there is good agreement between the baryo-chemical potentials deduced from the particle ratios and the fluctuations, we fixed $\mu_B$ at 24.3 MeV and determined the sensitivity of different observables to the freeze-out temperature. 
Historically, ratios of mesons and baryons to pions have proven to be a good thermometer, since their dependence on the temperature $T_{ch}$ is mainly driven by the particle mass difference. Particle yields might have a higher sensitivity to the temperature but a fit to ratios is preferred, since the volume cancels out and the ratios are less prone to biases \cite{Andronic:2005yp}. Our results for the sensitivity of particle ratios to the temperature are in good agreement with previous statistical model calculations \cite{Magestro:2001jz,Andronic:2005yp}. Here we add the sensitivity of the moments to this study, and we show that for certain particles it might be beneficial to use the moment analysis, if efficiency corrected data are available. A comparison of the freeze-out temperature sensitivity of the particle ratio (with respect to the pion yield) and the lowest moment ratio ($\chi_2$/$\chi_1$) for all main hadron species ($K, p, \Lambda$, $\Xi$, $\Omega$) is shown in Fig.~\ref{fig2}. In particular for kaons the difference is striking: the $K/\pi$ ratio is almost completely insensitive to the temperature, whereas the net-kaon moment ratio shows a strong sensitivity well outside the achievable experimental error bars. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{fig1.pdf} \caption[]{\label{fig2}(Color online) Comparison of the sensitivity of $\chi_2$/$\chi_1$ ratios and particle yield ratios to the chemical freeze-out temperature. All values are normalized to the value at 140 MeV. The baryo-chemical potential has been fixed to 24.3 MeV. 
The vertical bars in the plots show the presently achievable experimental uncertainties for particle species for which both particle yield and moment ratios are available (based on STAR measurements~\cite{Adamczyk:2013dal,McDonald:2012ts,Adams:2006ke,Abelev:2008ab}).} \end{figure} The experimental error bars, shown as vertical thick lines in Fig.~\ref{fig2}, are from~\cite{Abelev:2008ab} for the particle ratios; the net-proton $\chi_2/\chi_1$ uncertainty is from the recent STAR publication~\cite{Adamczyk:2013dal}. For the net-kaon $\chi_2/\chi_1$ error we have used a preliminary study by STAR~\cite{McDonald:2012ts} based on moments that were not efficiency corrected. The effect of the reconstruction efficiency correction was estimated to roughly double the uncertainty~\cite{McDonald:private}, and the error bar shown in Fig.~\ref{fig2} reflects the anticipated error for an efficiency-corrected measurement. In general, we show experimental uncertainties only for particle species for which both measurements (the yield ratios and the net-moment ratios) have been published. It is interesting to note that for protons, which require a lower temperature for the yields and moments at the LHC and the higher RHIC energies, both measurements show the necessary sensitivity based on the experimentally achievable error bars. The lower temperature deduced for protons from lattice QCD and HRG model moment analyses~\cite{Borsanyi:2014ewa,Alba:2014eba} might thus explain the `proton anomaly' in the yield. Generally, the baryon yield ratios show a stronger sensitivity to the temperature than the corresponding lower moment ratios; nevertheless, for multi-strange hyperons, both particle yield and moment ratio exhibit a comparable steepness in the relevant temperature range. \begin{figure}[h!] 
\centering \includegraphics[width=0.45\textwidth]{fig2.pdf} \caption[]{\label{fig1} (Color online) Net-proton $\chi_2/\chi_1$ as a function of the temperature: comparison between the curves obtained with different kinematic cuts and with/without isospin randomization. All values are normalized to the proper value at 140 MeV.} \end{figure} The moment ratios which we show in Fig.~\ref{fig2} include kinematic cuts in rapidity and transverse momentum according to the experimental restrictions, as well as contributions from resonance decays. For the latter, two effects have to be taken into account. First, a resonance can decay and be regenerated, which leads, for sufficiently short-lived resonances, to an isospin randomization for the daughter hadrons. This effect is relevant, for example, for net-protons, since the $\Delta$-resonances have a very short lifetime. Kitazawa and Asakawa~\cite{Kitazawa:2011wh,Kitazawa:2012at} have proposed corrections, which are applied to the net-proton moment ratio in Fig.~\ref{fig2} (see also~\cite{Nahrgang:2014fza}). The magnitude of the kinematic cuts and the isospin randomization effect on the slope of $\chi_2/\chi_1$ is shown in Fig.~\ref{fig1} for net-protons: while different cuts on momentum and rapidity have a very small effect~\cite{Garg:2013ata}, the corrections due to isospin randomization (KA corrections) visibly increase the sensitivity of this curve to the temperature. The net-proton $\chi_2/\chi_1$ shown in Fig.~\ref{fig2} corresponds to the red, dotted line in Fig.~\ref{fig1}, for which all possible effects are taken into account. The second relevant effect is due to the probabilistic nature of the resonance decay, because the number of ground state hadrons follows a probability distribution due to the different branching options. This was first pointed out in~\cite{Begun:2006jf} and can be applied to the susceptibilities $\chi_i$ with $i>1$ by taking into account the actual decay contributions rather than an averaged branching pattern~\cite{Fu}. 
We apply this probabilistic effect to all particles investigated in Fig.~\ref{fig2} except the (anti-)protons, since here isospin randomization dominates the probabilistic resonance decay contributions. Fig.~\ref{fig1bis} shows the net-kaon and net-$\Lambda$ $\chi_2/\chi_1$ with and without the probabilistic decay contribution (PROB). As in the case of the isospin corrections, the temperature sensitivity changes when the effect is taken into account. It is interesting to note, though, that the relative temperature sensitivity for net-kaon fluctuations is reduced, whereas for net-$\Lambda$s the sensitivity is enhanced. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{fig2bis.pdf} \caption[]{\label{fig1bis} (Color online) Net-kaon and net-$\Lambda$ $\chi_2/\chi_1$ as a function of the temperature: comparison between the curves with/without the probabilistic decay contribution (PROB). All values are normalized to the proper value at 140 MeV.} \end{figure} The upper panel of Fig.~\ref{fig3} shows a comparison of the net-$\chi_2$/$\chi_1$ temperature sensitivity for all relevant stable hadrons that are measurable by the experiments. A noticeable baryon-to-meson difference can be established. This difference can be understood qualitatively already for primordial hadrons (without resonance decays) in the Boltzmann approximation, where the net-cumulant ratio $\chi_2/\chi_1$ is given by $\coth(\mu_i/T)$, with $\mu_i$ being the chemical potential of particle $i$~\cite{Karsch:2010ck}. The ratio $\tfrac{\mu_i}{T}$ has a different trend for mesons and baryons, as can be seen in the lower panel of Fig.~\ref{fig3} (note that the normalized difference from the value at $T=140$~MeV is shown), because the individual components that contribute to $\mu_i$, i.e. $\mu_B,\mu_Q,\mu_S$, sum up to an opposite temperature dependence for baryons and mesons. 
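The Boltzmann-limit statement can be checked directly: with $\langle N_i\rangle \propto e^{\mu_i/T}$ and $\langle N_{\bar i}\rangle \propto e^{-\mu_i/T}$ as independent Poisson yields, $\chi_1 \propto \sinh(\mu_i/T)$ and $\chi_2 \propto \cosh(\mu_i/T)$, so the ratio is $\coth(\mu_i/T)$. A short sketch (the temperature values are illustrative):

```python
import math

# Primordial Boltzmann approximation quoted in the text:
# chi2/chi1 = coth(mu_i / T), since chi1 ~ sinh(mu_i/T) and
# chi2 ~ cosh(mu_i/T) for independent Poisson particle/antiparticle yields.
def net_chi2_over_chi1(mu_over_T):
    return math.cosh(mu_over_T) / math.sinh(mu_over_T)   # coth

# At fixed mu_B the argument mu_B/T shrinks as T grows, so the
# baryon-like ratio increases with temperature.
for T in (140.0, 160.0, 180.0):
    print(T, net_chi2_over_chi1(24.3 / T))
```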
Let us point out that, due to resonances which can decay into a meson and/or its anti-meson, the susceptibilities for net-mesons cannot simply be obtained via independent production. For the second susceptibility this is seen from: \begin{align}\label{eq1} \mathlarger \chi_{2,net-i} = & \mathlarger\chi_{2} ^i+\mathlarger\chi_{2} ^{\bar{i}}+ \nonumber \\ & \sum _R (\langle \mathlarger n_i\rangle _R ^2+\langle \mathlarger n_{\bar{i}}\rangle _R ^2 -2\langle \mathlarger n_i\rangle _R\langle \mathlarger n_{\bar{i}}\rangle _R)\mathlarger\chi_2 ^R . \end{align} The first two terms, $\chi_2 ^{i,\bar{i}}$, represent the contributions from the primordial particles, while the sum in Eq.~(\ref{eq1}) accounts for the average resonance decay contributions. The last term in Eq. (\ref{eq1}) is non-zero whenever a resonance decays into a stable particle, its antiparticle or the particle-antiparticle pair. Since there are no known hadronic resonances that decay into a baryon and/or its anti-baryon, Eq.~(\ref{eq1}) reduces to independent production in the case of baryons, while correlations arise for mesons. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{fig3bis.pdf} \caption[]{\label{fig3}(Color online) Upper panel: comparison of the sensitivity of $\tfrac{\chi_2}{\chi_1}$ ratios to the chemical freeze-out temperature for different hadron species. Lower panel: $\Delta (\tfrac{\mu_i(T)}{T})$ for particle $i$ as a function of the temperature for mesons and baryons. All values are normalized to the proper value at 140 MeV and the baryo-chemical potential has been fixed to 24.3 MeV.} \end{figure} It has been established, on the basis of simple particle yields, that in a statistical model the strange baryons are more sensitive to the temperature since the strange quark rest mass is close to the temperature of the equilibrated system. As shown in Fig. 
\ref{fig3}, the steepness of the $\chi_2/\chi_1$ curves, and thus the sensitivity to the temperature, increases with the strange quark content, which means that cumulant ratios in the multi-strange baryon sector rise significantly faster than for the $\Lambda$. This effect is mainly caused by imposing vanishing net-strangeness, which means that even at constant $\mu_B$, the strange chemical potential $\mu_S$ has to increase as a function of the temperature (in our simulation from 3 to 8 MeV between 140 and 180 MeV). The impact can already be seen in the temperature dependence of the $\Lambda$ $\chi_2$/$\chi_1$. For the multi-strange baryons this rise grows according to the strange quark content, each quark leading to an additive strangeness chemical potential contribution. In other words, for the net-$\Omega$ at high $T$ the strangeness chemical potential ($\mu_S$ = 3$\times$8 MeV = 24 MeV) offsets the baryo-chemical potential ($\mu_B$ = 24.3 MeV) since the two chemical potentials always carry opposite sign. At this point the net-multiplicity grows much more slowly than the variance, which leads to a strongly increasing $\chi_2$/$\chi_1$ ratio. This effect is not unique to the specific value of the baryo-chemical potential; in other words, at lower and higher collision energies the temperature dependence of the strange baryons will have similar trends to the ones obtained here. The relative increase of $\mu_S(T)$ in our HRG model is comparable to the one deduced from lattice QCD, but the absolute values do not match the dependence from lattice QCD~\cite{Borsanyi:2013hza}, which is about 15\% higher than in a standard HRG model. We note that the splitting between the net-pion and net-kaon results is, likewise, a consequence of the different strangeness content and the imposed strangeness neutrality. 
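The rise of $\mu_S$ with $T$ under strangeness neutrality can be illustrated with a deliberately reduced toy model (our construction, not the full HRG of this paper): keep a single strange meson (kaon, chemical potential $\mu_S$, with $\mu_Q$ neglected) and a single strange baryon ($\Lambda$, strangeness $-1$, chemical potential $\mu_B-\mu_S$) in the non-relativistic Boltzmann approximation, and solve $n_S=0$ by bisection. The absolute values differ from the full model, which includes all strange states, but the monotonic rise survives:

```python
import math

# Toy strangeness-neutrality solver with only two strange species.
def weight(m, g, T):
    # non-relativistic Boltzmann phase-space weight, n ~ g (m T)^{3/2} e^{-m/T}
    return g * (m * T) ** 1.5 * math.exp(-m / T)

def mu_S(T, mu_B, m_K=494.0, m_Lam=1116.0):
    # net strangeness: z_K sinh(mu_S/T) - z_Lam sinh((mu_B - mu_S)/T) = 0
    zK, zL = weight(m_K, 2.0, T), weight(m_Lam, 2.0, T)
    lo, hi = 0.0, mu_B
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if zK * math.sinh(mid / T) < zL * math.sinh((mu_B - mid) / T):
            lo = mid        # kaon net-strangeness still too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

for T in (140.0, 160.0, 180.0):
    print(T, mu_S(T, mu_B=24.3))   # mu_S grows monotonically with T
```

The growth comes from the $\Lambda$/kaon weight ratio $\propto e^{-(m_\Lambda-m_K)/T}$ increasing with $T$: more net strange baryons must be compensated by a larger $\mu_S$.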
Preliminary studies, using all measured resonance states listed in PDG-2014~\cite{Agashe:2014kda} up to a mass of $2.5$ GeV/c$^2$, show that the inclusion of these known higher-mass resonances further improves the agreement with the lattice QCD $\mu_S/\mu_B$~\cite{Alba:publish}. This might suggest the existence of even higher-mass, as-yet-unidentified strange states, some of which are predicted by quark models~\cite{Bazavov:2014xya, Noronha-Hostler:2014aia}, but any further expansion of the resonance mass spectrum needs to be rigorously tested against all sensitive susceptibility predictions from lattice QCD. We have not included those Hagedorn-type states in our calculation. We have verified, though, that the higher $\mu_S$ caused by the additional states listed in PDG-2014 will further increase the temperature dependence of the multi-strange baryon $\chi_2$/$\chi_1$ shown in Fig.~\ref{fig3} (by $(10-25)\%$ in the relevant temperature range), whereas the particle yield ratios remain largely unaffected. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{fig4.pdf} \caption[]{\label{fig4}(Color online) Comparison of sensitivity of the net-$\chi_2$/$\chi_1$, $\chi_3$/$\chi_2$ and $\chi_4$/$\chi_2$ ratios to the chemical freeze-out temperature. All values are normalized to the proper value at 140 MeV. The baryo-chemical potential has been fixed to 24.3 MeV.} \end{figure} Fig.~\ref{fig4} shows that the sensitivity of the lower cumulant ratios is also preserved in the $\chi_3$/$\chi_2$ ratio, although the sign of the temperature dependence changes. As expected, the kurtosis-based ratio ($\chi_4$/$\chi_2$) shows significantly less sensitivity, but its determination is crucial to obtain the strange quark freeze-out conditions, since in lattice QCD only the even strangeness susceptibilities yield a finite result. 
Therefore any link to first-principles calculations needs to be established through $\chi_4$/$\chi_2$, which is the lowest even ratio that can be measured. As shown in Fig.~\ref{fig4}, each strange particle by itself shows very little temperature dependence, but the more sensitive total net-strangeness $\chi_4$ and $\chi_2$ can be calculated using a combination of all contributing strange particles with pre-factors based on their strangeness content. It should be noted that no PROB corrections were applied to the net-higher moment ratios in Fig.~\ref{fig4}. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{fig5a.pdf}\\ \includegraphics[width=0.45\textwidth]{fig5b.pdf} \caption{\label{fig5}(Color online) Upper panel: comparison of $\chi_4^{net-S}/\chi_2^{net-S}$ obtained from different strange particle combinations. The black squares are the lattice QCD results from Ref.~\cite{Bellwied:2013cta}. Lower panel: comparison of $\chi_2^{net-S}/\chi_1^{net-S}$ obtained from different strange particle combinations.} \end{figure} In Fig.~\ref{fig5} we address the question of whether measuring a subset of particle species that carry strangeness is sufficient to relate the experimental results to calculations of the full strange quark content on the lattice. In the upper panel, we show a comparison of $\chi_4^{net-S}/\chi_2^{net-S}$ obtained from different strange particle combinations in our HRG model. It is evident that, in order to achieve a sensitivity to the temperature comparable to the one on the lattice, multi-strange baryons need to be included in the analysis. Obtaining sufficient statistics in the experiment for higher moments becomes increasingly difficult for such baryons. A study purely based on a comparison between experimental data and HRG model predictions can be based on $\chi_2$/$\chi_1$ alone, though, since there is no advantage in using the higher cumulants in terms of sensitivity. The lower panel of Fig. 
\ref{fig5} shows that, contrary to our higher-moment study for $\chi_4/\chi_2$, the sensitivity of $\chi_2/\chi_1$ does not benefit from the inclusion of multi-strange baryons: all curves are parallel, thus yielding a comparable sensitivity to $T$. Given the difficulty of obtaining sufficient statistics for multi-strange baryons, it might therefore even be advantageous to look only at lighter strange hadrons as long as one only wants to compare to statistical model predictions. \section*{Conclusions} In this paper we have shown that, for certain final-state hadrons, the statistical analysis of the higher moments of the net-particle distributions might add significant sensitivity to the determination of chemical freeze-out parameters compared to a model that uses particle yields and ratios only. In particular, we found that the temperature sensitivity of the net-kaon $\chi_2/\chi_1$ ratio, relative to the experimentally achievable accuracy, is sufficient to reliably determine $T_{ch}$, quite in contrast to the $K/\pi$ ratio. For protons we found a comparable relation between temperature sensitivity and experimental accuracy for both particle and moment ratios. Our study indicates that it should be possible to obtain a temperature for the chemical freeze-out of strangeness from the efficiency-corrected net-kaon $\chi_2/\chi_1$. This might be of particular interest if the strange hadron sector displays a higher $T_{ch}$ than the one extracted from net-proton and net-charge distributions, which are dominated by light quark particles. For a comprehensive study of the chemical freeze-out properties, we suggest that the experiments provide detailed moment analyses for all identified particle species with sufficient event-by-event yields. Efficiency-corrected experimental results for the lowest four moments are already available for net-protons from the RHIC beam energy scan.
Uncorrected data have also been shown for net-kaons, and we are anticipating corrected net-kaon moment distributions from RHIC and LHC in the near future. \section*{Acknowledgements} This work is supported by the Italian Ministry of Education, Universities and Research under the Firb Research Grant RBFR0814TT, the US Department of Energy grants DE-FG02-07ER4152, DE-FG02-03ER41260 and DE-FG02-05ER41367 and a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD).
\section{\bf Introduction} Bacterial suspensions exhibit remarkable macroscopic properties due to the emergence of self-organization among their components. In particular, effective properties such as enhanced diffusivity, the formation of sustained whorls and jets, and the ability to extract useful work, among others, have recently been observed in suspensions of bacteria such as {\it Bacillus subtilis} \cite{Wu,Sokolov2,LepGuaGolPesGol09,Sokolov3,CisKesGanGol11}. The striking experimental observations on the effective viscosity provide the motivation for studying a suspension's effective properties; namely, the observation of a seven-fold reduction in the effective viscosity of a suspension of swimming {\it B. subtilis} \cite{Sokolov}. This reduction is observed below $2\%$ volume fraction, typically referred to as the {\it dilute regime}, where bacteria are far apart and essentially interact with the background fluid only. With the assumption of no interbacterial interactions, this regime has been studied analytically in recent works (e.g., \cite{Saintillan1,Haines1,Haines2,Haines3}). There bacterial tumbling was introduced in order for the formula to predict a decrease in the effective viscosity \cite{Haines3}. However, in the absence of tumbling (e.g., for anaerobic bacteria) the decrease is still observed experimentally \cite{Sokolov}. It was shown recently in \cite{RyaHaiBerZieAra11} that interbacterial interactions substantially contribute to the effective viscosity, and an estimate for this contribution was given. Rigorous analysis of this contribution and its corresponding effect on the effective viscosity of the suspension is the main component of this paper. We begin with an individual based model (IBM) previously introduced in \cite{RyaHaiBerZieAra11,RyaBerHaiKar12}, which has been successfully used to capture the decrease in the effective viscosity and other collective phenomena.
Such suspensions, where interbacterial interactions play an important role and are modeled as a sum of pairwise interactions, are referred to as {\it semi-dilute}. Our goal is to identify the underlying mechanisms that contribute to the decrease of the effective viscosity in this concentration regime. The main tool we employ is a kinetic theory derived from this individual based model. The purpose of employing a kinetic approach is to replace a large system of coupled differential equations by a single continuum partial differential equation with respect to a probability distribution of bacterial positions and orientations. Note that it is natural to consider probabilistic quantities since the main focus of this work is the study of the effective properties. The main computational advantage of the kinetic approach is that the number of bacteria $N$ does not increase the complexity of the problem \cite{Spoh91,BerJabPot13}. Namely, the PDE can be solved numerically with a fixed spatial or temporal grid independent of $N$. In addition to the ability to consider many different initial conditions at once, another advantage of introducing this probabilistic framework is the possibility of considering the limiting regime as $N \to \infty$, the so-called {\it mean field limit}. More information on kinetic equations can be found in the seminal works of the 1970s \cite{NeuWic74,BraHep77,Dob79} or more contemporary reviews \cite{Car10,JabPer00,Per04,Deg04}. Significant difficulty in the analysis comes from the incorporation of interactions. First, they appear in the kinetic equation as a non-local term due to the fact that the suspension of interacting bacteria is generally described analytically by configurations of \emph{all} bacteria. Second, the main interactions that are taken into account are hydrodynamic, which diverge as the inverse square of their separation as bacteria approach one another. This results in a singular kernel in this non-local term.
Thus, the kinetic equation consists of a nonlocal, nonlinear PDE due to the presence of interactions. Using a kinetic approach, the main result of this paper is an explicit asymptotic formula for the effective viscosity with interbacterial interactions taken into account. The formula reveals the physical mechanisms necessary for the decrease in effective viscosity observed experimentally. To achieve this result we first find the steady state solution of the kinetic equation and then use this solution to compute the effective viscosity. For completeness, we also establish the well-posedness of the kinetic equation. This paper is organized as follows. Section~\ref{model} begins by introducing the individual based model under consideration for a semi-dilute bacterial suspension. From this, the kinetic equation for the orientation distribution is formally derived. The reason we begin with the IBM is that the effective properties of a suspension are derived from knowledge of microscopic configurations, which is transferred from the IBM to the kinetic model. In Section~\ref{assumption} we introduce the main conditions under which we derive the asymptotic formula for the effective viscosity and discuss their physical significance. Section \ref{derivodens} contains the derivation of the asymptotic steady state solution to the kinetic equation for the orientation distribution in the limit of small non-sphericity. The effective viscosity from the asymptotic formula is then compared to the same quantity computed from direct simulations of the individual based model in Section \ref{evform}. The important physical mechanisms for the decrease in viscosity are identified and the orientation distribution is compared to the results of previous works in the dilute case. In addition, the normal stress differences and relaxation time are considered. The existence, uniqueness and regularity properties of a solution to the kinetic PDE are proven in Section \ref{kin_solvable}. 
Finally, we formulate our conclusions and outline potential future investigations in Section~\ref{conc}. \section{\bf Model for Semidilute Bacterial Suspensions}\label{model} We begin by introducing the coupled PDE/ODE system governing the fluid and bacterial dynamics, respectively. Each bacterium is represented as a point force dipole. One force represents the bacterium's propulsion mechanism (e.g., flagellar motion) and the other is the opposing viscous drag exerted by the bacterium's body on the fluid. This approximation has been experimentally verified by observing the flow due to a bacterium (e.g., {\it Bacillus subtilis}) in a fluid and comparing it to that of a force dipole \cite{DreDunCisGanGol11}. As a bacterium swims through the fluid, its trajectory may be altered through interactions with other bacteria and the background flow. At every moment in time, a bacterium propels itself in the direction in which it is oriented. If one bacterium comes into close contact with another, then a collision can occur, altering the bacterium's position. This is modeled by an excluded volume potential. Finally, the flow itself affects a bacterium's trajectory, both through the ambient background flow and through the sum of the flows that the propulsion of all the other bacteria induces at its position. To make these ideas more concrete, we now introduce an individual based model (IBM), which governs a bacterium's position and orientation. We consider $N$ bacteria with the position of the center of mass of the $i$th bacterium ${\bf x}^i = (x^i,y^i,z^i)$ and orientation ${\bf d}^i = (d_1^i,d_2^i,d_3^i)$. A bacterium's translational velocity is derived from a balance of forces due to self-propulsion, collisions, and the flow field acting on the position of the bacterium.
A bacterium's orientation velocity is derived from a balance of torques in the form of Jeffery's equation for an ellipsoid in a linear flow with additional terms due to the flows generated by the other bacteria in the suspension \cite{Jef22}. Thus, the equations of motion for bacterial positions ${\bf x}$ and orientations ${\bf d}$ originally introduced from first principles in \cite{RyaHaiBerZieAra11} are \begin{eqnarray} \dot{\bf x}^i & =& V_0{\bf d}^i + \sum_{j \neq i} \left( {\bf u}^j({\bf x}^i,{\bf d}^j) + {\bf F}^j({\bf x}^i)\right) + {\bf u}^{\text{BG}}({\bf x}^i), \label{rij-0} \displaystyle\\ \dot{\bf d}^i & =& -\frac{1}{2}{\bf d}^i \times \left(\boldsymbol{\omega}^{\text{BG}}_0({\bf x}^i) + \sum_{j \neq i} \boldsymbol{\omega}^j({\bf x}^i,{\bf d}^j)\right)\nonumber\\&&\hspace{15 pt}-{\bf d}^i \times \left[B{\bf d}^i \times \left({\bf E}^{\text{BG}}_0({\bf x}^i) + \sum_{j \neq i} {\bf E}^j({\bf x}^i,{\bf d}^j)\right) {\bf d}^i\right] + \sqrt{2D}\dot{W}, \displaystyle \label{dij} \end{eqnarray} where $V_0$ is an individual bacterium's swimming speed and $B$ is the Bretherton constant which takes into account the geometry of the bacterium's body ($B\ll 1$: near spherical, $B \approx 1$: needle-like). The externally-imposed planar shear flow contributes to each bacterium's motion through the fluid velocity, ${\bf u}^{BG} = (0, \gamma x, 0)^T$, as well as its effect on a bacterium's orientation through the vorticity $\boldsymbol{\omega}^{\text{BG}}_0 = \nabla_{{\bf x}} \times {\bf u}^{BG}$ and rate of strain ${\bf E}^{\text{BG}}_0 = \frac{1}{2}\left(\nabla_{{\bf x}} {\bf u}^{BG} + (\nabla_{{\bf x}} {\bf u}^{BG})^T\right)$. Here $W$ is a white noise and we let $D \sim B^2$ be the diffusion coefficient. 
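For the planar shear ${\bf u}^{BG} = (0,\gamma x,0)^T$ introduced above, the background vorticity and rate of strain take the simple closed forms $\boldsymbol{\omega}^{\text{BG}}_0 = (0,0,\gamma)$ and $E^{\text{BG}}_{0,xy} = E^{\text{BG}}_{0,yx} = \gamma/2$ with all other entries zero. A quick numerical sketch (not part of the model itself) confirming this via finite differences:

```python
import numpy as np

gamma = 1.0  # shear rate (arbitrary units, an assumption for the check)

def u_bg(x):
    # planar shear u^BG = (0, gamma * x, 0)^T
    return np.array([0.0, gamma * x[0], 0.0])

def grad(f, x, h=1e-6):
    # central finite-difference gradient, g[i, j] = d u_i / d x_j
    g = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        g[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([0.3, -0.7, 1.1])
G = grad(u_bg, x0)
E = 0.5 * (G + G.T)  # rate of strain E^BG
w = np.array([G[2, 1] - G[1, 2], G[0, 2] - G[2, 0], G[1, 0] - G[0, 1]])  # vorticity

print(w)  # close to (0, 0, gamma)
print(E)  # gamma/2 on the (x, y) off-diagonal entries, zero elsewhere
```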
This order of $D$ will be used throughout this work and represents the idea that the random motion present in the system has a greater effect the more elongated a particle is. The additional terms in Jeffery's equation \eqref{dij} beyond the contribution from the background flow are due to the vorticity $ \boldsymbol{\omega}^j$ and rate of strain ${\bf E}^j$ generated by the $j$-th dipole at the position of the $i$-th dipole \begin{align*} \boldsymbol{\omega}^j &= \nabla_{{\bf x}} \times {\bf u}^j, \qquad {\bf E}^j = \frac{1}{2}\left(\nabla_{{\bf x}} {\bf u}^j + (\nabla_{{\bf x}} {\bf u}^j)^T\right). \end{align*} Each of these terms depends on the fluid velocity ${\bf u}^{j}$, which is governed by the Stokes equation and will be described in greater detail below. \begin{remark} The equations of motion \eqref{rij-0}-\eqref{dij} form a coupled system of $5N$ ordinary differential equations, in comparison to the dilute case studied in \cite{Haines3} where there were only two ODEs governing the evolution of a single bacterium in an infinite medium (only depending on a single bacterium's orientation). Thus, the semi-dilute system of equations is considerably more complex than the dilute case previously studied. \end{remark} The use of the Stokes equation to model the fluid is justified by estimating the Reynolds number. Based on the typical size $\ell_0 \sim 1\;\mu \text{m}$ and swimming speed $V_0 \sim 20 \;\mu \text{m}/\text{s}$ of a bacterium, in addition to the typical dynamic viscosity $\eta_0 \sim \;10^{-3}\; \text{Pa}\cdot \text{s}$ and density $\rho \sim \;10^{3}\; \text{kg}/\text{m}^3$ of the suspending fluid, the flow has a Reynolds number $Re$ around $2\times 10^{-5} \ll 1$. Thus, inertial effects can be neglected. Also, it is assumed that a steady-state flow is established on a timescale much smaller than the characteristic timescale, which is the time for a bacterium to swim its length.
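The quoted Reynolds number follows directly from the characteristic scales; a one-line arithmetic check:

```python
# Reynolds-number estimate Re = rho * V0 * ell0 / eta0 from the scales above
rho  = 1.0e3   # fluid density [kg/m^3]
eta0 = 1.0e-3  # dynamic viscosity [Pa s]
V0   = 20e-6   # swimming speed [m/s]
ell0 = 1e-6    # bacterium size [m]

Re = rho * V0 * ell0 / eta0
print(Re)  # approximately 2e-5, so inertia is negligible and Stokes flow applies
```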
The flow at the position of bacterium $i$ due to bacterium $j$ is given by ${\bf u}^j({\bf x}^i,{\bf d}^j) = {\bf u}({\bf x}^j-{\bf x}^i,{\bf d}^j)$ where ${\bf u}({\bf x},{\bf d})$ is a solution of the Stokes problem \begin{equation}\label{stokeseqnsingle} \left\{ \begin{array}{lr} \displaystyle \eta_0\Delta_{{\bf x}}\mathbf{u}(\mathbf{x},{\bf d}) - \nabla_{{\bf x}} p(\mathbf{x},{\bf d}) = \nabla_{{\bf x}} \cdot \bigl[ {\bf D}(\mathbf{d}) \delta(\mathbf{x})\bigr], & \mathbf{x} \in \mathbb{R}^3,\\ \nabla_{{\bf x}} \cdot \mathbf{u}(\mathbf{x},{\bf d}) = 0, & \mathbf{x} \in \mathbb{R}^3,\\ \mathbf{u}({\bf x},{\bf d}) \to 0, & |{\bf x}| \to \infty,\\ \end{array} \right. \end{equation} where $\eta_0$ is the ambient fluid viscosity and $p$ is the pressure. The dipole tensor ${\bf D}=\{D_{lm}\}$ is given by \begin{equation}\label{dipolemoment} D_{lm}(\mathbf{d}) := U_0\left(d_ld_m - \frac{1}{3}\delta_{lm}\right), \end{equation} where $U_0$ is the strength of the dipole referred to as the {\it dipole moment}. For pushers, bacteria that propel themselves from behind such as {\it B. subtilis}, $U_0<0$. Equation \eqref{stokeseqnsingle} has an explicit solution: \begin{equation} u_k({\bf x}, {\bf d}) := \sum_{l=1}^3\sum_{m=1}^3 D_{lm}({\bf d})\mathcal{G}_{kl,m}({\bf x}), \end{equation} where $\mathcal{G}_{kl}({\bf x}) = \frac{1}{8\pi\eta_0}\left(\frac{\delta_{kl}}{|{\bf x}|}+\frac{{x}_k{x}_l}{|{\bf x}|^3}\right)$ is the Oseen tensor. \begin{remark} In order to study the role of interactions in semi-dilute suspensions it is natural to deal with a point representation of swimmers such that the whole suspension is modeled by points interacting in the fluid. In our paper, a swimmer is represented by a point force dipole with the dipole tensor \eqref{dipolemoment}. In general, for a given model of a swimmer, such a point representation can be found as the second-order term in the multipole expansion, see \cite{KimKar91}.
{\it We note that all results of this paper such as the asymptotic formula for orientation distribution and effective viscosity can be easily modified to semi-dilute suspensions with swimmers whose dipole tensor is different from \eqref{dipolemoment}.} \end{remark} In order to analyze the system \eqref{rij-0}-\eqref{dij}, the associated kinetic theory for the probability density of bacterial configurations (positions and orientations of each bacterium) is studied. In general, to derive the corresponding kinetic equation one assumes that initial conditions are random. Then each sum in the equations of motion is a sum of identically distributed random variables. The key step in the formal derivation of the kinetic equation is replacing all sums in the equations of motion by their expectations \cite{Poz00,Spoh91,Jab14}. This allows one to replace all the sums representing interactions by integrals with respect to a probability density function $P (t, {\bf x}, {\bf d})$ of finding a given bacterium at position ${\bf x}$ with orientation ${\bf d}$. 
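The mean-field replacement of empirical sums by their expectations is an instance of the law of large numbers; a toy one-dimensional sketch (with a bounded stand-in kernel rather than the singular hydrodynamic one, and uniform sampling as an assumed density):

```python
import random
random.seed(0)

# Toy illustration of the mean-field step: for i.i.d. positions x_j the
# empirical average of an interaction kernel converges to its expectation
# with respect to the probability density (here: uniform on [0, 1]).
def kernel(x):
    return x * x  # bounded stand-in for an interaction term

N = 200_000
empirical = sum(kernel(random.random()) for _ in range(N)) / N
expectation = 1.0 / 3.0  # integral of x^2 over [0, 1]

print(abs(empirical - expectation))  # O(1/sqrt(N)) fluctuation, here well below 1e-2
```

The same replacement applied to the interaction sums in \eqref{rij-0}-\eqref{dij}, with $P(t,{\bf x},{\bf d})$ as the density, is what produces the integral fluxes below.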
By replacing the sums with integrals in the system \eqref{rij-0}-\eqref{dij} and enforcing conservation of probability, a standard Fokker-Planck equation describing the evolution of the density $P$ is obtained \begin{equation}\label{init} \partial_t P + \nabla_{\bf x} \cdot ({\bf V} P)+ \nabla_{\bf d} \cdot (\boldsymbol{\Omega} P) - D\Delta_{\bf d} P=0, \end{equation} where the translational and orientation fluxes are defined by { \begin{eqnarray} \hspace{-.2in} {\bf V}({\bf x},{\bf d}) &:=& V_0{\bf d} + \frac{1}{|V_L|}\int_{S^2}\int_{V_L} {\bf u} P({\bf x}',{\bf d}')d{\bf x}'dS_{{\bf d}'}+{\bf u}^{BG}({\bf x}),\label{ktrij}\\ \hspace{-.2in} \boldsymbol{\Omega}({\bf x},{\bf d}) &:=& \frac{1}{|V_L|}\int_{S^2} \langle \boldsymbol{\omega}+B{\bf E},P({\bf x}',{\bf d}') \rangle_{{\bf x}'} dS_{{\bf d}'}+\boldsymbol{\omega}^{BG}({\bf d})+B{\bf E}^{BG}({\bf d}).\label{ktdij} \end{eqnarray}} Here $<\cdot , \cdot >$ denotes the duality with respect to the $L^2$-norm, $V_L := [-L,L]^3$, and we neglect the Lennard--Jones term $F$ due to the fact that collisions only play a small role at low concentrations. The functions $u,\boldsymbol{\omega},$ and ${\bf E}$ under the integral sign depend on ${\bf x}-{\bf x}',{\bf d}$ and ${\bf d}'$, and they are defined as follows \begin{eqnarray} &{\bf u}({\bf x}, {\bf d}):=\frac{U_0}{8\pi\eta_0}\nabla_{\bf x}\cdot \left[({\bf d}{\bf d}-I/3)\mathcal{G}({\bf x})\right],&\nonumber\\ &\boldsymbol{\omega}({\bf x},{\bf d},{\bf d}'):=-\frac{1}{2}\bf d\times\left[\nabla_{\bf x}\times {\bf u}({\bf x},{\bf d}')\right],\label{defn}&\\ &\;{\bf E}({\bf x},{\bf d},{\bf d}'):=-{\bf d} \times\left[{\bf d} \times D_{\bf x}({\bf u}({\bf x},{\bf d}')){\bf d}\right],& \nonumber \end{eqnarray} where $D_{{\bf x}}({\bf u}) := \frac{1}{2}(\nabla_{\bf x} {\bf u} + [\nabla_{{\bf x}} {\bf u}]^T)$ represents the symmetric gradient and $I$ is the identity matrix. 
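The dipolar velocity entering these fluxes has the classical stresslet far field $u({\bf x}) \propto (3({\bf d}\cdot\hat{\bf x})^2-1)\,\hat{\bf x}/|{\bf x}|^2$. The sketch below checks two properties any such field must satisfy, incompressibility and $1/|{\bf x}|^2$ decay; the normalization is our own assumption and matches \eqref{defn} only up to convention:

```python
import numpy as np

U0, eta0 = -1.0, 1.0e-3  # pusher dipole moment (U0 < 0) and viscosity (assumed values)

def u_dipole(x, d):
    # classical stresslet far field of a force dipole oriented along d
    r = np.linalg.norm(x)
    xhat = x / r
    return U0 / (8 * np.pi * eta0 * r**2) * (3 * np.dot(d, xhat)**2 - 1) * xhat

def divergence(f, x, h=1e-5):
    # central finite-difference estimate of div f at x
    return sum((f(x + h * e) - f(x - h * e))[i] / (2 * h)
               for i, e in enumerate(np.eye(3)))

d  = np.array([1.0, 0.0, 0.0])
x0 = np.array([1.0, 2.0, -0.5])

div = divergence(lambda x: u_dipole(x, d), x0)
decay = np.linalg.norm(u_dipole(2 * x0, d)) / np.linalg.norm(u_dipole(x0, d))

print(div)    # near zero: the dipole field is incompressible
print(decay)  # 0.25: the field decays as 1/|x|^2, so its gradients decay as 1/|x|^3
```

The $1/|{\bf x}|^3$ decay of the gradients is exactly why the vorticity and strain-rate kernels above require the principal-value interpretation.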
Also, $\boldsymbol{\omega}^{BG}({\bf d})$ and ${\bf E}^{BG}({\bf d})$ are defined in the same way as \eqref{defn}, but with the fluid velocity ${\bf u}$ replaced with the background flow ${\bf u}^{BG}$. \begin{remark} Since $\boldsymbol{\omega}, {\bf E} \sim \frac{1}{|{\bf x}-{\bf x}'|^3}$, the integrals with respect to the spatial variables must be considered in the distributional or principal value sense (which are equivalent here). Namely, \begin{equation*} <\frac{\partial u_i}{\partial x_j},\varphi>=C_{ij}({\bf d})\varphi(0)+\int\limits\frac{\partial u_i}{\partial x_j}(\varphi({\bf x})-\varphi(0))d{{\bf x}}, \end{equation*} where \begin{equation*} C_{ij}({\bf d})=\lim\limits_{\varepsilon \rightarrow 0}\int\limits_{|{\bf x}|=\varepsilon}u_in_jds_{{\bf x}}. \end{equation*} \end{remark} The orientation vector ${\bf d} \in \mathcal{S}^2$ can be represented by two independent angles in spherical coordinates \begin{eqnarray}\label{d_spherical} &{\bf d} :=(\cos\alpha \sin\beta,\sin\alpha \sin\beta,\cos\beta)=(d_1,d_2,d_3), \end{eqnarray} for azimuthal angle $\alpha \in [0,2\pi)$ and polar angle $\beta \in [0,\pi)$ with unit basis vectors $\hat{\alpha} :=(-\sin\alpha,\cos\alpha,0)$ and $\hat{\beta} :=(\cos\alpha\cos\beta,\sin\alpha\cos\beta,-\sin\beta)$ respectively. Here one must be careful to note that the divergence and the Laplacian in orientations (the Laplace-Beltrami operator) in \eqref{init} are taken over the unit sphere. In particular, for any field $A=A({\bf d})$ the following definition holds \begin{eqnarray}\nonumber \nabla_{{\bf d}}\cdot A &:=&\frac{1}{\sin\beta}\left[\partial _{\alpha}(A_{\alpha})+\partial_{\beta}(\sin\beta A_{\beta})\right] \\ &\hspace{5 pt}=& \tilde{\nabla}_{{\bf d}}\cdot A-\left.\frac{\partial }{\partial |{\bf d}|}\left\{|{\bf d}|^2(A\cdot {\bf d})\right\}\right|_{|{\bf d}|=1},\label{div2} \end{eqnarray} where $A_{\alpha}=A\cdot\hat{\alpha}$, $A_{\beta}=A\cdot \hat{\beta}$, and $\tilde{\nabla}_{{\bf d}}$ is the classical gradient. 
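The spherical parametrization \eqref{d_spherical} can be sanity-checked numerically; the sketch below verifies that $\{{\bf d},\hat{\alpha},\hat{\beta}\}$ forms an orthonormal frame at an arbitrary point of the unit sphere:

```python
import numpy as np

def d_vec(alpha, beta):
    return np.array([np.cos(alpha) * np.sin(beta),
                     np.sin(alpha) * np.sin(beta),
                     np.cos(beta)])

def alpha_hat(alpha, beta):
    return np.array([-np.sin(alpha), np.cos(alpha), 0.0])

def beta_hat(alpha, beta):
    return np.array([np.cos(alpha) * np.cos(beta),
                     np.sin(alpha) * np.cos(beta),
                     -np.sin(beta)])

a, b = 0.7, 1.2  # arbitrary test angles
d, ah, bh = d_vec(a, b), alpha_hat(a, b), beta_hat(a, b)

# {d, alpha_hat, beta_hat}: unit length and mutually orthogonal
for v in (d, ah, bh):
    assert abs(np.dot(v, v) - 1.0) < 1e-12
for u, v in ((d, ah), (d, bh), (ah, bh)):
    assert abs(np.dot(u, v)) < 1e-12
print("orthonormal frame verified")
```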
\subsection{\bf Definition of the effective viscosity for a suspension of point force dipoles} To define the effective viscosity, consider the contributions to stress: (i) due to dipolar hydrodynamic interactions $$ \Sigma_{lm}^d(\overline{\bf d}) := \sum_{i = 1}^N \frac{U_0}{|V_L|}(d_l^id_m^i - \delta_{lm}/3),\;\;l,m=1,2,3, $$ depending only on each particle's orientation \cite{Bat70} and (ii) due to soft collisions (the excluded volume constraints) $$ \Sigma^{LJ}_{lm}(\overline{\bf x}) := \sum_{i=1}^N \sum_{j \neq i} \frac{ F_l({\bf x}^i-{\bf x}^j) (x_m^i-x_m^j)}{|V_L|},\;\;l,m=1,2,3, $$ depending only on the relative positions of each bacterium \cite{Ziebert}. Both are combined to form the total stress due to interactions first used in \cite{RyaHaiBerZieAra11,RyaBerHaiKar12}. We assume that all bacteria are in the volume $V_L$ at any instant of time. The bacterial configurations are denoted by $\overline{\bf x} := ({\bf x}^1,...,{\bf x}^N)$ and $\overline{\bf d} := ({\bf d}^1,...,{\bf d}^N)$. The ultimate goal is to compute the effective viscosity due to hydrodynamic interactions at low concentrations for comparison with experimental observation \cite{Sokolov} and numerical simulations. At lower concentrations $\phi$, where the striking experimental decrease in the effective viscosity was observed, the contribution due to collisions is relatively small and will be neglected in the subsequent analysis: \begin{equation}\label{eqn3141} \Sigma({\bf x}, {\bf d}) = \Sigma^d({\bf d}) + \Sigma^{LJ}({\bf x}) \approx \Sigma^d({\bf d}), \quad \text{for $\phi$ small}. \end{equation} The exact concentration interval where the formula \eqref{eqn3141} works well will be determined later by comparison with direct numerical simulations of the suspension.
Thus, it is sufficient to restrict attention to the density of orientations, denoted $P_{{\bf d}}({\bf d})$ and defined as \begin{equation} P_{{\bf d}}({\bf d}) := \frac{1}{N} \int_{V_L} P({\bf x}, {\bf d}) d{\bf x}, \qquad \text{where} \quad \int_{\mathcal{S}^2} P_{{\bf d}}({\bf d}) dS_{{\bf d}} = 1. \end{equation} For comparison with experiment, the main quantity of interest is the shear viscosity or component $\eta_{1212}$ of the fourth-order viscosity tensor relating the stress to the strain, henceforth denoted as $\hat{\eta}$. We define the effective viscosity as the averaged ratio of the corresponding components of the stress and strain tensors \begin{equation} \frac{\hat{\eta} - \eta_0}{\eta_0} := \frac{1}{|V_L|} \int_{V_L} \int_{\mathcal{S}^2} \frac{\Sigma_{xy}}{\gamma} P({\bf x}, {\bf d}) d{\bf x}dS_{{\bf d}} = \frac{\rho}{\gamma}\int_{\mathcal{S}^2} \Sigma_{xy}^d({\bf d}) P_{\bf d}({\bf d})dS_{{\bf d}},\label{evsep} \end{equation} as in \cite{RyaHaiBerZieAra11,RyaBerHaiKar12}. Here $\rho=N/|V_L|$ is the mean concentration or number density. The following nonlinear, nonlocal integro-differential equation describes the evolution of the orientation density $P_{{\bf d}}(t,{\bf d})$ \begin{equation}\label{cons2} \partial_t P_{{\bf d}}(t,{\bf d}) = -\nabla_{\bf d} \cdot \left(< {\bf \Omega} >_{\bf x}P_{\bf d}(t,{\bf d})\right), \end{equation} where $< {\bf \Omega} >_{\bf x} = \frac{1}{N}\int_{V_L} {\bf \Omega} P_{\bf x}(t,{\bf x}) d{\bf x}$ and ${\bf \Omega}$ contains the background flow and interaction terms {\begin{equation*} {\bf \Omega }(t,{\bf x},{\bf d}) = \boldsymbol{\omega}^{BG} +B{\bf E }^{BG} + \frac{1}{N|V_L|} \int_{\mathcal{S}^2} \int_{V_L} \langle \boldsymbol{\omega} + B{\bf E} , P (t,{\bf x}^\prime, {\bf d}^\prime) \rangle d {\bf x}^\prime d {\bf d}^\prime. \end{equation*}} Equation \eqref{cons2} is obtained by integrating \eqref{init} in ${\bf x}$ and dividing by $N$.
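A quick Monte Carlo reading of \eqref{evsep}: for an isotropic orientation density the dipolar stress $\Sigma^d_{xy} \propto d_1 d_2$ averages to zero, so any viscosity change must come from the shear-induced anisotropy of $P_{\bf d}$. A hedged sketch with arbitrary parameter values:

```python
import numpy as np
rng = np.random.default_rng(1)

def sample_uniform_sphere(n):
    # normalize Gaussian vectors: exactly uniform on the unit sphere
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

U0, rho, gamma = -1.0, 0.02, 1.0  # pusher dipole, number density, shear rate (assumed)

d = sample_uniform_sphere(500_000)
sigma_xy = U0 * d[:, 0] * d[:, 1]    # per-bacterium dipolar stress component
dev = rho / gamma * sigma_xy.mean()  # Monte Carlo estimate of (eta_hat - eta_0)/eta_0

print(dev)  # near zero: an isotropic P_d contributes nothing to the shear viscosity
```

This is why the derivation below concentrates on the $O(B)$ anisotropic correction to $P_{\bf d}$: it carries the entire viscosity deviation.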
\begin{remark}\label{remark1-ev} In this work, lower concentrations of bacteria are considered where the primary contribution to the effective viscosity from interactions is the dipolar component of the stress, $\Sigma^d$, which only depends on the set of bacterium orientations. Thus, the $\dot{\bf x}$ equation will not factor into the final formula; however, ${\bf F}$ is the force associated with a truncated Lennard-Jones-type potential imposing excluded volume constraints. For more information on its definition and why it is needed for global solvability, see \cite{RyaBerHaiKar12}. This quantity still remains in the original coupled ODE system used for simulations to ensure that particles remain a finite distance apart, avoiding an artificial divergence in the fluid velocity ${\bf u} \sim 1/|{\bf x}^i - {\bf x}^j|$ (see section~\ref{compsim}). \end{remark} \section{\bf Conditions imposed to derive an explicit formula for the effective viscosity}\label{assumption} To calculate the effective viscosity we impose three conditions to make the system more amenable to mathematical analysis. \subsection{\bf Separation of variables} In this paper only small concentrations are considered where collisions are not important, yet the flow of each bacterium affects all others. The bacteria are far apart and, since the background flow provides the major contribution to bacterial motion, the distributions of positions and orientations become {\it essentially independent} of one another. This can be justified from the experimental work of {\it Aranson et al.} (e.g., see \cite{Sokolov2,Aranson1}). Henceforth, it is assumed that the positions and orientations are decoupled.
\vspace{.1in} \noindent{\bf Condition (C1):} The density $P({\bf x}, {\bf d})$ can be written as \begin{equation}\label{assump_separ} P({\bf x}, {\bf d}) = P_{{\bf x}}({\bf x})P_{{\bf d}}({\bf d}) \qquad \text{ (separation of variables),} \end{equation} where $ P_{{\bf d}}({\bf d}) = \frac{1}{N}\int_{V_L} P({\bf x}, {\bf d})d {\bf x} $ and $\int_{\mathcal{S}^2} P_{{\bf d}}({\bf d}) dS_{{\bf d}} = 1$. Here $N$ is the number of bacteria, $\text{supp}(P_{{\bf x}}({\bf x}))~\subset~V_L$, where the spatial density $P_{{\bf x}}({\bf x})$ can be found by $P_{{\bf x}}({\bf x}) = \int_{\mathcal{S}^2} P({\bf x},{\bf d}) dS_{{\bf d}}$. \vspace{.1in} This condition is used twice. First, the effective viscosity at low concentration only depends on the orientation (see Remark~\ref{remark1-ev}). Thus, using condition (C1) an explicit equation for the evolution of the orientation distribution can be derived from \eqref{cons2}. Second, ${\bf V}$ formally contains diverging integrals (e.g., $\int \int {\bf F} d{{\bf x}}dS_{{\bf d}}$ since ${\bf F} \sim |{\bf x}|^{-12}$), which will no longer be present in the equation for the orientation distribution $P_{\bf d}({\bf d})$, allowing for further mathematical analysis. It will be observed at the end of this work that the asymptotic expansion for $P_{{\bf d}}({\bf d})$ depends on $P_{{\bf x}}({\bf x})$ through the coefficients, so all the information about spatial patterns is preserved. \subsection{\bf Existence of a steady state $P_{{\bf d}}({\bf d})$} A steady state solution to \eqref{cons2} is defined as follows. \begin{definition} $\hat{P}_{{\bf d}}({\bf d})$ is called a steady state solution to \eqref{cons2} if it solves \begin{equation*} 0 = -\nabla_{\bf d} \cdot \left(< {\bf \Omega} >_{\bf x}\hat{P}_{\bf d}({\bf d})\right). \end{equation*} \end{definition} To compute a time-independent effective viscosity we impose the following condition.
\vspace{.1 in} \noindent{\bf Condition (C2):} There exists a {\it nontrivial} steady state solution to \eqref{cons2}. \vspace{.1 in} First, note that there is no trivial steady state unless $B=0$ in which case we find the uniform orientation distribution ${P}_{{\bf d}}({\bf d}) = \frac{1}{4\pi}$. This can be obtained both in the limit as $B \to 0$ in the asymptotic results derived herein for $P_{{\bf d}}({\bf d})$ and from observing that the trivial steady state would be a constant satisfying the constraint $\int_{\mathcal{S}^2} P_{{\bf d}}({\bf d}) dS_{{\bf d}} = 1$. One still needs to prove the existence of a steady state in the general case $B \neq 0$. The condition (C2) can be formulated as a theorem and its proof may be the topic of a future work. Here we remain focused on the study of the effective viscosity. \subsection{\bf $P_{{\bf x}}({\bf x})$ is constant in the $z$-direction}\label{ass_a3} We assume that $P_{{\bf x}}({\bf x})$ is constant in $z$ for the case of the planar shear background flow under consideration in this work. This is consistent with past numerical observations by {\it Ryan et al.}~\cite{RyaHaiBerZieAra11} and experimental observation in \cite{SokGolFelAra09} since the suspension remains below any critical concentration for three-dimensional collective motion. Also, collective motion even in full 3D experiments and simulations in planar shear flow has been observed to be essentially 2D in the shearing plane \cite{SokGolFelAra09}. Thus, following experimental observation, we assume the same. \vspace{.1in} \noindent{\bf Condition (C3):} The density $P_{{\bf x}}({\bf x})$ is constant in $z$. \vspace{.1in} The condition (C3) essentially follows from the physical setup of the {\it quasi-2D} thin film suspension. 
In Appendix \ref{pxapp} we show that the condition (C3) leads to the following representation formula for the Fourier transform of the spatial distribution $F[P_{{\bf x}}]$: \begin{equation}\label{ass_a3_formula} ( F[P_{{\bf x}}])^2=\delta(k_3)\hat{P}^2_{12} (k_1,k_2). \end{equation} Here ${\bf k}=(k_1,k_2,k_3)$ is the Fourier variable, and $\hat{P}_{12} (k_1,k_2)$ is a smooth function defined in ${\bf k}$-space independent of $k_3$. \section{\bf Derivation of asymptotic expression for $P_{\bf d}$ for small $B$}\label{derivodens} In this section, an expression for the orientation distribution $P_{{\bf d}}({\bf d})$ is derived. Since \eqref{cons2} is a nonlinear integro-differential equation it is challenging, in general, to find an analytical solution. Thus, we look for $P_{{\bf d}}({\bf d})$ by asymptotic expansion in the limit of small non-sphericity ($B \ll 1$). This will allow us to apply analytical techniques and derive an expression, which will provide physical insight into the mechanisms contributing to the decrease in the effective viscosity. Rewrite the equation for the orientation density $P_{{\bf d}}({\bf d})$ \eqref{cons2} as (the argument $t$ is suppressed for simpler notation) \begin{equation}\label{angleeq} \partial_t P_{{\bf d}}+\nabla_{{\bf d}}\cdot \left[(\boldsymbol{\omega}^{BG}+B{\bf E}^{BG})P_{{\bf d}}\right]+ \frac{1}{N|V_L|}\int \nabla_{{\bf d}}\cdot (\hat{\boldsymbol{\Omega}} P({\bf x},{\bf d})) d{\bf x}=0, \end{equation} where \begin{equation} \hat{\boldsymbol \Omega }({\bf x},{\bf d}) := \frac{1}{|V_L|} \int_{\mathcal{S}^2} \langle \boldsymbol{\omega} + B{\bf E} , P_{\bf x} ({\bf x}^\prime) \rangle_{{\bf x}'} P_{\bf d}({\bf d}^\prime) d {\bf d}^\prime. \label{deqn} \end{equation} Herein $\hat{\boldsymbol{\Omega}}$ will denote the component of the orientational flux ${\boldsymbol{\Omega}}$ due to interactions. Observe that the $\boldsymbol{\omega}$ and ${\bf E}$ are functions of ${\bf x}-{\bf x}'$, ${\bf d}$, and ${\bf d}'$. 
Using Condition (C1) defined in \eqref{assump_separ}, we obtain a closed-form equation for a steady state $P_{{\bf d}}({\bf d})$ (provided that $P_{{\bf x}}$ is given): \begin{eqnarray} 0 &=& \nabla_{{\bf d}}\cdot \left[(\boldsymbol{\omega}^{BG}+B{\bf E}^{BG})P_{{\bf d}}({\bf d})\right] \nonumber\\ &&\hspace{20 pt}+\frac{1}{N|V_L|}\int_{V_L} \nabla_{\bf d} \cdot \left(\hat{\bf \Omega}({\bf x},{\bf d}) P_{{\bf x}}({\bf x})P_{{\bf d}}({\bf d}) \right) d{\bf x}.\label{sstate} \end{eqnarray} The first term in \eqref{sstate} is the contribution due to the background planar shear flow: \begin{align} \nabla_{{\bf d}}\cdot \left[(\boldsymbol{\omega}^{BG}({\bf d})+B{\bf E}^{BG}({\bf d}))P_{{\bf d}}({\bf d})\right] &= -\frac{3\gamma B}{2}\sin^2\beta\sin 2\alpha P_{{\bf d}}({\bf d})\nonumber\\&\quad +\frac{\gamma}{2}(1+B\cos 2\alpha)\{\partial_{\alpha}P_{{\bf d}}({\bf d})\}\label{eqn:bgflow_0}\\ &\quad +\frac{\gamma B}{4}\sin 2\alpha\sin 2\beta \{\partial_{\beta}P_{{\bf d}}({\bf d})\}.\nonumber \end{align} The second term in \eqref{sstate} is the contribution of hydrodynamic interactions between bacteria. Notice the convolution form of the nonlocal terms in the spatial variable. In the next section, the Fourier transform will be utilized to compute quantities necessary to derive the formula for the effective viscosity. Specifically, using tools such as Parseval's theorem, one can take the spatial integrals and consider them in Fourier space where they will prove easier to analyze. After using the separation of variables \eqref{assump_separ}, the density will be expressed in terms of the Fourier frequencies ${\bf k}$. The main goal for the remainder of this section is to write the system in a convenient form for using the Fourier transform. This idea follows naturally from the aforementioned observation that all the interaction terms take the form of a convolution.
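The convolution structure exploited here rests on the identities $F[f\star g]=F[f]F[g]$ and Parseval's theorem; a discrete sanity check with the DFT (periodic grid, so only illustrative of the continuum identities):

```python
import numpy as np

# Discrete check of the two Fourier identities used in the derivation:
# the convolution theorem and Parseval, on a periodic 1-D grid via the DFT.
rng = np.random.default_rng(0)
n = 256
f = rng.normal(size=n)
g = rng.normal(size=n)

# circular convolution computed two ways
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
direct = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(n)])

assert np.allclose(conv, direct)          # F[f * g] = F[f] F[g]
assert np.isclose(np.sum(f * f),          # Parseval (numpy DFT normalization)
                  np.sum(np.abs(np.fft.fft(f))**2) / n)
print("convolution theorem and Parseval verified")
```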
Introduce the Fourier transform $C({\bf k}):=F[P_{{\bf x}}]({\bf k})$: \begin{equation} P_{{\bf x}}({\bf x})=\frac{1}{(2\pi)^3}\int e^{i{\bf k}\cdot {\bf x}}C({\bf k})d{\bf k}. \end{equation} Define ${\bf H}({\bf x} - {\bf x}',{\bf d},{\bf d}') := \boldsymbol{\omega}({\bf x}-{\bf x}', {\bf d}, {\bf d}') + B{\bf E}({\bf x}-{\bf x}', {\bf d}, {\bf d}')$; then the following equalities hold \begin{equation} \langle {\bf H} \star P_{{\bf x}}, P_{{\bf x}}\rangle_{{\bf x}}=\langle F[{\bf H} \star P_{{\bf x}}], F[P_{{\bf x}}]\rangle_{{\bf k}}=\langle F[{\bf H}],(F[P_{{\bf x}}])^2\rangle_{{\bf k}}, \end{equation} where $\star$ and $F$ stand for convolution and Fourier transform, respectively. The first equality is Parseval's identity, and the second follows from the fact that the Fourier transform of a convolution is the product of the Fourier transforms. Thus, one can rewrite equation \eqref{sstate} in the following form \begin{eqnarray} && \nabla_{{\bf d}}\cdot \left[(\boldsymbol{\omega}^{BG}+B{\bf E}^{BG})P_{{\bf d}}({\bf d})\right]\nonumber\\&&\hspace{30 pt}+\int_{\mathcal{S}^2}\nabla_{{\bf d}}\cdot \left\{P_{{\bf d}}({\bf d})P_{{\bf d}}({\bf d}')\langle F[{\bf H}],(F[P_{{\bf x}}])^2\rangle_{{\bf k}}\right\}dS_{{\bf d}'}=0.\label{lioumod} \end{eqnarray} In order to compute $F[{\bf H}]$ one must first understand how the Fourier transform acts on the fluid velocity ${\bf u}$ and its derivatives. \subsection{\bf Evaluation of Fourier transforms}\label{sovft} In order to analyze \eqref{lioumod}, an analytical expression for the Fourier transform $F[{\bf H}] = F[\boldsymbol{\omega}] + BF[{\bf E}]$ is needed. Both terms depend on the fluid velocity ${\bf u}$ defined by \eqref{stokeseqnsingle}. Recall the dipolar stress \begin{equation}\label{def_of_sigma} \Sigma({\bf x}, {\bf d}) ={\bf D}({\bf d})\delta({{\bf x}})= U_0({\bf d} {\bf d} - I/3)\delta({\bf x}).
\end{equation} Then the Stokes equation in \eqref{stokeseqnsingle} can be written as \begin{equation}\label{Stokes} -\eta_0\Delta_{{\bf x}}{\bf u}+\nabla_{{\bf x}}p=\nabla_{{\bf x}}\cdot \Sigma({\bf x},{\bf d}), \;\;\nabla_{{\bf x}}\cdot {\bf u}=0. \end{equation} Denote the Fourier transform of a function $f({\bf x})$ as $$\tilde{f}({\bf k})=F\left[f\right]({\bf k})=\int e^{-i({\bf k}\cdot {\bf x})}f({\bf x})d{\bf x},$$ and compute the Fourier transforms of ${\bf u}$ and the symmetric gradient $D_{{\bf x}}({\bf u})$. \begin{proposition}\label{propu} Let ${\bf u}$ be a solution of \eqref{stokeseqnsingle} and let $\Sigma$ be defined by \eqref{def_of_sigma}. Then \begin{eqnarray} &(i)& \tilde{\Sigma}({\bf d}') =U_0\left({\bf d}'{\bf d}'^{*}-I/3\right),\nonumber\\ &(ii)& \tilde{\bf u}({\bf k})=\frac{i}{\eta_0|{\bf k}|}\left(I-\frac{{\bf k}{\bf k}^{*}}{|{\bf k}|^2}\right)\tilde{\Sigma}({\bf k})\frac{{\bf k}}{|{\bf k}|},\label{fourier_u}\\ &(iii)& F\left[D_{{\bf x}}({\bf u})\right]=-\frac{1}{2\eta_0 |{\bf k}|^4}\left(|{\bf k}|^2\tilde{\Sigma} {\bf k}{\bf k}^{*}-2{\bf k}{\bf k}^{*}\tilde{\Sigma}{\bf k}{\bf k}^{*}+|{\bf k}|^2{\bf k}{\bf k}^{*}\tilde{\Sigma}\right).\label{fourier_du} \end{eqnarray} Here $*$ denotes the transpose. \end{proposition} \begin{proof} Part $(i)$ follows from the fact that the Fourier transform of the $\delta$-function is $1$. We split the proof of $(ii)$ into two steps: first, we find the Fourier transform of the pressure $p$; then, using the first equation in \eqref{stokeseqnsingle}, we find $\tilde{\bf u}$.\\ \noindent{\it Step 1: Evaluation of $\tilde{p}=F[p]$}.
By taking the divergence of \eqref{Stokes} in ${\bf x}$ we obtain \begin{equation} \Delta_{{\bf x}}p=\nabla_{{\bf x}}\cdot(\nabla_{{\bf x}}\cdot \Sigma).\label{divStokes} \end{equation} Observe that \begin{equation*} F\left[\Delta_{{\bf x}}p\right]=-|{\bf k}|^2\tilde{p}({\bf k}), \;\; F\left[\nabla_{{\bf x}}\cdot(\nabla_{{\bf x}}\cdot\Sigma)\right] =\int \Sigma:\nabla_{{\bf x}}^2e^{-i{\bf k}\cdot {\bf x}}d{\bf x}=-\tilde{\Sigma}({\bf k}):{\bf k}{\bf k}^{*}. \end{equation*} Substituting these formulas into \eqref{divStokes} we obtain $-|{\bf k}|^2\tilde{p}({\bf k})=-\tilde{\Sigma}({\bf k}):{\bf k}{\bf k}^{*}$ and, thus, find an expression for the Fourier transform of the pressure $p$: \begin{equation}\label{eqn:tildep} \tilde{p}({\bf k})=\frac{1}{|{\bf k}|^2}\tilde{\Sigma}({\bf k}):{\bf k}{\bf k}^{*}. \end{equation} \noindent{\it Step 2: Evaluation of $\tilde{\bf u}=F[{\bf u}]$}. Return to the Stokes equation \eqref{Stokes} and observe that \begin{eqnarray*} &\eta_0 F\left[\Delta_{{\bf x}}{\bf u}\right]=-\eta_0 |{\bf k}|^2\tilde{\bf u}({\bf k}), \quad F\left[\nabla_{{\bf x}}p\right]=i{\bf k}\tilde{p}({\bf k}),& \\ &F\left[\nabla_{{\bf x}}\cdot \Sigma\right] =i\tilde{\Sigma}({\bf k}){\bf k}.& \end{eqnarray*} Using these relations, one finds that $\eta_0|{\bf k}|^2\tilde{\bf u}({\bf k})+i{\bf k}\tilde{p}({\bf k})=i\tilde{\Sigma}({\bf k}){\bf k}$. After rearranging the terms and using \eqref{eqn:tildep} we complete the proof of $(ii)$. To prove $(iii)$ we first observe that $F\left[D_{{\bf x}}({\bf u})\right]=\frac{i}{2} \left(\tilde{\bf u}{\bf k}^{*}+{\bf k} \tilde{\bf u}^{*}\right)$.
Plug the Fourier transform of ${\bf u}$ from $(ii)$ into this expression to find \begin{eqnarray*} F\left[D_{{\bf x}}({\bf u}) \right]&=&\frac{i}{2}\left(\tilde{\bf u}{\bf k}^{*}+{\bf k} \tilde{\bf u}^{*}\right)\\&=&-\frac{1}{2\eta_0 |{\bf k}|^2}\left((I-\frac{{\bf k}{\bf k}^{*}}{|{\bf k}|^2})\tilde{\Sigma}({\bf k}){\bf k}{\bf k}^{*}+{\bf k}{\bf k}^{*}\tilde{\Sigma}({\bf k})(I-\frac{{\bf k}{\bf k}^{*}}{|{\bf k}|^2})\right). \end{eqnarray*} Use the fact that $\tilde{\Sigma}$ is symmetric ($\tilde{\Sigma}=\tilde{\Sigma}^{*}$) to complete the proof of $(iii)$. \end{proof} \begin{remark} It is easily seen that $F\left[D_{{\bf x}}({\bf u})\right]$ does not depend on $|{\bf k}|$, but only on the direction ${\bf k}/|{\bf k}|$, since $F\left[D_{{\bf x}}({\bf u})\right]$ can be rewritten as \begin{equation*} F\left[D_{{\bf x}}({\bf u})\right]=-\frac{1}{2\eta_0}\left( \tilde{\Sigma} \frac{{\bf k}}{|{\bf k}|}\frac{{\bf k}^{*}}{|{\bf k}|} -2\frac{{\bf k}}{|{\bf k}|} \frac{{\bf k}^{*}}{|{\bf k}|} \tilde{\Sigma}\frac{{\bf k}}{|{\bf k}|} \frac{{\bf k}^{*}}{|{\bf k}|} +\frac{{\bf k}}{|{\bf k}|} \frac{{\bf k}^{*}}{|{\bf k}|}\tilde{\Sigma}\right). \end{equation*} \end{remark} This subsection is concluded by summarizing the analytical expressions for the two main components of $F[{\bf H}]=F\left[\boldsymbol{\omega}\right]+BF\left[{\bf E}\right]$: \begin{eqnarray} \hspace{-.5in}&&F\left[{\bf E}\right]=-{\bf d}\times \left({\bf d} \times F\left[D_{{\bf x}}({\bf u})\right]{\bf d}\right) =F\left[D_{{\bf x}}({\bf u})\right]{\bf d}-{\bf d}{\bf d}^{*}F\left[D_{{\bf x}}({\bf u})\right]{\bf d}\label{FourierE}\\ \hspace{-.5in}&&F\left[\boldsymbol{\omega}\right] = -\frac{1}{2}{\bf d} \times F\left[\nabla_{{\bf x}} \times {\bf u} \right] = -\frac{1}{2}{\bf d} \times \left[-i{\bf k} \times F[{\bf u}]\right],\label{Fourierc} \end{eqnarray} where $F[{\bf u}]$ and $F[D_{{\bf x}}({\bf u})]$ are given by Proposition \ref{propu}.
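Proposition \ref{propu} can be spot-checked numerically. The sketch below, with arbitrary sample values for $\eta_0$, $U_0$, ${\bf k}$, and ${\bf d}'$ (not taken from the model), verifies that formula (ii) satisfies the Fourier-transformed Stokes system together with incompressibility $\,{\bf k}\cdot\tilde{\bf u}=0$, and that formula (iii) agrees with the identity $F[D_{\bf x}({\bf u})]=\frac{i}{2}(\tilde{\bf u}{\bf k}^{*}+{\bf k}\tilde{\bf u}^{*})$.

```python
# Numerical check of Proposition (i)-(iii); * denotes transpose, as in the text.
import random

def outer(a, b):
    return [[x * y for y in b] for x in a]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(3)) for j in range(3)]
            for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(1)
eta0, U0 = 2.0, -1.5                       # sample viscosity and dipole strength
k = [0.7, -1.3, 0.4]                       # sample wave vector
d = [random.gauss(0.0, 1.0) for _ in range(3)]
nd = dot(d, d) ** 0.5
d = [x / nd for x in d]                    # unit orientation vector d'

I3 = [[float(i == j) for j in range(3)] for i in range(3)]
Sig = [[U0 * (d[i] * d[j] - I3[i][j] / 3.0) for j in range(3)] for i in range(3)]
k2 = dot(k, k)
Sk = matvec(Sig, k)
p_t = dot(k, Sk) / k2                      # tilde p from the pressure formula

# formula (ii), rewritten as u~ = (i/(eta0 |k|^2)) (Sigma~ k - k (k.Sigma~ k)/|k|^2)
u_t = [(1j / (eta0 * k2)) * (Sk[i] - k[i] * dot(k, Sk) / k2) for i in range(3)]

# transformed Stokes system: eta0 |k|^2 u~ + i k p~ = i Sigma~ k,  and  k . u~ = 0
stokes_err = max(abs(eta0 * k2 * u_t[i] + 1j * k[i] * p_t - 1j * Sk[i])
                 for i in range(3))
incompress_err = abs(dot(k, u_t))

# formula (iii) against F[D_x(u)] = (i/2)(u~ k* + k u~*)
kk = outer(k, k)
T1, T2, T3 = matmul(Sig, kk), matmul(kk, matmul(Sig, kk)), matmul(kk, Sig)
F_iii = [[-(k2 * T1[i][j] - 2.0 * T2[i][j] + k2 * T3[i][j]) / (2.0 * eta0 * k2 ** 2)
          for j in range(3)] for i in range(3)]
F_sym = [[0.5j * (u_t[i] * k[j] + k[i] * u_t[j]) for j in range(3)] for i in range(3)]
grad_err = max(abs(F_iii[i][j] - F_sym[i][j]) for i in range(3) for j in range(3))
```

All three residuals vanish to machine precision, in agreement with the proposition.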
\subsection{\bf The form of asymptotic expansion in $B$} Recall the steady-state Liouville equation \eqref{lioumod} with the background terms substituted in: \begin{eqnarray} 0&=&-\frac{3\gamma B}{2}\sin^2\beta\sin 2\alpha P_{{\bf d}}({\bf d})+\frac{\gamma}{2}(1+B\cos 2\alpha)\partial_{\alpha}P_{{\bf d}}({\bf d})\nonumber\\&&+ \frac{\gamma B}{4}\sin 2\alpha\sin 2\beta \partial_{\beta}P_{{\bf d}}({\bf d})\nonumber\\ &&+\frac{1}{N|V_L|}\int _{\mathcal{S}^2}\nabla_{{\bf d}}\cdot \left\{P_{{\bf d}}({\bf d})P_{{\bf d}}({\bf d}')\langle F[{\bf H}],(F[P_{{\bf x}}])^2\rangle_{{\bf k}}\right\}dS_{{\bf d}'}.\label{ssliou} \end{eqnarray} We consider the asymptotic expansion in the Bretherton constant, $B\ll 1$, for the orientation distribution, $P_{{\bf d}}({\bf d})$, up to the second order: \begin{equation}\label{asympexp} P_{{\bf d}}(\alpha, \beta) = P_{{\bf d}}^{(0)}(\alpha, \beta) + P_{{\bf d}}^{(1)}(\alpha, \beta)B + P_{{\bf d}}^{(2)}(\alpha, \beta)B^2 + O(B^3). \end{equation} Substituting \eqref{asympexp} into \eqref{ssliou}, we get different equations at different orders of $B$. It is straightforward that $P_{{\bf d}}^{(0)}(\alpha,\beta) = \frac{1}{4\pi}$ (the surface area of the unit sphere is $4\pi$) solves the equation at order $O(1)$. We consider the asymptotic expansion about the uniform distribution because it has been extensively documented in theory and experiment that as the bacterium bodies become spherical ($B \to 0$), the distribution of orientations becomes uniform \cite{RyaHaiBerZieAra11,Haines3}. In the next two subsections, the linear order term $P_{{\bf d}}^{(1)}(\alpha, \beta)$ and the quadratic order term $P_{{\bf d}}^{(2)}(\alpha, \beta)$ are computed. \subsection{\bf Contribution at $O(B)$} First, notice that $\nabla_{\bf d}\cdot \boldsymbol{\omega} ({\bf x}-{\bf x}',{\bf d},{\bf d}') =0$.
Indeed, this follows from \eqref{div2} since $\boldsymbol{\omega}\cdot {\bf d}=0$ and the classical divergence of $\boldsymbol{\omega}$ with respect to ${\bf d}$ is zero (note that $\boldsymbol{\omega}={\bf d}\times A$, where $A=\nabla_{{\bf x}}\times {\bf u}$ does not depend on ${\bf d}$). This observation implies $\nabla_{{\bf d}} \cdot F[{\bf H}] =B\nabla_{{\bf d}} \cdot F[{\bf E}]$. Using this equality and expanding the divergence under the integral sign we rewrite \eqref{ssliou} as follows: \begin{align} 0 &= \frac{\gamma}{2} \left[B\sin(2\alpha)\sin\beta\left(\cos\beta\partial_{\beta}P_{{\bf d}} - 3\sin\beta P_{{\bf d}}\right) + \left(1 + B\cos(2\alpha)\right)\partial_\alpha P_{{\bf d}} \right] \nonumber\\ &\qquad + \frac{B}{N|V_L|} \int_{\mathcal{S}^2} P_{{\bf d}}({\bf d}')P_{{\bf d}}({\bf d})\langle\nabla_{{\bf d}} \cdot (F[{\bf E}({\bf d})])(F[P_{{\bf x}}])^2\rangle_{{\bf k}}dS_{{\bf d}'}\label{divdterm}\\ &\qquad + \frac{1}{N|V_L|} \int_{\mathcal{S}^2} \nabla_{{\bf d}}[P_{{\bf d}}({\bf d})]P_{{\bf d}}({\bf d}') \langle F[{\bf H}({\bf d})](F[P_{{\bf x}}])^2\rangle_{{\bf k}} dS_{{\bf d}'}.\nonumber \end{align} The first integral at $O(B)$ is \begin{equation}\label{OBint} \frac{1}{16\pi^2 N|V_L|} \int_{\mathcal{S}^2} \langle\nabla_{{\bf d}} \cdot (F[{\bf E}({\bf d})])(F[P_{{\bf x}}])^2\rangle_{{\bf k}}dS_{{\bf d}'}. \end{equation} By switching the order of integration and noting that $\int_{\mathcal{S}^2} \tilde{\Sigma} dS_{{\bf d}'} = \int_{\mathcal{S}^2} U_0[{\bf d}'({\bf d}')^* - I/3]dS_{{\bf d}'} = 0$, we obtain from \eqref{FourierE} and \eqref{fourier_du} that \eqref{OBint} is zero.
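The identity $\int_{\mathcal{S}^2}({\bf d}'({\bf d}')^{*}-I/3)\,dS_{{\bf d}'}=0$ invoked above is easy to confirm by direct quadrature; the following sketch (illustration only) evaluates the integral entrywise with a midpoint rule in the spherical angles $(\alpha,\beta)$.

```python
# Entrywise quadrature of the trace-free dipole tensor over the unit sphere;
# the surface measure is dS = sin(beta) dbeta dalpha.
import math

n_a, n_b = 200, 200
da, db = 2 * math.pi / n_a, math.pi / n_b
M = [[0.0] * 3 for _ in range(3)]
for ia in range(n_a):
    alpha = (ia + 0.5) * da
    for ib in range(n_b):
        beta = (ib + 0.5) * db
        d = (math.sin(beta) * math.cos(alpha),
             math.sin(beta) * math.sin(alpha),
             math.cos(beta))
        w = math.sin(beta) * da * db
        for i in range(3):
            for j in range(3):
                M[i][j] += (d[i] * d[j] - (1.0 if i == j else 0.0) / 3.0) * w
max_entry = max(abs(M[i][j]) for i in range(3) for j in range(3))
```

Every entry is zero up to quadrature error: the off-diagonal entries vanish by symmetry, and the diagonal ones because $\int_{\mathcal{S}^2} d_i^2\, dS = 4\pi/3$.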
Since both $\nabla_{{\bf d}} [P_{\bf d}({\bf d})]$ and $B{\bf E}$ are of order $O(B)$, the second integral in \eqref{divdterm} at $O(B)$ is $ \frac{1}{4\pi N|V_L|}\int_{\mathcal{S}^2}\nabla_{\bf d}P_{\bf d}^{(1)}({\bf d}) \langle F[\boldsymbol{\omega}](F[P_{{\bf x}}])^2\rangle_{{\bf k}} dS_{{\bf d}'} $, which is also zero due to $\int_{\mathcal{S}^2} U_0({\bf d}'({\bf d}')^{*}- I/3) dS_{{\bf d}'} = 0$. Thus, the integral terms do not contribute to equation \eqref{divdterm} at order $O(B)$, and it takes the following form \begin{equation} 0 = \frac{\gamma}{2} \left[-3P_{{\bf d}}^{(0)}\sin(2\alpha)\sin^2\beta + \partial_\alpha P_{{\bf d}}^{(1)}\right]. \label{simplifiedOB} \end{equation} After substituting $P_{{\bf d}}^{(0)} = \frac{1}{4\pi}$ and solving \eqref{simplifiedOB}, one finds that \begin{equation}\label{pd_at_order_1} P_{{\bf d}}^{(1)}(\alpha, \beta) = -\frac{3}{8\pi} \sin^2\beta \cos(2\alpha). \end{equation} {\it Since the integral terms are zero at order $O(B)$, the contribution due to interactions does not appear at this order; the only contribution is due to the background flow.} It will be shown later that up to $O(B)$ the contribution to the effective viscosity by the bacteria is zero. This sheds light on the fact that interactions are {\it necessary} to see the decrease in the effective viscosity and that the background flow alone is insufficient. Note that even though this is the contribution due to the background flow, the strain rate $\gamma$ is not present. Therefore, the magnitude of the flow will not have an effect on the longtime limit of the effective viscosity at $O(B)$. However, once the terms at the next order are computed, one observes a competition between the background flow and the flow due to inter-bacterial interactions. In this case the magnitude of the shear $\gamma$ becomes important.
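As a quick consistency check of \eqref{pd_at_order_1}, the residual of \eqref{simplifiedOB} can be evaluated pointwise, with the analytic $\alpha$-derivative of $P^{(1)}_{\bf d}$ coded directly; $\gamma$ is an arbitrary sample value, since it factors out of the equation.

```python
# Residual of (gamma/2)[-3 P0 sin(2a) sin^2(b) + dP1/da] at random angles.
import math, random

gamma = 0.7                                  # arbitrary sample strain rate
P0 = 1.0 / (4.0 * math.pi)                   # uniform leading-order distribution

def dP1_dalpha(alpha, beta):
    # P1 = -(3/(8 pi)) sin^2(beta) cos(2 alpha), differentiated in alpha
    return (3.0 / (4.0 * math.pi)) * math.sin(beta) ** 2 * math.sin(2 * alpha)

random.seed(0)
residual = max(
    abs(0.5 * gamma * (-3.0 * P0 * math.sin(2 * a) * math.sin(b) ** 2
                       + dP1_dalpha(a, b)))
    for a, b in [(random.uniform(0, 2 * math.pi), random.uniform(0, math.pi))
                 for _ in range(100)]
)
```

The residual vanishes identically, since the two terms cancel at every angle.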
\subsection{\bf Contribution at $O(B^2)$}\label{contr_at_B2} Consider terms in \eqref{divdterm} of order $O(B^2)$: \begin{align} 0 =& \frac{\gamma}{2} \sin(2\alpha)\sin\beta \cos\beta \partial_{\beta} P_{{\bf d}}^{(1)}({\bf d}) - \frac{3\gamma}{2}\sin(2\alpha)\sin^2(\beta)P_{{\bf d}}^{(1)}({\bf d}) \nonumber \\\quad&+ \frac{\gamma}{2}\partial_{\alpha} P_{{\bf d}}^{(2)}({\bf d}) + \frac{\gamma}{2}\cos(2\alpha)\partial_{\alpha}P_{{\bf d}}^{(1)}({\bf d}) \nonumber\\ \quad &+ \frac{1}{4\pi N|V_L|} \nabla_{{\bf d}} \cdot \int_{\mathcal{S}^2} \langle F[{\bf E}]F[P_{{\bf x}}]^2 \rangle_{{\bf k}} P_{{\bf d}}^{(1)}({\bf d}')dS_{{\bf d}'}\nonumber \\ \quad&+ \frac{1}{4\pi N|V_L|} \int_{\mathcal{S}^2} \nabla_{{\bf d}}[P^{(2)}_{{\bf d}}({\bf d})] \langle F[\boldsymbol{\omega}](F[P_{{\bf x}}])^2\rangle_{{\bf k}} dS_{{\bf d}'}\label{OB2-int}\\ \nonumber \quad &+ \frac{1}{4\pi N|V_L|} \int_{\mathcal{S}^2} \nabla_{{\bf d}}[P^{(1)}_{{\bf d}}({\bf d})] \langle F[{\bf E}](F[P_{{\bf x}}])^2\rangle_{{\bf k}} dS_{{\bf d}'} \nonumber \\\quad&+ \frac{1}{N|V_L|} \int_{\mathcal{S}^2} \nabla_{{\bf d}}[P^{(1)}_{{\bf d}}({\bf d})]P^{(1)}_{{\bf d}}({\bf d}') \langle F[\boldsymbol{\omega}](F[P_{{\bf x}}])^2\rangle_{{\bf k}} dS_{{\bf d}'}.\nonumber \end{align} Denote the four integral terms in equation \eqref{OB2-int} by $\text{I}_1$, $\text{I}_2$, $\text{I}_3$ and $\text{I}_4$, respectively. 
The following equalities hold: \begin{eqnarray} \text{I}_1&=&\frac{ U_0}{40\pi \eta_0 N|V_L|}\left(A\sin^2\beta\cos(2\alpha) + C\sin^2\beta\sin(2\alpha)\right),\nonumber\\ \text{I}_2&=&\text{I}_3\;=\;0,\nonumber\\ \text{I}_4&=&\frac{3U_0}{10\pi\eta_0 N|V_L|}D\sin(2\alpha)\sin^2\beta, \nonumber \end{eqnarray} where the constants $A$, $C$, and $D$ are defined as follows \begin{eqnarray} &A :=\frac{1}{2}\int \sin^2(2\theta)\hat{P}^2_{12}k^2dk d\theta, ~C := -\frac{1}{2}\int \sin(4\theta)\hat{P}^2_{12} k^2dk d\theta,&\nonumber \\&& \label{coefs_ACD} \\ &D := \int \cos(\theta)\sin(\theta) \hat{P}^2_{12} k^2dk d\theta.&\nonumber \end{eqnarray} Here $\hat{P}_{12}$ is from \eqref{ass_a3_formula}, and we use spherical coordinates in Fourier space $(k=|{\bf k}|,\theta, \phi)$. The calculations of the $\text{I}_i$ can be found in Appendix \ref{KinE-OB2}. After substitution of the expressions for each $\text{I}_i$, we get the following equation for $P_{\bf d}^{(2)}({\bf d})$: \begin{align} 0 &= \frac{\gamma}{2} \sin(2\alpha)\sin\beta \cos\beta \partial_{\beta} P_{\bf d}^{(1)}({\bf d}) - \frac{3\gamma}{2}\sin(2\alpha)\sin^2(\beta)P_{\bf d}^{(1)}({\bf d})\nonumber \\ &\qquad+ \frac{\gamma}{2}\partial_{\alpha} P_{\bf d}^{(2)}({\bf d}) + \frac{\gamma}{2}\cos(2\alpha)\partial_{\alpha}P_{\bf d}^{(1)}({\bf d})\label{eqn:frown}\\ & \qquad +\frac{ U_0}{40\pi \eta_0 N|V_L|}\left(A\sin^2\beta\cos(2\alpha) + C\sin^2\beta\sin(2\alpha)\right)\nonumber\\ & \qquad+ \frac{3U_0}{10\pi\eta_0 N|V_L|}D\sin^2\beta\sin(2\alpha).\nonumber \end{align} Based on the form of equation \eqref{eqn:frown}, the following representation is used to find $P_{\bf d}^{(2)}({\bf d})$: \begin{equation}\label{ansatzP2} P_{\bf d}^{(2)}({\bf d}) = C_1\sin^4\beta\cos(4\alpha) + C_2\sin^2\beta\cos(2\alpha) + C_3\sin^2\beta\sin(2\alpha).
\end{equation} In order to find each $C_i$, substitute \eqref{ansatzP2} into \eqref{eqn:frown}: \begin{align*} 0 &= \left[\frac{3\gamma}{8\pi} - 2\gamma C_1\right]\sin(4\alpha)\sin^4\beta + \left[ \gamma C_3 + \frac{ U_0A}{40\pi \eta_0 N|V_L|} \right]\sin^2\beta\cos(2\alpha)\\ & \qquad + \left[-\gamma C_2+ \frac{ U_0C}{40\pi \eta_0 N|V_L|}+\frac{3U_0D}{10\pi\eta_0 N|V_L|} \right] \sin^2\beta\sin(2\alpha). \end{align*} Since the factors are linearly independent, each coefficient is zero and, thus, we find the $C_i$'s: \begin{eqnarray*} &C_1 = \frac{3}{16\pi},\;\;\;C_2 = -\frac{U_0(C + 12D)}{40\gamma\pi \eta_0 N|V_L|},\;\;\;C_3 = -\frac{ U_0A}{40\gamma \pi \eta_0 N|V_L|}.& \end{eqnarray*} Using these coefficients one obtains an explicit formula for the orientation distribution up to $O(B^3)$: \begin{align} P_{\bf d}(\alpha,\beta) &= \frac{1}{4\pi} - \frac{3}{8\pi} \sin^2\beta \cos(2\alpha)B+ \biggl[\frac{3}{16\pi}\sin^4\beta\cos(4\alpha)\nonumber \\ &\qquad -U_0\frac{C + 12D}{40\gamma\pi \eta_0 N|V_L|}\sin^2\beta\cos(2\alpha) \label{formpd} \\ & \qquad -\frac{ U_0A}{40\gamma \pi \eta_0 N|V_L|} \sin^2\beta\sin(2\alpha)\biggr]B^2 + O(B^3). \nonumber \end{align} Formula \eqref{formpd} is the main result of Section \ref{derivodens}. Since $A$, $C$, and $D$ contain $\hat{P}_{12}$, all the spatial information is embedded in these coefficients. In particular, we find that the lowest-order (in $B$) contribution of hydrodynamic interactions to $P_{\bf d}({\bf d})$ occurs at $O(B^2)$. In the following section, the contribution of hydrodynamic interactions to the effective viscosity is computed, as well as the change in the effective normal stress coefficients. The combination of these two quantities describes the total effect of hydrodynamic interactions on the rheological behavior of the bacterial suspension.
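One structural property of \eqref{formpd} that is easy to verify numerically is normalization: each angular correction term integrates to zero over the unit sphere, so $\int_{\mathcal{S}^2}P_{\bf d}\,dS_{\bf d}=1$ at every order in $B$. The sketch below uses arbitrary sample values for $B$ and for the coefficients of the $O(B^2)$ harmonics (their values do not affect the normalization).

```python
# Midpoint quadrature of P_d over the sphere; dS = sin(beta) dbeta dalpha.
import math

def integrate_sphere(f, n=200):
    da, db = 2 * math.pi / n, math.pi / n
    total = 0.0
    for ia in range(n):
        alpha = (ia + 0.5) * da
        for ib in range(n):
            beta = (ib + 0.5) * db
            total += f(alpha, beta) * math.sin(beta) * da * db
    return total

# sample values: c1 as derived, c2 and c3 arbitrary placeholders
B, c1, c2, c3 = 0.3, 3 / (16 * math.pi), -0.05, -0.02

def P_d(alpha, beta):
    return (1 / (4 * math.pi)
            - (3 / (8 * math.pi)) * math.sin(beta) ** 2 * math.cos(2 * alpha) * B
            + (c1 * math.sin(beta) ** 4 * math.cos(4 * alpha)
               + c2 * math.sin(beta) ** 2 * math.cos(2 * alpha)
               + c3 * math.sin(beta) ** 2 * math.sin(2 * alpha)) * B ** 2)

norm = integrate_sphere(P_d)
```

The integral equals $1$ up to quadrature error, since the $\alpha$-integrals of $\cos 2\alpha$, $\sin 2\alpha$, and $\cos 4\alpha$ over a full period vanish.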
\section{\bf Explicit formula for the effective viscosity}\label{evform} Using the expression for the orientation distribution $P_{\bf d}({\bf d})$ defined in \eqref{formpd} and the formula for the effective viscosity for dipoles in a suspension \eqref{evsep}, we compute the contribution to the effective viscosity due to interactions: \begin{equation}\label{eqn:evformula} \eta^{\text{int}}:=\frac{\eta-\eta_0}{\eta_0} = -\frac{U_0^2B^2\rho^2 \hat{A}}{75\gamma^2 \pi \eta_0}< 0, \end{equation} where $\hat{A} = \frac{1}{N^2}A \sim O (1)$ and the equality holds up to order $O(B^3)$. The quantity $\eta^{\text{int}}$ behaves like $\rho^2$ in concentration (cf. \cite{BatGre72}, where an expansion of the effective viscosity to second order in concentration, corresponding to pairwise interactions, is derived for passive spheres). As an additional check of consistency, consider the dimensions of the final quantity. The dipole moment $[U_0] = \frac{\text{kg} \cdot \text{m}^2}{\text{s}^2}$, both the Bretherton constant $B$ and $\hat{A}$ are dimensionless, the concentration/number density $[\rho] = \frac{1}{\text{m}^3}$, the ambient viscosity $[\eta_0] = \frac{\text{kg}}{\text{m}\cdot \text{s}}$, and the strain rate $[\gamma] = \frac{1}{\text{s}}$, resulting in $\eta^{\text{int}}$ being dimensionless. In addition, the orientation distribution $P_{\bf d}({\bf d})$ from \eqref{formpd} can be used to compute the effective first and second dipolar normal stress coefficients $N_{12} = \frac{\Sigma^d_{11} - \Sigma^d_{22}}{\gamma^2}$ and $N_{23} = \frac{\Sigma^d_{22} - \Sigma^d_{33}}{\gamma^2}$ to investigate the effect of hydrodynamic interactions. The main advantage of the mathematical model is that the computation of the effective normal stress coefficients is straightforward, in contrast to experiment, where their measurement can be quite complicated \cite{FriHey88}. These coefficients can provide important information about the suspension.
For example, the ratio of the first normal stress to the viscosity determines the effective relaxation time \cite{FriHey88}. Also, phenomena such as extrudate swelling \cite{AbdHasBir74} and secondary flow \cite{RamLei08} are important in many technological applications. A simple calculation shows that \begin{eqnarray} N_{12} &=& \frac{\Sigma^d_{11} - \Sigma^d_{22}}{\gamma^2} = \frac{U_0\rho}{\gamma^2}\left[-\frac{2}{5} - \frac{2U_0\rho(C + 12D)}{75\gamma\pi\eta_0}B^2\right]\label{eqn:normstress1-1}\\ N_{23} &=& \frac{\Sigma^d_{22} - \Sigma^d_{33}}{\gamma^2} = \frac{U_0\rho}{\gamma^2}\left[\frac{1}{5} + \frac{U_0\rho(C + 12D)}{75\gamma\pi\eta_0}B^2\right].\label{eqn:normstress2-1} \end{eqnarray} The approximations are valid for $B \ll 1$, so for pushers ($U_0 < 0$) $N_{12} > 0$ and $N_{23} < 0$, whereas for pullers ($U_0 > 0$) $N_{12} < 0$ and $N_{23} > 0$. Both results are consistent with the predictions in \cite{Haines3,Saintillan2} while providing additional information about the concentration dependence. The effective normal stress coefficients grow linearly with concentration in the presence of interacting bacteria; moreover, the fact that the normal stresses of active suspensions are non-zero in the case of a planar shear flow indicates the {\it emergence of non-Newtonian behavior}. One sees in \eqref{eqn:normstress1-1}-\eqref{eqn:normstress2-1} that as the shear rate $\gamma \to \infty$ the normal stresses approach zero, indicating the dominance of the background flow on the suspension, overwhelming any contribution from interactions. \subsection{\bf Mechanisms required for the decrease in the effective viscosity}\label{mechev} In this subsection, the mechanisms that lead to a decrease in the effective viscosity are investigated. These same mechanisms are shown in \cite{RyaSokBerAra13} to be responsible for collective motion and large scale structure formation in suspensions of pushers. Our mathematical analysis provides insight beyond experiment.
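Before turning to the mechanisms, the qualitative content of \eqref{eqn:evformula} and \eqref{eqn:normstress1-1}-\eqref{eqn:normstress2-1} can be sanity-checked numerically. All parameter values below, including the combination $C+12D$, are arbitrary placeholders chosen in the small-$B$ regime where the formulas apply.

```python
# eta^int vanishes for spherical (B = 0) or passive (U0 = 0) swimmers and is
# negative otherwise; for pushers (U0 < 0), N12 > 0 and N23 < 0 for small B.
import math

def eta_int(U0, B, rho, A_hat, gamma, eta0):
    return -U0 ** 2 * B ** 2 * rho ** 2 * A_hat / (75 * gamma ** 2 * math.pi * eta0)

def normal_stresses(U0, B, rho, gamma, eta0, C12D):
    # C12D stands for the combination C + 12 D of the coefficient integrals
    n12 = (U0 * rho / gamma ** 2) * (-2 / 5 - 2 * U0 * rho * C12D * B ** 2
                                     / (75 * gamma * math.pi * eta0))
    n23 = (U0 * rho / gamma ** 2) * (1 / 5 + U0 * rho * C12D * B ** 2
                                     / (75 * gamma * math.pi * eta0))
    return n12, n23

# sample pusher parameters (placeholders, not fitted values)
U0, B, rho, A_hat, gamma, eta0, C12D = -1.0, 0.1, 1.0, 1.0, 1.0, 1.0, 1.0
ev = eta_int(U0, B, rho, A_hat, gamma, eta0)
n12, n23 = normal_stresses(U0, B, rho, gamma, eta0, C12D)
```

At these sample values $\eta^{\text{int}}<0$, $N_{12}>0$, and $N_{23}<0$, in agreement with the signs stated in the text for pushers.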
Formula \eqref{eqn:evformula} reveals that elongation of bacteria, self-propulsion, and interactions are all required to observe a decrease in the effective viscosity; namely, for spherical bacteria $(B = 0)$ the net change in the effective viscosity is zero. In addition, active bacteria are required, since $U_0 \sim f_p = 0$ results in no change in the effective viscosity, where $f_p$ is the propulsion force. Finally, if the spatial density $P_{\bf x}({\bf x})$ is near uniform, then $\hat{A} = \frac{1}{2N^2}\int \sin^2(2\theta) \hat{P}_{12}^2 d{\bf k} \approx 0$, resulting in no change in the effective viscosity. In the limit $\gamma \to \infty$ the contribution to the motion of bacteria due to shear dominates the contribution due to interactions, with $P_{\bf d}({\bf d})$ maximized at $\alpha = \pi/2$ and $\beta = \pi/2$ (alignment with the $y$-axis). This is analogous to the passive case, where bacteria in a planar shear flow tend to align with the direction in which the fluid exerts the least amount of torque on the bacterium body. This confirms our main conclusion: in order to exhibit a decrease in the effective viscosity, one needs active, elongated bacteria whose interactions result in a non-uniform distribution in space. \subsection{\bf Effective noise conjecture}\label{effnoise} In this subsection, the results herein, involving a semi-dilute suspension of point force dipoles, are compared to the previous result for a dilute suspension of prolate spheroids with propulsion modeled as a point force \cite{Haines3}, where the only contribution to bacterial motion is the background flow. In \cite{Haines3}, finite size bacteria are taken as spheroids with a point force ($\delta$ function) accounting for self-propulsion. In addition, each bacterium experiences a random reorientation referred to as tumbling. Biologically, tumbling corresponds to a reorientation of a bacterium in hopes of finding a more favorable (nutrient rich) environment.
Typically in experiment this is observed when the concentration of oxygen is low. Bacteria then enter a more dormant state, resulting in a lower swimming speed and an increased tumbling rate \cite{SokAra12}. Since only the term containing $\hat{A}$ contributes to the effective viscosity, one can choose to match the coefficient of this term in \begin{align*} P_{\bf d}^{int}&= \frac{1}{4\pi} -\frac{3}{8\pi}B\cos(2\alpha)\sin^2\beta +\frac{3}{16\pi}B^2\sin^4\beta\cos(4\alpha)\\ \quad & -U_0\rho\frac{C + 12D}{40\gamma\pi \eta_0}B^2\sin^2\beta\cos(2\alpha) -\frac{ U_0\rho \hat{A}}{40\gamma \pi \eta_0} B^2\sin^2\beta\sin(2\alpha)+ O(B^3) \end{align*} with the corresponding coefficient in the derivation by {\it Haines et al.} \cite{Haines3}, which is quadratic in the diffusion strength $D$. To make the formulas for the effective viscosity identical, the strength of the effective noise/diffusion (tumbling) is chosen to be \begin{equation*} \hat{D} := \frac{-15\eta_0\gamma^2 + \sqrt{225\eta_0^2\gamma^4 - \hat{A}^2B^2\gamma^2\rho^2U_0^2}}{12\hat{A}B\rho U_0} > 0, \end{equation*} (since $U_0 < 0$ for pushers). Observe that $\hat{D}$, chosen in this way, depends only on the physical parameters present in the problem, and the same effective viscosity as in the dilute case studied in \cite{Haines3} is found. This $\hat{D}$ is referred to as the \emph{effective noise}, and the phenomenon where stochasticity arises from a completely deterministic system is called \emph{self-induced noise}. Future work may seek to explain this phenomenon rigorously using mathematical analysis. One heuristic idea is that the periodic (deterministic) Jeffrey orbits are destroyed by interactions, resulting in stochastic behavior. Some conclusions about this effective noise can be made that ensure its consistency with physical reality. As bacteria become spherical ($B \to 0$), $\hat{D} \to 0$, resulting in no change in the effective viscosity, consistent with \cite{Haines3}.
Similarly, as the strain/shear rate $\gamma \to \infty$, $\hat{D} \to 0$. This is physically intuitive, because as the shear rate becomes large its contribution dominates that due to hydrodynamic interactions, resulting in behavior that resembles that of a passive suspension. Thus, the contribution to the effective viscosity due to hydrodynamic interactions is zero. Finally, we compare our results with direct simulations for the coupled PDE/ODE system composed of the Stokes PDE \eqref{stokeseqnsingle} and \eqref{rij-0}-\eqref{dij}. \subsection{\bf Comparison to numerical simulations}\label{compsim} In this subsection, the accuracy of the derived formula is tested by comparing it to recent numerical simulations. The numerical procedure is outlined in \cite{RyaHaiBerZieAra11}. These simulations are parallel in nature, allowing them to be carried out on GPUs for greater efficiency. \begin{figure}[h!] \centering \includegraphics[width=2.5in]{fig1.pdf} \caption{\footnotesize Comparison of the formula for the effective viscosity with numerical simulations as the bacterium shape changes through the Bretherton constant $B$ for a fixed volume fraction $\Phi = .02$ and shear rate $\gamma = .1$. The vertical bars represent the error in the numerical approximation. Error in the analytical solutions comes from the numerical estimation of $\hat{A}$.} \label{fig2a-1} \end{figure} Figure~\ref{fig2a-1} shows how both the formula and numerical computations of the viscosity change with bacterium shape as all other system parameters remain fixed. Here shape is accounted for through the Bretherton constant $B = \frac{b^2 - a^2}{b^2+a^2}$, where $b$ is the length of the major axis and $a$ is the length of the minor axis of the ellipsoid representing a bacterium. First, notice that in both the formula and the numerics the contribution to the effective viscosity due to hydrodynamic interactions decreases with $B$ (increasing in magnitude).
This is due to the fact that as bacteria become more elongated ($B \to 1$), the inter-bacterial hydrodynamic interactions have a greater effect on alignment. This alignment increases the magnitude of the dipolar stress, leading to an even larger decrease in the effective viscosity. The agreement between the analytical formula and the numerical simulations breaks down as $B$ becomes large, but this is expected, since the asymptotic formula is valid in the regime $B \ll 1$ (small non-sphericity). Figure~\ref{fig2a-2} shows how both the formula and numerical computations of the viscosity change with the concentration of the suspension as all other system parameters remain fixed. It is seen that as the concentration increases, the effective viscosity decreases. This can be explained by the fact that as the concentration increases, the motion of bacteria begins to be dominated by inter-bacterial hydrodynamic interactions. This leads to collective motion of the bacteria in the suspension, which subsequently decreases the viscosity. The two results begin to diverge near volume fraction $\Phi \approx .02$. The reason the numerical simulations do not decrease as much is that collisions are taken into account. It was shown in \cite{RyaHaiBerZieAra11} that the stress due to collisions is a positive contribution to the effective viscosity that is not captured by the formula. This contribution begins to become important beyond the dilute regime ($\Phi > 2\%$). \begin{figure}[h!] \centering \includegraphics[width=2.5in]{fig2.pdf} \caption{\footnotesize Comparison of the formula for the effective viscosity with numerical simulations as the volume fraction $\Phi$ changes for a fixed shape $B = .2$ and shear rate $\gamma = .1$.} \label{fig2a-2} \end{figure} Figure~\ref{fig2a-3} shows how both the formula and numerical computations of the viscosity change with the shear rate of the background flow in the suspension as all other system parameters remain fixed.
As expected, when the shear rate is large, the decrease in viscosity due to hydrodynamic interactions is negligible in both the analytical formula and the simulations. This is due to the fact that the background flow dominates the motion of bacteria, wiping out the effects of inter-bacterial interactions and preventing any collective structures from forming. When the shear rate is too small, the effective viscosity becomes unbounded. This makes sense given that at small shear rate the system becomes almost non-dissipative and thus the effective viscosity is not well-defined. This can be seen by noting that the viscosity is the ratio of the stress to the strain rate, and when the strain rate is essentially zero the effective viscosity becomes unbounded. All three plots show good qualitative agreement with each other, experimental observation, and physical intuition. \begin{figure}[h!] \centering \includegraphics[width=2.5in]{fig3.pdf} \caption{\footnotesize Comparison of the formula for the effective viscosity with numerical simulations as the shear rate $\gamma$ changes for a fixed volume fraction $\Phi = .02$ and shape $B=.2$.} \label{fig2a-3} \end{figure} \section{\bf Global solvability of the kinetic equation}\label{kin_solvable} In this section, we study the solvability of the main nonlinear integro-differential equation \eqref{cons2} governing the evolution of the orientation distribution. Primarily we are interested in the existence, uniqueness, and regularity properties of solutions of \eqref{cons2}. First, we note that \eqref{cons2} is an equation of the form: \begin{equation}\label{fpeqn-nodiff} \hspace{-.3in}\partial_t P_{\bf d} = -\nabla_{{\bf d}} \cdot \left(\left[ \int_{\mathcal{S}^2} K({\bf d}, {\bf d}')P_{{\bf d}}({\bf d}') dS_{{\bf d}'}+{\bf k}({\bf d})\right] P_{{\bf d}}\right)+ D\Delta_{{\bf d}} P_{{\bf d}}.
\end{equation} Indeed, one can obtain \eqref{cons2} by substituting \begin{eqnarray} K({\bf d}, {\bf d}')=\boldsymbol{\omega}({\bf d},{\bf d}') + B{\bf E}({\bf d},{\bf d}'),\;\; {\bf k}({\bf d})= \boldsymbol{\omega}^{BG}({\bf d}) + B{\bf E}^{BG}({\bf d}).\label{def_of_k} \end{eqnarray} Both $K$ and ${\bf k}$ from \eqref{def_of_k} are infinitely smooth functions of ${\bf d}$. Therefore, in this section we consider \eqref{fpeqn-nodiff} for the general case of smooth $K$ and ${\bf k}$. We follow the standard procedure for the analysis of the well-posedness of evolution PDEs (e.g., see \cite{Eva98,Lio69,FroLiu12}). In particular, we introduce the notion of a weak solution. By $H^s$ ($s\in \mathbb R$) we denote the corresponding Sobolev spaces. \begin{definition} For $T >0$, a function $f$ belonging to the space $\mathcal{H}$ given by \begin{equation} \label{weak_class} \mathcal{H}=L^2((0,T), H^1(\mathcal{S}^2)) \cap H^1((0,T),H^{-1}(\mathcal{S}^2)) \end{equation} is {\it a weak solution} of \eqref{fpeqn-nodiff} if for almost all $t \in [0,T]$ and all $h \in H^{1}(\mathcal{S}^2)$ \begin{equation}\label{eqnc} \langle \partial_t f, h \rangle = -D\langle \nabla_{\bf d} f, \nabla_{\bf d} h \rangle + \langle f, \left[\int_{\mathcal{S}^2} K({\bf d}, {\bf d}')f dS_{{\bf d}'} + {\bf k}({\bf d})\right] \cdot \nabla_{\bf d} h \rangle, \end{equation} where $\langle \cdot, \cdot \rangle$ is the duality product for distributions on the unit sphere $\mathcal{S}^2$. \end{definition} \begin{remark} According to a well-known embedding (see \cite{Sim87}), the fact that a weak solution $f$ belongs to $\mathcal{H}$ implies that it is continuous with respect to $t\in [0,T]$ with values in $L^2(\mathcal{S}^2)$, i.e., $f\in C([0,T];L^2(\mathcal{S}^2))$.
\end{remark} \begin{definition} A function $f\in C([0,T];L^2(\mathcal{S}^2))$ is called {\it positive in the distributional sense} if \begin{equation} \langle f,h\rangle \geq 0 \end{equation} for all $t \in [0,T]$ and all $h \in C(\mathcal{S}^2)$ such that $h({\bf d})\geq 0$ for all ${\bf d}\in \mathcal{S}^2$. \end{definition} The following theorem is the main result of this section. \begin{theorem}\label{thm_galerkin} Assume $f_0\in L^2(\mathcal{S}^2)$, $K\in C^2(\mathcal{S}^2\times \mathcal{S}^2)$, ${\bf k}\in C^2(\mathcal{S}^2)$ and $T>0$. Assume also that $f_0$ is positive in the distributional sense. Then the following statements hold: \begin{list}{}{} \item [{\it (i)}] There exists a unique weak solution $f$ of \eqref{fpeqn-nodiff} on the interval $[0,T]$ such that $f|_{t=0}=f_0$. The weak solution $f$ is positive. It depends continuously on the initial data, i.e., there exists a positive constant $C>0$ such that \begin{equation} \label{cont_dep_on_ic} \sup\limits_{t\in[0,T]} \|f^{(1)}-f^{(2)}\|_{L^2(\mathcal{S}^2)}\leq C \|f^{(1)}_0-f^{(2)}_0\|_{L^2(\mathcal{S}^2)}, \end{equation} where $f^{(1)}$ and $f^{(2)}$ are weak solutions with initial conditions $f^{(1)}|_{t=0}~=~f^{(1)}_0$ and $f^{(2)}|_{t=0}=f^{(2)}_0$, respectively. \medskip \item [{\it (ii)}] For every $s\geq 0$, if $f_0\in H^s(\mathcal{S}^2)$, then $f\in C([0,T];H^{s}(\mathcal{S}^2))$. \\If $f_0~\in~ C^{\infty}(\mathcal{S}^2)$, then $f~\in~ C([0,T];C^{\infty}(\mathcal{S}^2))$. \medskip \item[{\it (iii)}] For every $s\geq 0$, if $f_0\in H^s(\mathcal{S}^2)$, then for all $m \geq 0$ and $t>0$: \begin{equation}\label{regul_ineq} \| f(t)\|^2_{H^{s+m}(\mathcal{S}^2)} \leq C\left(1 + \frac{1}{t^m}\right), \end{equation} where the constant $C$ depends only on $\| f_0\|_{H^s(\mathcal{S}^2)}$, $s$, and $m$. In particular, $$f\in C((0,\infty); H^p(\mathcal{S}^2))$$ for all $p\in \mathbb Z$.
\end{list} \end{theorem} \begin{proof} $\;$\\ \noindent{STEP 0.} ({\it Preliminaries}) Consider the spaces of functions with mean zero: \begin{equation*} \dot{L}^2(\mathcal{S}^2):=L^2(\mathcal{S}^2)\cap\left\{f:\langle f ,1\rangle=0\right\},\;\; \dot{H}^s(\mathcal{S}^2):=H^s(\mathcal{S}^2)\cap\left\{f:\langle f ,1\rangle=0\right\}. \end{equation*} Note that for $f\in L^1(\mathcal{S}^2)$ \begin{equation*} \langle f ,1\rangle=\int_{\mathcal{S}^2}f d S_{\bf d}. \end{equation*} We use $\|\nabla_{\bf d}f\|_{L^2(\mathcal{S}^2)}$ as a norm in $\dot{H}^1(\mathcal{S}^2)$. In this proof we assume that $\int_{\mathcal{S}^2} f_0 dS_{\bf d}=1$. Consider the ``mean zero" component of the solution $f$; namely, $g:=f-\frac{1}{4\pi}$. If $f$ is the weak solution of \eqref{fpeqn-nodiff}, then $g$ satisfies \begin{eqnarray} \frac{d}{dt}\langle g, h\rangle &=& -D\langle \nabla_{\bf d} g, \nabla_{\bf d} h \rangle + \langle \frac{1}{4\pi} + g, \int_{\mathcal{S}^2} K({\bf d},{\bf d}')g ({\bf d}')dS_{{\bf d}'} \cdot \nabla_{{\bf d}} h\rangle \nonumber \\&&+ \langle \frac{1}{4\pi}+g, \left[\int_{\mathcal{S}^2} K({\bf d},{\bf d}') dS_{{\bf d}'} +{\bf k}({\bf d})\right] \cdot \nabla_{{\bf d}} h \rangle \label{eqn_for_g} \end{eqnarray} for all $h\in {H}^1(\mathcal{S}^2)$. Existence, uniqueness, and continuous dependence on initial conditions will be proven for $g$, which is equivalent to proving the same properties for $f$. Below, $C$ denotes a positive constant that may change from line to line. \medskip \noindent{STEP 1.} ({\it Local existence}) Let $E_N$ be the space spanned by the first $N$ eigenfunctions of the Laplace--Beltrami operator $\Delta_{\bf d}$, and let $\Pi_N$ be the orthogonal projector onto the space $E_N$.
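The spectral Galerkin construction that follows can be illustrated numerically. The sketch below is a toy analogue on the circle $\mathcal{S}^1$, where the eigenfunctions of the Laplacian are the Fourier modes, rather than on $\mathcal{S}^2$; the drift, diffusion coefficient, initial data, and time step are illustrative assumptions, and the interaction kernel $K$ is set to zero, so only the local drift ${\bf k}$ remains.

```python
import numpy as np

# Toy spectral-Galerkin sketch of STEP 1 on the circle S^1 (Fourier modes
# play the role of the Laplace-Beltrami eigenfunctions used on S^2).
# Illustrative assumptions: drift k(theta) = 0.5*sin(theta), D = 0.1, K = 0.
N = 64                                   # grid points / retained Fourier modes
D = 0.1                                  # diffusion coefficient
theta = 2*np.pi*np.arange(N)/N
wavenum = np.fft.fftfreq(N, d=1.0/N)     # integer wavenumbers
f = 1/(2*np.pi) + 0.1*np.cos(theta)      # positive initial density, mass 1
drift = 0.5*np.sin(theta)

dt, T = 1e-3, 2.0
for _ in range(int(T/dt)):
    # spectral evaluation of d/dtheta(drift*f) and of the Laplacian of f
    dflux = np.real(np.fft.ifft(1j*wavenum*np.fft.fft(drift*f)))
    lap = np.real(np.fft.ifft(-(wavenum**2)*np.fft.fft(f)))
    f = f + dt*(D*lap - dflux)           # forward Euler in time

mass = f.mean()*2*np.pi                  # total mass
```

In this truncated system the total mass is conserved exactly, because both spectral derivative terms have zero mean (the discrete counterpart of the conservation used in STEP 5), and the computed density stays positive, mirroring STEP 4.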
Introduce the Galerkin approximation $g^N$, which is the solution of the following equation: \begin{eqnarray} \frac{d}{dt}\langle g^N, h\rangle &=& -D\langle \nabla_{\bf d} g^N, \nabla_{\bf d} h \rangle + \langle \frac{1}{4\pi} + g^N, \int_{\mathcal{S}^2} K({\bf d},{\bf d}')g^N ({\bf d}')dS_{{\bf d}'} \cdot \nabla_{{\bf d}} h\rangle \nonumber \\&&+ \langle \frac{1}{4\pi}+g^N, \left[\int_{\mathcal{S}^2} K({\bf d},{\bf d}') dS_{{\bf d}'} +{\bf k}({\bf d})\right] \cdot \nabla_{{\bf d}} h \rangle, \label{eqn_for_gN} \end{eqnarray} for all $h\in E_N$, and $g^N|_{t=0}=\Pi_N g_0$, where $g_0:=f_0-\frac{1}{4\pi}$. In a standard manner, the problem \eqref{eqn_for_gN} can be interpreted as a system of $N$ ODEs, and its solution $g^N$ exists for $t\in [0,t_N)$ for some $t_N>0$. Taking $h=g^N$ in \eqref{eqn_for_gN}, using the Cauchy inequality and the boundedness of $K$ and ${\bf k}$, we obtain \begin{equation}\label{ineq_1_gN} \frac{d}{dt}\|g^N\|^2_{L^2(\mathcal{S}^2)}\;+\;D\|g^N\|^2_{H^1(\mathcal{S}^2)}\leq C \left(1+\|g^N\|^4_{L^2(\mathcal{S}^2)}\right). \end{equation} In the inequality \eqref{ineq_1_gN} the constant $C$ does not depend on $N$. This implies that $g^N$ exists for $0\leq t\leq t_0$, where $t_0$ may be chosen independently of $N$, and \begin{equation}\label{bound_on_gN} \|g^N(t)\|_{L^2(\mathcal{S}^2)}\leq C, \;\;0\leq t\leq t_0. \end{equation} The bound \eqref{bound_on_gN} implies that the RHS of \eqref{ineq_1_gN} is bounded by a constant independent of $N$. Then, by integrating \eqref{ineq_1_gN} in $t$, we get \begin{equation}\label{boundH1_on_gN} \int_0^{t_0} \|g^N\|^2_{H^1(\mathcal{S}^2)}dt\leq C.
\end{equation} Take $h \in L^2(0,t_0; \dot{H}^1(\mathcal{S}^2))$ in \eqref{eqn_for_gN}, integrate in $t$, and use the Cauchy inequality, $\langle u,v \rangle\leq C \|u\|_{H^1(\mathcal{S}^2)}\|v\|_{H^{-1}(\mathcal{S}^2)}$, and the Minkowski inequality to obtain \begin{align*} \int_0^{t_0} \langle \partial_t g^N, h\rangle dt \leq C\left[\int_0^{t_0} \| h\|_{H^{1}(\mathcal{S}^2)}^2dt\right]^{1/2}.\nonumber \end{align*} Therefore, \begin{equation}\label{bound_on_gtN} \int_0^{t_0} \| \partial_t g^N\|^2_{H^{-1}(\mathcal{S}^2)} dt \leq C. \end{equation} From the bounds \eqref{bound_on_gN}, \eqref{boundH1_on_gN}, \eqref{bound_on_gtN} and the following relation, which holds for all $g,h$ from $\mathcal{H}$, \begin{equation*} \int_0^{t_0} \langle \partial_t g, h\rangle= -\int_0^{t_0} \langle g,\partial_t h\rangle dt +\langle g(t_0), h(t_0)\rangle - \langle g(0), h(0)\rangle, \end{equation*} we obtain that there exists $g\in\mathcal{H}$ such that (up to a subsequence) \begin{eqnarray} &g^N \rightharpoonup g& \text{ in }L^{\infty}(0,t_0; L^2(\mathcal{S}^2))\cap L^{2}(0,t_0; H^1(\mathcal{S}^2)),\label{weak_conv_g}\\ &\partial_t g^N \rightharpoonup \partial_t g& \text{ in } L^{2}(0,t_0; H^{-1}(\mathcal{S}^2)).\label{weak_conv_gt} \end{eqnarray} In particular, the weak convergences in \eqref{weak_conv_g} and \eqref{weak_conv_gt} imply (by the compactness results of \cite{Sim87}) strong convergence in $C([0,t_0];L^2(\mathcal{S}^2))$. Thus, $$g|_{t=0}=\lim_{N\to\infty } g^N|_{t=0}=\lim_{N\to \infty} \Pi_N g_0=g_0, $$ and \begin{equation}\label{strong_conv_K} \int_{\mathcal{S}^2} K({\bf d},{\bf d}')g^N ({\bf d}')dS_{{\bf d}'} \rightarrow \int_{\mathcal{S}^2} K({\bf d},{\bf d}')g ({\bf d}')dS_{{\bf d}'} \text{ in } C([0,t_0];L^2(\mathcal{S}^2)). \end{equation} To complete the proof of local existence we need to show that $g$ solves \eqref{eqn_for_g}. To this end, consider \eqref{eqn_for_gN} with $h=w(t)h_0$, where $h_0\in E_M$, $M<N$, and $w(t)$ is an arbitrary smooth function of one argument $t$.
Integrate this equation in $t$ over the interval $(0,t_0)$, and pass to the limit $N\to \infty$ ($M$ is fixed) using \eqref{weak_conv_g}, \eqref{weak_conv_gt} and \eqref{strong_conv_K}. Since $w(t)$ is arbitrary, we obtain that \eqref{eqn_for_g} is satisfied for all $h_0$ from the space $\cup_{M} E_M$, which is dense in $\dot{H}^1(\mathcal{S}^2)$. Therefore, $g$ solves \eqref{eqn_for_g} for all $h \in \dot{H}^1(\mathcal{S}^2)$. Thus, we have constructed a function $g$ that is a weak solution of \eqref{eqn_for_g} defined on the time interval $0\leq t\leq t_0$. \medskip \noindent{STEP 2.} ({\it Uniqueness \& continuous dependence on initial conditions}) \\ Consider $g^{(1)}$ and $g^{(2)}$, weak solutions of \eqref{eqn_for_g} defined on the time interval $[0,t_0]$ with initial data $g^{(1)}_0$ and $g^{(2)}_0$, respectively. For $i=1,2$, substituting $h=g^{(i)}$ into the equation \eqref{eqn_for_g} written for $g^{(i)}$ and using the same arguments as for \eqref{bound_on_gN}, one obtains \begin{equation}\label{gi_bound} \sum_{i=1,2}\|g^{(i)}\|^2_{L^2(\mathcal{S}^2)} \leq C,\;\;0\leq t\leq t_0, \end{equation} where the constant $C$ depends only on the initial data $g^{(i)}_0$ and the parameter $D$.
By subtracting the equation \eqref{eqn_for_g} written for $g^{(2)}$ from the equation \eqref{eqn_for_g} written for $g^{(1)}$, we get the following equality for the difference $u:=g^{(1)}-g^{(2)}$: \begin{eqnarray} \langle \partial_t u, h\rangle &=& -D\langle \nabla_{{\bf d}} u, \nabla_{{\bf d}} h\rangle + \langle \left[\int_{\mathcal{S}^2} K({\bf d},{\bf d}') u dS_{{\bf d}'}\right], \nabla_{{\bf d}} h\rangle \nonumber \\&&+ \langle u, \left[\int_{\mathcal{S}^2} K({\bf d},{\bf d}') dS_{{\bf d}'} + {\bf k}\right]\cdot\nabla_{{\bf d}} h \rangle\nonumber\\& & + \langle u, \left[\int_{\mathcal{S}^2} K({\bf d},{\bf d}') g^{(1)} dS_{{\bf d}'}\right] \cdot\nabla_{{\bf d}} h\rangle \nonumber \\&&+\langle g^{(2)}, \left[\int_{\mathcal{S}^2} K({\bf d},{\bf d}') u dS_{{\bf d}'} \right]\cdot\nabla_{{\bf d}} h \rangle.\label{eqn4} \end{eqnarray} By taking $h=u$, using the Cauchy inequality, and \eqref{gi_bound} we obtain \begin{equation*} \frac{d}{dt}\|u\|^2_{L^2(\mathcal{S}^2)} +D \|u\|^2_{H^1(\mathcal{S}^2)}\leq C\|u\|^2_{L^2(\mathcal{S}^2)}. \end{equation*} This inequality implies that $\|u(t)\|^2_{L^2(\mathcal{S}^2)}\leq e^{Ct}\|u(0)\|^2_{L^2(\mathcal{S}^2)}$, and, thus, \begin{equation}\label{g_cont_in_ic} \|g^{(1)}(t)-g^{(2)}(t)\|_{L^2(\mathcal{S}^2)}<e^{Ct} \|g^{(1)}_0-g^{(2)}_0\|_{L^2(\mathcal{S}^2)}. \end{equation} Again, the constant $C$ depends only on the initial data $g^{(i)}_0$ and the parameter $D$. The inequality \eqref{g_cont_in_ic} implies that a weak solution of \eqref{eqn_for_g} depends continuously on the initial data. In particular, uniqueness holds: if $g^{(1)}_0=g^{(2)}_0$, then from \eqref{g_cont_in_ic} it follows that the corresponding solutions $g^{(1)}$ and $g^{(2)}$ coincide. \medskip \noindent{STEP 3.} ({\it Regularity of weak solutions})\\ Consider a weak solution $g$ and assume $g_0\in \dot{H}^{s}(\mathcal{S}^2)$ with $s\in \mathbb Z_+$. Such a weak solution exists due to STEP 1, and it can be approximated by the Galerkin approximations $g^N$, which follows from the uniqueness proved in STEP 2.
By substituting $h=(-\Delta_{\bf d})^{s}g^N$ into the equation \eqref{eqn_for_gN}, using the Cauchy inequality and \eqref{bound_on_gN}, we obtain \begin{equation}\label{ineq_g_s} \frac{d}{dt}\|g^N\|^2_{H^s(\mathcal{S}^2)}+D \|g^N\|^2_{H^{s+1}(\mathcal{S}^2)}\leq C(\|g^N\|^2_{H^s(\mathcal{S}^2)}+1), \end{equation} where the constant $C$ depends on $\|g_0\|_{L^2(\mathcal{S}^2)}$, $\|g_0\|_{H^s(\mathcal{S}^2)}$ and the parameter~$D$. In the same manner as for \eqref{bound_on_gN}, \eqref{boundH1_on_gN} and \eqref{bound_on_gtN}, it follows from \eqref{ineq_g_s} that \begin{eqnarray*} g^N && \text{ is bounded in }L^2 (0,t_0; H^{s+1}(\mathcal{S}^2))\cap L^{\infty} (0,t_0; H^{s}(\mathcal{S}^2)),\\ \partial_t g^N && \text{ is bounded in }L^2 (0,t_0; H^{s-1}(\mathcal{S}^2)). \end{eqnarray*} Hence, $g\in L^2 (0,t_0; H^{s+1}(\mathcal{S}^2))\cap L^{\infty} (0,t_0; H^{s}(\mathcal{S}^2)) \cap H^1(0,t_0; H^{s-1}(\mathcal{S}^2))$. The standard embedding theorem (e.g., from \cite{Sim87}) implies $g\in C([0,t_0];H^{s}(\mathcal{S}^2))$. \medskip \noindent{STEP 4.} ({\it Positivity of weak solutions})\\ Consider $f=\frac{1}{4\pi}+g$, a weak solution of \eqref{fpeqn-nodiff}. Assume first $f_0\in H^4(\mathcal{S}^2)$ and $f_0({\bf d})\geq 0$. Then $f$ belongs to $C([0,t_0];C^2(\mathcal{S}^2))$, and thus $f$ is a classical solution of~\eqref{fpeqn-nodiff}: \begin{equation*} \partial_t f= D\Delta_{\bf d} f-F\cdot \nabla_{\bf d} f-(\nabla_{\bf d} \cdot F)f, \end{equation*} where $F({\bf d}):=\int_{\mathcal{S}^2}K({\bf d},{\bf d}')f({\bf d}')dS_{{\bf d}'}+{\bf k}({\bf d})\in C([0,t_0];C^1(\mathcal{S}^2))$. Consider $\tilde{f}:=fe^{\omega t}$, where $\omega:=\max\limits_{[0,t_0]\times \mathcal{S}^2}|\nabla_{\bf d}\cdot F|$. Then $\tilde{f}$ solves the following equation \begin{equation*} \partial_t \tilde{f}= D\Delta_{\bf d} \tilde{f}-F\cdot \nabla_{\bf d} \tilde{f}+(\omega-\nabla_{\bf d} \cdot F)\tilde{f}.
\end{equation*} Since $\omega-\nabla_{\bf d} \cdot F\geq 0$, the weak maximum principle for parabolic equations applies to $\tilde{f}$, and, thus, $f\geq 0$. Consider now the case of $f_0\in L^2(\mathcal{S}^2)$ positive in the distributional sense. Then we can approximate $f_0$ in $L^2(\mathcal{S}^2)$ by positive $f^N_0\in H^4(\mathcal{S}^2)$. Denote by $f^N$ the solutions of \eqref{fpeqn-nodiff} with initial data $f^N_0$. Then, by \eqref{g_cont_in_ic}, we can pass to the limit $N\to\infty$ in the inequality \begin{equation*} \langle f^N(t), h\rangle \geq 0 \end{equation*} for all $0\leq t \leq t_0$ and $h\in C(\mathcal{S}^2)$. Thus, the function $f$, which is the solution of \eqref{fpeqn-nodiff} with initial data $f_0$, is positive at least in the distributional sense. \medskip \noindent{STEP 5.} ({\it Global existence})\\ Consider $f_0=\frac{1}{4\pi}+g_0\in L^2(\mathcal{S}^2)$, positive in the distributional sense. The functions $f$ and $g$ are weak solutions of \eqref{fpeqn-nodiff} and \eqref{eqn_for_g}, respectively. In this step we prove that the time interval on which $f$ and $g$ are defined can be extended from $[0,t_0]$ to $[0,T]$ for any given $T>0$. First, observe that \begin{equation*} \int_{\mathcal{S}^2}f(t) dS_{\bf d}=\int_{\mathcal{S}^2} f_0dS_{\bf d}=1. \end{equation*} From the equality above and the positivity of $f$ established in STEP 4 we obtain \begin{equation*} \|f(t)\|_{L^1(\mathcal{S}^2)}=1. \end{equation*} In particular, since $|g|\leq |f|+\frac{1}{4\pi}$, we have \begin{equation}\label{aux_ineq_for_global} \int_{\mathcal{S}^2}K({\bf d},{\bf d'})g({\bf d}')dS_{{\bf d}'}\leq {C} (\|f(t)\|_{L^1(\mathcal{S}^2)}+1)=2C. \end{equation} Substitute $h=g$ into \eqref{eqn_for_g}, use the Cauchy inequality and \eqref{aux_ineq_for_global} to obtain \begin{equation*} \frac{d}{dt}\|g\|_{L^2(\mathcal{S}^2)}^2 + D\|g\|_{H^1(\mathcal{S}^2)}^2\leq C \left(\|g\|_{L^2(\mathcal{S}^2)}^2+1\right).
\end{equation*} Then the $L^2$-norm of the weak solution is bounded on every bounded time interval $[0,T]$: $$\max\limits_{0\leq t\leq T}\|g(t)\|_{L^2(\mathcal{S}^2)}^2<C(e^{CT}+1).$$ Thus, global existence follows. \medskip \noindent{STEP 6.} ({\it Instantaneous regularity})\\ Consider positive $f_0\in H^s(\mathcal{S}^2)$ and the corresponding weak solution $f=\frac{1}{4\pi}+g$ of \eqref{fpeqn-nodiff}. According to STEP 3, $f\in L^2([0,T];H^{s+1}(\mathcal{S}^2))$ and, thus, $f\in H^{s+1}(\mathcal{S}^2)$ for almost all $t>0$. Hence, there exists $\tilde{t}>0$ arbitrarily close to $0$ such that $f(\tilde{t})\in H^{s+1}(\mathcal{S}^2)$. Then, by uniqueness and STEP 3, $f\in C([\tilde{t},T];H^{s+1}(\mathcal{S}^2))$. We can choose $\tilde{t}$ arbitrarily small and $T$ arbitrarily large (due to the global existence proved in STEP 5). By repeating the same arguments for $s+1$, $s+2$, and so on, we get \begin{equation*} f\in C((0,+\infty);H^{p}(\mathcal{S}^2)) \end{equation*} for all $p\in \mathbb Z$. Next we prove \eqref{regul_ineq} by induction with respect to $m$. Substitute $h=(-\Delta_{\bf d})^{s}g+t(-\Delta_{\bf d})^{s+1}g$ for $t>0$ in \eqref{eqn_for_g} and use the Cauchy inequality to obtain \begin{eqnarray*} &&\frac{d}{dt}\left(\| g \|^2_{\dot{H}^s(\mathcal{S}^2)} +\frac{D}{2}t\| g \|^2_{\dot{H}^{s+1}(\mathcal{S}^2)}\right)\\ && \hspace{30 pt}+ \frac{D}{2}\left(\| g \|^2_{\dot{H}^{s+1}(\mathcal{S}^2)} + \frac{D}{2}t\| g\|^2_{\dot{H}^{s+2}(\mathcal{S}^2)}\right) \leq C\| g_0\|^2_{H^s(\mathcal{S}^2)}(1+t). \end{eqnarray*} Using the Poincar\'e inequality $\|g\|_{\dot{H}^{s+k}(\mathcal{S}^2)}\leq\|g\|_{\dot{H}^{s+k+1}(\mathcal{S}^2)}$, we obtain \begin{equation*} \| g \|^2_{\dot{H}^s(\mathcal{S}^2)} +\frac{D}{2}t\| g \|^2_{\dot{H}^{s+1}(\mathcal{S}^2)}\leq C\| g_0\|^2_{H^s(\mathcal{S}^2)}(1+t). \end{equation*} Thus, the base case of the induction is established: \begin{equation}\label{m1} \| g \|^2_{\dot{H}^{s+1}(\mathcal{S}^2)}<C\| g_0\|^2_{H^s(\mathcal{S}^2)}\left(1+\frac{1}{t}\right).
\end{equation} Finally, to get the inequality \eqref{regul_ineq} at order $m+1$, we use the inequality \eqref{regul_ineq} at order $m$ between times $t/2$ and $t$, and \eqref{m1} between times 0 and $t/2$: \begin{align*} \| g(t)\|^2_{H^{s+m+1}(\mathcal{S}^2)} &\leq C\| g\left(\frac{t}{2}\right)\|^2_{H^{s+1}(\mathcal{S}^2)}\left( 1 + \left(\frac{2}{t}\right)^m\right)\\ &\leq C\| g_0\|^2_{H^s(\mathcal{S}^2)}\left(1 + \frac{1}{t^{m+1}}\right). \end{align*} Thus, \eqref{regul_ineq} is proved by induction. \medskip \noindent{STEP 7.} ({\it Proof of Theorem \ref{thm_galerkin}}) \begin{list}{}{} \item{({\it i})} Existence of a weak solution of \eqref{fpeqn-nodiff} for arbitrary $T>0$ is proved in STEP 5. Uniqueness is proved in STEP 2. To prove continuous dependence on the initial data on an arbitrary time interval $[0,T]$, one repeats the arguments of STEP 2 with $t_0$ replaced by $T$. Positivity is proved in STEP 4. \item{({\it ii})} This part is proved in STEP 3, with $t_0$ replaced by $T$. \item{({\it iii})} This part is proved in STEP 6. \end{list} \rightline{$\square$} \end{proof} \section{\bf Conclusions}\label{conc} In this paper, the formula for the effective viscosity formally derived in \cite{RyaHaiBerZieAra11} was made rigorous, and an additional term in the asymptotic expansion for the effective viscosity was derived (now up to $O(B^2)$). This formula revealed the physical mechanisms responsible for the decrease in the effective viscosity, confirming the prior formal calculation. Namely, hydrodynamic interactions, an elongated body, and self-propulsion are required to observe a decrease. These features are all present in the bacterium {\it Bacillus subtilis} used in the experiments of Aranson et al. \cite{Sokolov2,Sokolov3,Sokolov,SokGolFelAra09,SokAra12}, which motivated this study of the effective viscosity.
In addition, an interesting phenomenon was uncovered: the emergence of self-induced noise, in which a completely deterministic system governed by interactions resembles a random system for certain regimes of the physical parameters. The explicit analytical formula for the effective viscosity derived herein showed good qualitative agreement with simulations and experiment. This paper also establishes the global solvability of the kinetic PDE governing the evolution of the bacterium orientation density. In order to derive the formula for the effective viscosity, a steady state was assumed to exist and was then computed asymptotically. Rigorously proving the convergence to a steady-state distribution may be the subject of future work. \section*{Acknowledgment} The authors thank V.A. Rybalko and I.~S. Aranson for helpful discussions. The work of LB, MP, and SR was supported by DOE Grant DE-FG-0208ER25862. \vspace{0.3 in} \vspace{-0.2 in} \bibliographystyle{spmpsci}
\section*{Introduction} Since at least \citet{Merton-1971}, many results on portfolio optimization problems have been obtained in a continuous-time framework. It remains difficult to solve optimal portfolio problems when there is some degree of predictability in asset returns, \emph{i.e.}, when the investment opportunities are time-varying. A great number of papers have proposed to use a VAR model to forecast returns and have studied its implications for the long-term portfolio choice problem. The academic literature has followed two main directions. The first one relies on mathematical tools and establishes explicit solutions (see among others \citet{Kim-Omberg-1996}, \citet{Liu-2007} and references therein). The second line of research consists in implementing challenging numerical methods. \cite{Barberis-2000} developed a state-space discretization method which serves as a benchmark. \cite{Brandt-al.-2005}, \cite{Binsbergen-Brandt-2007}, and \citet{Garlappi-Skoulakis-2009}, among others, use sophisticated backward induction techniques and evaluate the accuracy of their results by comparing them to this benchmark. However, all discrete numerical procedures approximate, directly or indirectly, a highly nonlinear value function and cannot explicitly separate the so-called \emph{hedging demand} from the so-called \emph{myopic demand}. \citet{Garlappi-Skoulakis-2011} provide a general discussion of approximation accuracy in discrete time. This paper works in continuous time and uses the explicit solution of the portfolio choice problem, and then constructs a bridge between the continuous- and discrete-time VAR models as in \citet{Campbell-al.-2004}. Indeed, these authors provided evidence that there should exist only minor discrepancies between results under discrete- and continuous-time models. Thus, the numerical results that we derive from continuous time are indirectly comparable to those of \citet{Garlappi-Skoulakis-2009}.
We show that, for large degrees of risk aversion and/or small horizons, when the state variable is close to its unconditional mean, the two sets of numerical results are quite similar. Otherwise, the results under our explicit solution in continuous time exhibit some discrepancies with \citet{Garlappi-Skoulakis-2009} when the risk aversion decreases and/or when the time horizon increases. We argue that this is due to the large sensitivity of the total demand to the state variable (the Sharpe ratio). The paper is organized as follows. Section 1 presents the way we map the continuous-time investment opportunity set into the discrete-time one. Section 2 gives some insights on the explicit solution for the long-term investor with CRRA preferences. Section 3 gives numerical results based on the example of~\citet{Brandt-al.-2005}. \section{Investment opportunity sets} We first present the model in a continuous-time framework and in a discrete-time framework to study the impact of a predictable component in stock returns. Next, we show how to recover continuous-time parameters that are consistent with discrete-time VAR estimates. \subsection{Opportunity set in continuous time} We start by assuming that two assets are available to the investor (\citet{Campbell-al.-2004} and \citet{Kim-Omberg-1996} among others). On the one hand, the riskless asset pays a constant interest rate~$r$ \begin{equation} \frac{ⅆ P^f_t}{P^f_t} = r ⅆt \label{eq:Pf}\\ \end{equation} where~$P^f_t$ denotes the price of this asset at time~$t$. On the other hand, there is a risky asset whose price~$P_t$ satisfies the following diffusion process \begin{equation} \frac{ⅆ P_t}{P_t} = µ_t ⅆt + 𝜎 ⅆB_t^p \\ \end{equation} where~$B_t^p$ denotes a scalar Brownian motion with zero drift and unit variance rate. The drift rate~$µ_t$ follows a diffusion process as well. It is supposed to be time-varying and state-variable dependent. The volatility of the risky asset is assumed to be constant.
This is not a strong assumption for the long-term investor (see \citet{Campbell-Viceira-2002}). Let~$X_t$ denote the Sharpe ratio, i.e., the market price of risk (the compensation per unit of risk borne on the risky asset) \begin{equation} X_t = \frac{µ_t - r}{𝜎} \end{equation} The Sharpe ratio is then assumed to follow the usual ``Ornstein--Uhlenbeck'' diffusion process \begin{equation} ⅆ X_t = 𝜅(𝜃 - X_t) ⅆt + 𝜁 ⅆB_t^x \quad 𝜅, 𝜃, 𝜁 > 0 \end{equation} where~$B_t^x$ denotes another scalar Brownian motion with zero drift and unit variance rate. Parameters~$𝜃$ and~$𝜅$ denote respectively the unconditional mean and the mean-reversion parameter of the Sharpe ratio~$X_t$. In fact, the parameter~$𝜅$ reflects the rate at which shocks to the Sharpe ratio dissipate as it reverts towards its long-term mean~$𝜃$. Finally, the parameter~$𝜁$ denotes the instantaneous volatility of the Sharpe ratio. It controls the diffusion rate of the process. The above equations imply that the instantaneous return on stocks~$ⅆ P_t/P_t$ follows a diffusion process whose drift is mean-reverting and whose innovations are correlated with those of the market price of risk itself, with correlation coefficient~$𝜌$. Thus the following equations hold. \begin{align} ⅆ P_t/P_t & = (𝜎 X_t + r) ⅆ t + 𝜎 ⅆB_t^p \label{eq:cont1}\\ ⅆ X_t & = 𝜅(𝜃 - X_t) ⅆt + 𝜁 ⅆB_t^x \label{eq:cont2} \end{align} with~$ⅆB_t^p ⅆB_t^x = 𝜌 ⅆt$. Equations~\eqref{eq:cont1} and~\eqref{eq:cont2} define a joint stochastic process in continuous time. \subsection{Opportunity set in discrete time} The standard model in discrete time is a restricted VAR(1) process which captures the predictability of stock returns (see \citet{Barberis-2000} for instance). We focus on the example analyzed in \citet{Brandt-al.-2005} that was reused in \citet{Binsbergen-Brandt-2007} and in \citet{Garlappi-Skoulakis-2009}.
The log excess returns of the risky asset~$𝛥 \ln P_{t+1} - r^f$ are assumed to be predictable by the log dividend-to-price ratio~$z_t$ ($r^f$ denotes the risk-free rate and is equal to 6\% on an annualized basis). The joint dynamics of these two variables are specified such that \begin{align} 𝛥 \ln P_{t+1} - r^f & = a_r + b_r z_t + 𝜀^r_{t+1} \label{eq:disc1}\\ z_{t+1} & = a_z + b_z z_t + 𝜀^z_{t+1} \label{eq:disc2} \end{align} with \begin{equation} \begin{pmatrix} 𝜀^r_{t+1}\\𝜀^z_{t+1} \end{pmatrix} \sim N \begin{bmatrix} \begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix}𝜎^2_r & 𝜎_{rz} \\ 𝜎_{rz} & 𝜎^2_z\end{pmatrix} \end{bmatrix} \end{equation} In fact, \citet{Campbell-Shiller-1988} forcefully argue that, if returns are predictable, the log dividend-to-price ratio should capture at least some of that predictability. A substantial, long-standing empirical literature has documented many properties of these two regressions. \citet{Brandt-al.-2005} report the following estimated values (using the CRSP U.S. quarterly index from January 1986 to December 1995) \[ b_r = \num{.060} \quad b_z = \num{.958} \quad \frac{𝜎_{rz}}{𝜎_r \, 𝜎_z} = \num{-.941} \] The returns are weakly predictable, the dividend yield is highly persistent, and the shocks are strongly negatively correlated. \subsection{Recovering continuous-time parameters from discrete-time VAR} We closely follow \citet{Campbell-al.-2004} to recover the parameters of the continuous-time system eqs~\eqref{eq:cont1}--\eqref{eq:cont2} from the restricted VAR(1) eqs~\eqref{eq:disc1}--\eqref{eq:disc2}. However, \citet{Campbell-al.-2004} use the risk premium as the state variable; we prefer to use the Sharpe ratio.
In matrix form, the discrete-time VAR eqs~\eqref{eq:disc1}--\eqref{eq:disc2} is \begin{equation} \begin{pmatrix} 𝛥 \ln P_{t+1} - r^f \\ z_{t+1} \end{pmatrix} = \begin{pmatrix} a_r \\ a_z \end{pmatrix} + \begin{pmatrix} 0 & b_r \\ 0 & b_z \end{pmatrix} \begin{pmatrix} 𝛥 \ln P_t - r^f \\ z_t \end{pmatrix} + \begin{pmatrix} 𝜀^r_{t+1} \\ 𝜀^z_{t+1} \end{pmatrix} \label{eq:disc-matrix} \end{equation} The first step is to aggregate the continuous-time model over a span of time taking point observations at evenly spaced points~$\{ t_0, t_1, …, t_n, t_{n+1}, … \}$, with~$𝛥t = t_n - t_{n-1}$. We then obtain, using the discretization method developed by \citet{Bergstrom-1984} \begin{align} \begin{pmatrix} 𝛥 \ln P_{t_{n+1}} - r 𝛥t \\ X_{t_{n+1}} \end{pmatrix} = {} &\begin{pmatrix} (-𝜎^2/2 + 𝜎𝜃)𝛥t - (1-ⅇ^{-𝜅𝛥t}) \frac{𝜎𝜃}{𝜅} \\ (1-ⅇ^{-𝜅𝛥t})𝜃 \end{pmatrix} + {}\nonumber\\ &\begin{pmatrix} 1 & (1-ⅇ^{-𝜅𝛥t}) \frac{𝜎}{𝜅} \\ 0 & ⅇ^{-𝜅𝛥t} \end{pmatrix} \begin{pmatrix} 𝛥 \ln P_{t_n} - r \\ X_{t_n} \end{pmatrix} + \begin{pmatrix} U^p_{t_{n+1}} \\ U^x_{t_{n+1}} \end{pmatrix} \label{eq:cont-matrix} \end{align} where \begin{equation} \begin{pmatrix} U^p_{t_{n+1}}\\U^x_{t_{n+1}} \end{pmatrix} = \int_{𝜏=0}^{𝛥t} \begin{pmatrix} 1 & (1-ⅇ^{-𝜅𝛥t}) \frac{𝜎}{𝜅} \\ 0 & ⅇ^{-𝜅𝛥t} \end{pmatrix} \begin{pmatrix} 𝜎 & 0\\ 𝜁𝜌 & 𝜁\sqrt{1{-}𝜌^2} \end{pmatrix} \begin{pmatrix} ⅆB^p_{t_n+𝜏}\\ⅆZ^x_{t_n+𝜏} \end{pmatrix} \label{eq:u_matrix} \end{equation} with~$ⅆB^x_t = 𝜌 ⅆB^p_t + \sqrt{1{-}𝜌^2} ⅆZ^x_t$, where~$B^p_t$ and~$Z^x_t$ are two independent Brownian motions. The second step is to apply a linear transformation to the process~$X_t$ in~\eqref{eq:cont-matrix} so that we can relate the parameters of the transformed system to the parameters of the matrix form~\eqref{eq:disc-matrix} of the discrete-time VAR model.
Thus, normalizing the time span to~$𝛥t = 1$ (one quarter, since the data are quarterly), we get (for~$b_z, b_r > 0$) \begin{align} r & = r^f \label{eq:r} \\ 𝜃 & = \frac{a_z b_r}{𝜎_r(1{-}b_z)} + \frac{a_r + 𝜎_r^2/2}{𝜎_r} \label{eq:theta}\\ 𝜅 & = - \ln(b_z) \label{eq:kappa} \\ 𝜎 & = 𝜎_r \label{eq-sigma} \\ 𝜁 & = b_r \frac{𝜎_z}{𝜎_r} \label{eq:Sigma_theta}\\ 𝜌 & = \frac{𝜎_{rz}}{𝜎_r \, 𝜎_z} \label{eq-rho} \end{align} The appendix proves these results. Table~\ref{tab:Recovering} shows the values of the parameters of the continuous-time equivalent VAR implied by the \citet{Brandt-al.-2005} estimates. \begin{table} \centering \begin{tabular}{c>{\qquad}c} \toprule \textbf{Discrete-time world} & \textbf{Continuous-time world} \tabularnewline \midrule \multicolumn{2}{c}{\textbf{Models}}\tabularnewline \citet{Brandt-al.-2005} & \citet{Kim-Omberg-1996} \tabularnewline \begin{tabular}{r<{~=~}@{}l} $𝛥 \ln P_{t+1} - r^f$ & $a_r + b_r z_t + 𝜀^r_{t+1}$ \tabularnewline $z_{t+1}$ & $a_z + b_z z_t + 𝜀^z_{t+1}$ \tabularnewline $\mathrm{V}(𝜀^r_t)$ & $𝜎^2_r$ \tabularnewline $\mathrm{V}(𝜀^z_t)$ & $𝜎^2_z$ \tabularnewline $\mathrm{Cov}(𝜀^r_t, 𝜀^z_t)$ & $ 𝜎_{rz}$ \end{tabular} & \begin{tabular}{r<{~=~}@{}l} $ⅆ P^f_t/P^f_t$ & $r ⅆt$ \tabularnewline $ⅆ P_t/P_t$ & $(𝜎 X_t + r) ⅆ t + 𝜎 ⅆB_t^p$ \tabularnewline $ⅆ X_t$ & $𝜅(𝜃 - X_t) ⅆt + 𝜁 ⅆB_t^x$ \tabularnewline $ⅆB_t^p ⅆB_t^x$ & $𝜌 ⅆt$ \tabularnewline \end{tabular} \tabularnewline \midrule \multicolumn{2}{c}{\textbf{Parameter values}} \tabularnewline \citet{Brandt-al.-2005} & Our computations eqs~\eqref{eq:r}--\eqref{eq-rho} \tabularnewline \begin{tabular}{cr} $r^f$ & \num{.015} \tabularnewline $a_r$ & \num{.227} \tabularnewline $b_r$ & \num{.060} \tabularnewline $a_z$ & \num{-.155} \tabularnewline $b_z$ & \num{.958} \tabularnewline $𝜎^2_r$ & \num{.0060} \tabularnewline $𝜎^2_z$ & \num{.0049} \tabularnewline $𝜎_{rz}$ & \num{-.0051} \tabularnewline \end{tabular} & \begin{tabular}{cr} $r$ & \num{.015} \tabularnewline $𝜃$ & \num{0.111} \tabularnewline $𝜅$ &
\num{0.0429} \tabularnewline $𝜎$ & \num{.0775} \tabularnewline $𝜁$ & \num{0.0542} \tabularnewline $𝜌$ & \num{-0.941} \tabularnewline \end{tabular} \tabularnewline \bottomrule \end{tabular} \caption{Recovering continuous-time parameters} \label{tab:Recovering} \end{table} \section{Portfolio choice problem in continuous time with CRRA preferences} We can now solve the portfolio choice problem of the investor with a long-term horizon who faces the investment opportunity set described in the previous section. We rely on the recent advances in~\citet{Honda-Kamimura-2011}, who use the verification theorem and show that the explicit solution provided in continuous time is in fact an optimal solution, especially for risk aversion greater than one. We consider an investor with initial wealth~$W_{t_0} > 0$ who has only two assets (a riskless short-term bond and stocks) available for investment. The financial markets are incomplete. Furthermore, the investor can trade continuously, has no labor income, and only cares about terminal wealth~$W_T$, where~$T$ is the finite planning horizon. The dynamics of price changes are described by~\eqref{eq:Pf} and~\eqref{eq:cont1}--\eqref{eq:cont2}. If~$𝛼_t$ is the share of wealth invested in stocks, the wealth dynamics are given by \begin{equation} \frac{ⅆ W_t}{W_t} = 𝛼_t\frac{ⅆ P_t}{P_t} + (1{-}𝛼_t) \frac{ⅆ P^f_t}{P^f_t} \end{equation} Substituting the dynamics of~$ⅆ P_t/P_t$ and~$ⅆ P^f_t/P^f_t$, the wealth dynamics (also called the budget constraint) become: \begin{equation} ⅆ W_t = (𝛼_t 𝜎 X_t + r)W_t ⅆ t + 𝛼_t 𝜎 W_t ⅆ B^p_t \label{eq:budget-constraint} \end{equation} Notice that the wealth process reflects uncertainty about instantaneous returns (the term~$ⅆ B^p_t$) and about the state variable (the term~$X_t$).
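As a sanity check, the mapping eqs~\eqref{eq:r}--\eqref{eq-rho} can be evaluated directly on the \citet{Brandt-al.-2005} estimates. The short sketch below (plain Python, not part of the original derivation) reproduces the recovered values of $𝜅$, $𝜃$, $𝜁$ and $𝜌$ reported in Table~\ref{tab:Recovering}.

```python
import numpy as np

# Recover the continuous-time parameters from the Brandt et al. quarterly
# VAR estimates via eqs (kappa), (theta), (sigma), (zeta), (rho).
a_r, b_r = 0.227, 0.060
a_z, b_z = -0.155, 0.958
var_r, var_z, cov_rz = 0.0060, 0.0049, -0.0051

sigma_r, sigma_z = np.sqrt(var_r), np.sqrt(var_z)

kappa = -np.log(b_z)                                            # mean reversion
sigma = sigma_r                                                 # return volatility
theta = a_z*b_r/(sigma_r*(1 - b_z)) + (a_r + var_r/2)/sigma_r   # mean Sharpe ratio
zeta = b_r*sigma_z/sigma_r                                      # Sharpe-ratio vol
rho = cov_rz/(sigma_r*sigma_z)                                  # correlation
```

Evaluating the sketch gives $𝜅 \approx 0.0429$, $𝜃 \approx 0.111$, $𝜁 \approx 0.0542$ and $𝜌 \approx -0.941$, matching the table.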
Given this formalization of the wealth process, the investor's optimization problem at time~$t_0$ can then be expres\-sed as \begin{equation} \max_{𝛼_{t_0}} \quad \mathrm{E}_{t_0} \, ⅇ^{-𝛽 T} u(W_T) \quad \text{subject to the constraint \eqref{eq:budget-constraint}} \quad W_{t_0} \text{ fixed} \label{eq:J} \end{equation} where~$\mathrm{E}_{t_0}$ denotes the conditional expectation operator at date~$t_0$, $𝛽$ the time discount parameter (with~$𝛽 > 0$) and~$u(∙)$ the utility function defined over terminal wealth. Let~$J(W_{t_0}, X_{t_0}, t_0)$ denote the value of the problem~\eqref{eq:J} at time~$t_0$ \begin{equation} J(W_{t_0}, X_{t_0}, t_0) = \max_{𝛼_{t_0}} \quad \mathrm{E}_{t_0} \, ⅇ^{-𝛽 T} u(W_T) \label{eq:J-value} \end{equation} The Bellman equation generalizes this problem to every time~$t$ so that \begin{equation} J(W_t, X_t, t) = \max_{𝛼_t} ~ \mathrm{E}_t \, J(W_t{+}ⅆ W_t, X_t{+}ⅆ X_t, t{+}ⅆ t) \label{eq:Bellman_0} \end{equation} Equation~\eqref{eq:Bellman_0} emphasizes the fact that current optimal decisions depend on the conditional expected value of the problem which, in turn, is intimately linked to future wealth and the state variable.
Applying Itô's lemma to the Bellman equation, we find that \begin{align} 0 = \max_{𝛼_t} ~ \bigg[ \frac{∂J}{∂W_t} (𝛼_t 𝜎 X_t + r) W_t &{} + \frac{∂J}{∂t} + \frac{∂J}{∂X_t} 𝜅 (𝜃 - X_t) + {} \nonumber \\ & \frac{1}{2} \frac{∂^2 J}{∂^2 W_t} 𝜎^2 𝛼^2_t W^2_t + \frac{1}{2} \frac{∂^2 J}{∂^2 X_t} 𝜁^2 + \frac{∂^2J}{∂W_t ∂X_t} 𝜎 𝛼_t 𝜁 𝜌 W_t \bigg] \label{eq:Bellman-Taylor} \end{align} The first order condition of equation~\eqref{eq:Bellman-Taylor} with respect to~$𝛼_t$ implies that \begin{equation} 𝛼^\star_t = - \frac{∂J / ∂ W_t}{∂^2J / ∂^2W_t} \frac{1}{W_t} \frac{X_t}{𝜎} - \frac{∂^2 J / (∂W_t ∂X_t)}{∂^2J / ∂^2W_t} \frac{1}{W_t} \frac{𝜁}{𝜎} 𝜌 \label{Eq:alpha_op} \end{equation} \citet{Merton-1971} was the first to propose such an additive decomposition of the optimal allocation to stocks into a \emph{myopic demand} (first term) and a \emph{hedging demand} (second term). There is no hedging demand when the opportunity set is nonstochastic ($𝜁 = 0$) or when the opportunity set is uncorrelated with asset returns ($𝜌 = 0$). Now, we need to explicitly define the function~$J(∙)$. The first conjecture (see \citet{Kim-Omberg-1996}) is to assume \begin{equation} J(W_t, X_t, t) = ⅇ^{-𝛽 t} u(W_t) \, [f(X_t, t)]^𝛾 \end{equation} where~$f(∙)$ is an auxiliary function with the terminal condition~$f(X_T, T) = 1$. We consider CRRA preferences~$u(W_t) = W_t^{1-𝛾}/(1{-}𝛾)$ where~$𝛾$ is the coefficient of relative risk aversion.
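For clarity, we spell out the step that links the first-order condition to this conjecture: with $u(W_t) = W_t^{1-𝛾}/(1{-}𝛾)$ and $J = ⅇ^{-𝛽 t} u(W_t)\,[f(X_t, t)]^𝛾$, direct differentiation gives

```latex
\begin{equation*}
- \frac{∂J / ∂W_t}{∂^2J / ∂^2W_t} \frac{1}{W_t} = \frac{1}{𝛾},
\qquad
- \frac{∂^2 J / (∂W_t ∂X_t)}{∂^2J / ∂^2W_t} \frac{1}{W_t}
  = \frac{∂ \ln f}{∂ X_t},
\end{equation*}
```

so the myopic demand collapses to $X_t/(𝛾𝜎)$ and the hedging demand to $(∂ \ln f/∂ X_t)\,𝜁𝜌/𝜎$.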
Thus, the hedging demand in~\eqref{Eq:alpha_op} can straightforwardly be expressed as \[ \frac{∂ f / ∂ X_t}{f} \frac{𝜁}{𝜎} 𝜌 = \frac{∂ \ln f}{∂ X_t} \frac{𝜁}{𝜎} 𝜌 \] Then, under the CRRA hypothesis, the optimal allocation to stocks can be expressed as \begin{equation} 𝛼^\star_t = \frac{1}{𝛾} \frac{X_t}{𝜎} + \frac{∂ \ln f}{∂ X_t} \frac{𝜁}{𝜎} 𝜌 \label{eq:alpha-star} \end{equation} So, the Bellman equation~\eqref{eq:Bellman-Taylor} can be rewritten as \begin{align} 0 = &\frac{f'_t}{f} + \frac{1{-}𝛾}{𝛾} r - \frac{𝛽}{𝛾} + \frac{1{-}𝛾}{2\,𝛾^2} X_t^2 + \frac{f'_x}{f} \frac{1{-}𝛾}{𝛾} 𝜁 X_t 𝜌 + \frac{f'_x}{f} 𝜅 (𝜃 - X_t) + {} \nonumber \\ &\frac{f''_{xx}}{f} \frac{𝜁^2}{2} + \left( \frac{f'_x}{f} \right)^2 \frac{1{-}𝛾}{2} 𝜁^2 (𝜌^2-1) \label{eq:Bellman-Taylor-2} \end{align} where we use shorthand notation for the derivatives of the function~$f(∙)$. Equation~\eqref{eq:Bellman-Taylor-2} is a partial differential equation which admits analytical solutions when utility is logarithmic ($𝛾 = 1$, by l'Hôpital's rule) or when markets are complete ($𝜌 = \pm 1$). The second conjecture is to assume \begin{equation} f(X_t, t) = \exp \left[ C_0(t) + C_1(t) \, X_t + \frac{1}{2} C_2(t) \, X^2_t \right] \end{equation} where~$C_0(t)$, $C_1(t)$ and~$C_2(t)$ are undetermined time-varying coefficients (with~$C_0(T) = C_1(T) = C_2(T) = 0$). Under this conjecture, using equation~\eqref{eq:alpha-star}, the optimal allocation to stocks is \begin{equation} 𝛼^\star_t = \frac{1}{𝛾} \frac{X_t}{𝜎} + [C_1(t) + C_2(t) \, X_t] \frac{𝜁}{𝜎} 𝜌 \label{eq:alpha_C} \end{equation} We only need to recover the coefficients~$C_1(t)$ and~$C_2(t)$, since~$C_0(t)$ does not enter~\eqref{eq:alpha_C}. This conjecture was also used by~\citet{Kim-Omberg-1996} and by~\citet{Liu-2007} among others.
More recently, \citet{Honda-Kamimura-2011} show that the explicit solution derived from the Bellman equation is in fact, even if the markets are incomplete, an optimal solution to the problem of the long-term investor who cares only about terminal wealth and whose risk aversion is larger than unity. Let us substitute our second conjecture into equation~\eqref{eq:Bellman-Taylor-2} \begin{align} 0 = \left[ \frac{ⅆ C_2}{ⅆ t} + a\,C_2^2 + b\,C_2 + c \right] X^2_t & + {} \left[ \frac{ⅆ C_1}{ⅆ t} + \frac{b}{2} C_1 + 𝜅 𝜃 C_2 + a \, C_1 C_2 \right] X_t + {} \nonumber \\ & \left[ \frac{ⅆ C_0}{ⅆ t} + \frac{1{-}𝛾}{𝛾} r - \frac{𝛽}{𝛾} + 𝜅 𝜃 C_1 + \frac{𝜁^2}{2} C_2 + \frac{a}{2} C_1^2 \right] \label{eq:system_0} \end{align} where~$a = [1 + (1{-}𝛾)(𝜌^2{-}1)] \, 𝜁^2$, $b = 2\,[(1{-}𝛾)\, 𝜁 𝜌 / 𝛾 - 𝜅]$ and~$c = (1{-}𝛾)/𝛾^2$. Since equation~\eqref{eq:system_0} must hold whatever the value of~$X_t$, all terms within brackets are simultaneously set to zero, which yields~$C_0(∙)$, $C_1(∙)$, and~$C_2(∙)$. Defining the discriminant~$D$ \[ D = b^2 - 4\,a\,c \] one can check that if~$𝛾 > 1$ then~$D > 0$. Thus, the two solutions of interest are given by \begin{align} \label{Eq:C_2} C_2(t) & = \frac{2\,c \left(1-ⅇ^{-𝛿(T-t)}\right)} {2𝛿-\left(b+𝛿\right)\left(1-ⅇ^{-𝛿(T-t)}\right)} \\ \label{Eq:C_1} C_1(t) & = \frac{4\,c\,𝜅𝜃}{𝛿}\,\frac{\left(1-ⅇ^{-𝛿(T-t)/2}\right)^2}{2𝛿-\left(b+𝛿\right)\left(1-ⅇ^{-𝛿(T-t)}\right)} \end{align} where~$𝛿$ denotes~$\sqrt D$. \citet{Kim-Omberg-1996} called this the normal solution and discussed alternative solutions that are beyond the scope of this paper. The appendix provides details about~\eqref{Eq:C_2} and~\eqref{Eq:C_1}. It is easy to see that~$C_1(t)$ is proportional to~$C_2(t)$, with a positive time-dependent factor. Then, for~$𝛾 > 1$, $C_1$ and~$C_2$ are always negative. As a result, since~$𝜌 < 0$, the hedging demand is always positive when the preferences are not logarithmic (more precisely for~$𝛾 > 1$) and the market price of risk is positive.
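To make these formulas concrete, the following Python sketch (our own illustration; parameter values are the rounded calibration of table~\ref{tab:Recovering}) evaluates \eqref{Eq:C_2}--\eqref{Eq:C_1} and the two components of \eqref{eq:alpha_C} for $𝛾 = 5$, $T = 10$ quarters and $X_{t_0} = 𝜃$:

```python
import math

# Calibrated continuous-time parameters (table "Recovering continuous-time parameters")
theta, kappa, zeta, rho = 0.111, 0.0429, 0.0542, -0.941
sigma = math.sqrt(0.0060)        # sigma = sigma_r
gamma, T = 5.0, 10.0             # risk aversion and horizon (in quarters)

# Coefficients of the quadratic equation for C2; beta does not enter C1 or C2
a = (1 + (1 - gamma) * (rho**2 - 1)) * zeta**2
b = 2 * ((1 - gamma) / gamma * zeta * rho - kappa)
c = (1 - gamma) / gamma**2
delta = math.sqrt(b**2 - 4 * a * c)   # D > 0 since gamma > 1

def C2(t, T):
    e = 1 - math.exp(-delta * (T - t))
    return 2 * c * e / (2 * delta - (b + delta) * e)

def C1(t, T):
    e = 1 - math.exp(-delta * (T - t))
    e2 = (1 - math.exp(-delta * (T - t) / 2)) ** 2
    return 4 * c * kappa * theta / delta * e2 / (2 * delta - (b + delta) * e)

X = theta                                                  # median Sharpe ratio
myopic = X / (gamma * sigma)                               # first term of eq. (alpha_C)
hedging = (C1(0, T) + C2(0, T) * X) * zeta * rho / sigma   # second term
```

Both numbers land, up to rounding of the calibrated inputs, on the corresponding entries of table~\ref{tab:Myopic-hedging} (\num{28.6}\% and \num{13.5}\% for $T = 10$, $𝛾 = 5$, $X_{(50)}$).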
\section{Numerical results}
\begin{figure}
\centering
% The original TikZ plotting code is corrupted in the source and is replaced by
% this placeholder. The figure plots the components of the optimal allocation
% ($𝛼^\star$) against risk aversion ($𝛾$, from 0 to 21): the dashed curve is the
% myopic demand and the solid curves are the hedging demands for horizons
% $T = 10, 20, 30, 40$ quarters.
\caption{Myopic (dashed line) and hedging (solid line) demands as a function of risk aversion for $X_{t_0} = 𝜃$}
\label{fig:Myopic-hedging}
\end{figure}
\begin{table} \centering \small \tabcolsep1\tabcolsep \newbox{\abox}\savebox{\abox}{%
\begin{tabular}{>{\bfseries} c >{\bfseries} c *5r @{\quad\quad} *5r} \toprule & & \multicolumn{5}{l}{$𝛾=5$} & \multicolumn{5}{l}{$𝛾=15$} \tabularnewline \cmidrule(r){3-7}\cmidrule(l){8-12} $T$ && \multicolumn{1}{c}{$X_{(10)}$}& \multicolumn{1}{c}{$X_{(30)}$}& \multicolumn{1}{c}{$X_{(50)}$}& \multicolumn{1}{c}{$X_{(70)}$}& \multicolumn{1}{c}{$X_{(90)}$}& \multicolumn{1}{c}{$X_{(10)}$}& \multicolumn{1}{c}{$X_{(30)}$}& \multicolumn{1}{c}{$X_{(50)}$}& \multicolumn{1}{c}{$X_{(70)}$}& \multicolumn{1}{c}{$X_{(90)}$}\tabularnewline \midrule 10 & MD & \num{-34.0} & \num{3.0} & \num{28.6} & \num{54.2} & \num{91.1} & \num{-11.3} & \num{1.0} & \num{9.5} & \num{18.1} & \num{30.4}
\tabularnewline & HD & \num{-10.9} & \num{3.5} & \num{13.5} & \num{23.6} & \num{38.0} & \num{-4.6} & \num{1.5} & \num{5.7} & \num{9.9} & \num{15.9} \tabularnewline \midrule 20 & MD & \num{-34.0} & \num{3.0} & \num{28.6} & \num{54.2} & \num{91.1} & \num{-11.3} & \num{1.0} & \num{9.5} & \num{18.1} & \num{30.4} \tabularnewline & HD & \num{-15.9} & \num{10.8} & \num{29.2} & \num{47.7} & \num{74.3} & \num{-7.2} & \num{4.9} & \num{13.3} & \num{21.6} & \num{33.7} \tabularnewline \midrule 30 & MD & \num{-34.0} & \num{3.0} & \num{28.6} & \num{54.2} & \num{91.1} & \num{-11.3} & \num{1.0} & \num{9.5} & \num{18.1} & \num{30.4} \tabularnewline & HD & \num{-16.0} & \num{19.8} & \num{44.7} & \num{69.5} & \num{105.3} & \num{-7.7} & \num{9.8} & \num{21.9} & \num{34.1} & \num{51.6} \tabularnewline \midrule 40 & MD & \num{-34.0} & \num{3.0} & \num{28.6} & \num{54.2} & \num{91.1} & \num{-11.3} & \num{1.0} & \num{9.5} & \num{18.1} & \num{30.4} \tabularnewline & HD & \num{-13.2} & \num{29.1} & \num{58.3} & \num{87.6} & \num{129.8} & \num{-6.5} & \num{15.5} & \num{30.7} & \num{46.0} & \num{68.0} \tabularnewline \bottomrule \end{tabular}%
} \centering \begin{minipage}{\widthof{\usebox{\abox}}-\widthof{~~}} {\centering \makebox[0pt]{\usebox{\abox}}\par } \footnotesize For each risk aversion~$𝛾$, the first line reports the myopic demand (\textbf{MD}) and the second line the hedging demand (\textbf{HD}) without short-selling constraints. We present the results for 5 different initial values of the Sharpe ratio~$X$. Each value corresponds to the~$p$-th percentile of the unconditional distribution of~$X$, defined by equation~\eqref{eq:Unc-dist} and denoted by~$X_{(p)}$, where~$p$ takes values 10, 30, 50, 70, and 90 (then~$X_{(50)} = 𝜃$). \end{minipage} \caption{Myopic and hedging demands for investment horizon of $T$ quarters (\%)} \label{tab:Myopic-hedging} \end{table} As mentioned above, we illustrate our approach using the well-documented \citet{Brandt-al.-2005} example.
Table~\ref{tab:Recovering} collects the continuous-time parameters recovered from this example. For comparison purposes, we also use the \citet{Garlappi-Skoulakis-2009} results, obtained from the same discrete-time VAR(1) estimates by means of a sophisticated numerical method. Figure~\ref{fig:Myopic-hedging} and table~\ref{tab:Myopic-hedging} help to understand the long-term investor problem. For~$𝛾 = 1$, \emph{i.e.}\ the case of logarithmic utility, no hedging demand is required: the dynamic portfolio choice reduces to a static one whatever the time horizon. Otherwise, for~$𝛾 > 1$ and horizons longer than one period, under CRRA preferences and mean-reverting returns, the agent holds a positive hedging demand to guard against adverse changes in investment opportunities \citep{Merton-1971}. However, for~$𝛾 \to \infty$, that is for a very conservative agent, stocks are not attractive: the total demand (sum of myopic and hedging demands) converges toward zero. Our results recover all these basic features. The total demand is sensitive to risk aversion. Results from previous studies imply that myopic and hedging demands are most sensitive at small values of risk aversion. We confirm this and argue that the sensitivity of the hedging demand to the state variable is maximal near the critical point~$𝛾 \approx 2$. Our equation~\eqref{eq:alpha_C} and figure~\ref{fig:Myopic-hedging} show this; to see it quantitatively, evaluate the derivative of~$𝛼^\star$ with respect to~$X$. Table~\ref{tab:Myopic-hedging} shows that both the myopic and the hedging demand are sensitive to the initial value of the Sharpe ratio. These two components of the optimal allocation individually increase with the percentile of the unconditional distribution of the Sharpe ratio, and thus the total demand exhibits the same behavior. This is consistent with \citet{Campbell-al.-2004} among others.
In fact, a high Sharpe ratio, or equivalently a high risk premium relative to volatility, signals better investment opportunities. Therefore, the optimal fraction allocated to stocks should increase with the expected Sharpe ratio, which the mean-reversion parameter serves to quantify. The myopic demand is independent of the time horizon, while the hedging demand increases nonlinearly with it. The quantitative figures in table~\ref{tab:Myopic-hedging} suggest, however, that this relation is almost linear, and that small changes in horizon induce substantial changes in hedging demand. The horizon effect is important but quite monotonic for a given percentile of the unconditional distribution of the state variable. All changes in total demand for fixed risk aversion and state variable are due to changes in horizon, and they are large for small risk aversion. The horizon effect on the hedging demand matters for the optimal allocation because the hedging demand widely dominates at longer horizons. In fact, when the horizon exceeds 20 quarters, the hedging demand is always greater than the myopic demand when the initial value of the Sharpe ratio lies between the 30th and 70th percentiles.
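The horizon effect just described can be checked directly from the closed-form coefficients (again a sketch in our own notation, with the rounded calibration of table~\ref{tab:Recovering}): at $X_{t_0} = 𝜃$ the myopic demand $𝜃/(𝛾𝜎)$ is constant in $T$, while the hedging demand rises with the horizon.

```python
import math

# Rounded calibration (table "Recovering continuous-time parameters")
theta, kappa, zeta, rho = 0.111, 0.0429, 0.0542, -0.941
sigma, gamma = math.sqrt(0.0060), 5.0

a = (1 + (1 - gamma) * (rho**2 - 1)) * zeta**2
b = 2 * ((1 - gamma) / gamma * zeta * rho - kappa)
c = (1 - gamma) / gamma**2
delta = math.sqrt(b**2 - 4 * a * c)

def hedging_demand(T, X):
    """Hedging component of eq. (alpha_C) at t = 0 for horizon T (quarters)."""
    e = 1 - math.exp(-delta * T)
    denom = 2 * delta - (b + delta) * e
    c2 = 2 * c * e / denom
    c1 = 4 * c * kappa * theta / delta * (1 - math.exp(-delta * T / 2))**2 / denom
    return (c1 + c2 * X) * zeta * rho / sigma

# Increases with the horizon, cf. the HD lines of the table above
hd = [hedging_demand(T, theta) for T in (10, 20, 30, 40)]
```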
\begin{table} \centering \small \tabcolsep1\tabcolsep \newbox{\abox}\savebox{\abox}{% \begin{tabular}{>{\bfseries} c >{\bfseries} c *5r @{\quad\quad} *5r} \toprule & & \multicolumn{5}{l}{$𝛾=5$} & \multicolumn{5}{l}{$𝛾=15$} \tabularnewline \cmidrule(r){3-7}\cmidrule(l){8-12} $T$ && \multicolumn{1}{c}{$X_{(10)}$}& \multicolumn{1}{c}{$X_{(30)}$}& \multicolumn{1}{c}{$X_{(50)}$}& \multicolumn{1}{c}{$X_{(70)}$}& \multicolumn{1}{c}{$X_{(90)}$}& \multicolumn{1}{c}{$X_{(10)}$}& \multicolumn{1}{c}{$X_{(30)}$}& \multicolumn{1}{c}{$X_{(50)}$}& \multicolumn{1}{c}{$X_{(70)}$}& \multicolumn{1}{c}{$X_{(90)}$}\tabularnewline \midrule 10 & LT & \num{0.0} & \num{6.5} & \num{42.1} & \num{77.7} & \num{100.0} & \num{0.0} & \num{2.5} & \num{15.2} & \num{27.9} & \num{46.3} \tabularnewline & GS & \num{0.0} & \num{13.3} & \num{43.2} & \num{73.1} & \num{100.0} & \num{0.0} & \num{4.3} & \num{15.4} & \num{27.0} & \num{44.7} \tabularnewline & $𝛥$ & \num{0.0} & \num{-6.8} & \num{-1.1} & \num{4.6} & \num{0.0} & \num{0.0} & \num{-1.8} & \num{-0.2} & \num{0.9} & \num{1.6} \tabularnewline \midrule 20 & LT & \num{0.0} & \num{13.7} & \num{57.8} & \num{100.0} & \num{100.0} & \num{0.0} & \num{5.9} & \num{22.8} & \num{39.7} & \num{64.1} \tabularnewline & GS & \num{0.0} & \num{24.4} & \num{57.2} & \num{89.7} & \num{100.0} & \num{0.0} & \num{10.7} & \num{25.1} & \num{40.4} & \num{63.2} \tabularnewline & $𝛥$ & \num{0.0} & \num{-10.7} & \num{0.6} & \num{10.3} & \num{0.0} & \num{0.0} & \num{-4.8} & \num{-2.3} & \num{-0.7} & \num{0.9} \tabularnewline \midrule 30 & LT & \num{0.0} & \num{22.8} & \num{73.2} & \num{100.0} & \num{100.0} & \num{0.0} & \num{10.8} & \num{31.5} & \num{52.1} & \num{81.9} \tabularnewline & GS & \num{0.0} & \num{32.8} & \num{68.4} & \num{100.0} & \num{100.0} & \num{0.0} & \num{17.5} & \num{35.2} & \num{54.0} & \num{80.7} \tabularnewline & $𝛥$ & \num{0.0} & \num{-10.0} & \num{4.8} & \num{0.0} & \num{0.0} & \num{0.0} & \num{-6.7} & \num{-3.7} & \num{-1.9} & \num{1.2} \tabularnewline 
\midrule 40 & LT & \num{0.0} & \num{32.0} & \num{86.9} & \num{100.0} & \num{100.0} & \num{0.0} & \num{16.5} & \num{40.2} & \num{64.0} & \num{98.3} \tabularnewline & GS & \num{0.0} & \num{38.8} & \num{77.6} & \num{100.0} & \num{100.0} & \num{0.0} & \num{24.1} & \num{44.5} & \num{65.7} & \num{94.6} \tabularnewline & $𝛥$ & \num{0.0} & \num{-6.8} & \num{9.3} & \num{0.0} & \num{0.0} & \num{0.0} & \num{-7.6} & \num{-4.3} & \num{-1.7} & \num{3.7} \tabularnewline \bottomrule \end{tabular}% } \centering \begin{minipage}{\widthof{\usebox{\abox}}-\widthof{~~}} {\centering \makebox[0pt]{\usebox{\abox}}\par } \footnotesize For each risk aversion~$𝛾$, the first line reports our results (\textbf{LT} – optimal allocation to stocks in continuous time), the second line the \citet{Garlappi-Skoulakis-2009} results (\textbf{GS} – optimal allocation to stocks in discrete time), and the third line reports the difference between our results and \citet{Garlappi-Skoulakis-2009} results. We present the results for 5 different initial values of the Sharpe ratio~$X$ calibrated using the same estimates involving dividend price ratio as in \textbf{GS}. Each value corresponds to the~$p$-th percentile of the unconditional distribution of~$X$, defined by equation~\eqref{eq:Unc-dist} and denoted by~$X_{(p)}$, where~$p$ takes values 10, 30, 50, 70, and 90. 
\end{minipage} \caption{Optimal allocation to stocks for investment horizon of $T$ quarters (\%)} \label{tab:Optimal} \end{table}
\begin{figure}
\centering
% The original TikZ plotting code is corrupted in the source and is replaced by
% this placeholder. Each panel plots the optimal allocation to stocks (\%)
% against the horizon (0 to 40 quarters) for the five initial values
% $X_{t_0} = X_{(10)}, X_{(30)}, X_{(50)}, X_{(70)}, X_{(90)}$.
\caption{Optimal allocation to stocks as a function of the horizon for~$𝛾=5$ (first panel) and for~$𝛾=15$ (second panel) for 5 different initial values of the Sharpe ratio~$X$ (as in table~\ref{tab:Myopic-hedging} or~\ref{tab:Optimal})}
\label{fig:Optimal}
\end{figure}
We finally impose the common no-borrowing and short-sale constraints.
Thus, in table~\ref{tab:Optimal}, we restrict all portfolio weights to lie between~0 and~1. One can notice that we generally obtain values fairly close to those of~\citet{Garlappi-Skoulakis-2009}, although the frameworks are not the same. \citet{Garlappi-Skoulakis-2009} worked in discrete time, with the initial values of their state variable drawn from the unconditional distribution of the quarterly dividend price ratio, and used a sophisticated numerical optimization technique. We work in continuous time (no numerical optimization), and our initial values are computed from the unconditional distribution of the continuous-time Sharpe ratio, recovered from the same quarterly dividend price ratio estimates. However, a closer inspection of the figures in table~\ref{tab:Optimal} shows that the optimal allocation to stocks is more sensitive to the state variable and to the time horizon than that obtained by~\citet{Garlappi-Skoulakis-2009}. We ran numerical simulations within the discrete-time framework to find the causes of the discrepancies between the two frameworks; we were unable to invalidate our results, either qualitatively or quantitatively.
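As a sanity check on this comparison (our own computation, with the rounded calibration of table~\ref{tab:Recovering}), the LT entries of table~\ref{tab:Optimal} appear consistent with simply clipping the unconstrained total demand to $[0, 1]$; for example, for $𝛾 = 5$, $T = 10$ and $X_{t_0} = X_{(50)} = 𝜃$ the clipped total lands close to the LT entry (\num{42.1}):

```python
import math

theta, kappa, zeta, rho = 0.111, 0.0429, 0.0542, -0.941
sigma, gamma, T = math.sqrt(0.0060), 5.0, 10.0

a = (1 + (1 - gamma) * (rho**2 - 1)) * zeta**2
b = 2 * ((1 - gamma) / gamma * zeta * rho - kappa)
c = (1 - gamma) / gamma**2
delta = math.sqrt(b**2 - 4 * a * c)

e = 1 - math.exp(-delta * T)
denom = 2 * delta - (b + delta) * e
c2 = 2 * c * e / denom
c1 = 4 * c * kappa * theta / delta * (1 - math.exp(-delta * T / 2))**2 / denom

X = theta
total = X / (gamma * sigma) + (c1 + c2 * X) * zeta * rho / sigma
constrained = min(max(total, 0.0), 1.0)   # no-borrowing / no-short-sale clip
```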
\begin{figure}
\centering
% The original TikZ plotting code is corrupted in the source and is replaced by
% this placeholder. The figure plots the allocation to stocks over 10 quarters:
% the explicit-solution path (solid) rises smoothly from about \num{.065} to
% \num{.119}, while the simulated grid-based path (dashed) steps from \num{.05}
% up to \num{.15}.
\caption{Path of optimal allocation to stocks for~$𝛾=5$, $X_0 = X_{(30)}$, and~$T = 10$ obtained by the explicit solution (solid line) and by simulation over the trial grid~$\{ \num{.05}, \num{.10}, \num{.15}, \num{.20}, \num{.25} \}$ (dashed line)}
\label{fig:simulation}
\end{figure}
To test our results, we run forward simulations in discrete time.
More precisely, we explore the case where the initial value of the Sharpe ratio is the 30th percentile ($X_0 = X_{(30)}$), the relative risk aversion is equal to~$5$ ($𝛾 = 5$), and the planning horizon is equal to~$10$ quarters ($T = 10$). In this configuration we get an initial optimal allocation to stocks of~$\num{.065}$, whereas \citet{Garlappi-Skoulakis-2009} obtain twice as much ($\num{.133}$, see table~\ref{tab:Optimal}); the discrepancy is large. Thus, we first build a sample of size \num{100000} for~$z_{t+1}$, $z_{t+2}$, …, and~$z_{t+10}$ and for~$𝛥 \ln P_{t+1}$, $𝛥 \ln P_{t+2}$, …, and~$𝛥 \ln P_{t+10}$ using the restricted VAR(1) eqs~\eqref{eq:disc1}--\eqref{eq:disc2}. We choose the grid~$G = \{ \num{.05}, \num{.10}, \num{.15}, \num{.20}, \num{.25} \}$ of trial allocations to stocks, which overlays both our solution and that of \citet{Garlappi-Skoulakis-2009}. Then, for each path in the sample, terminal wealth is computed for every strategy in the Cartesian product~$G \times G \times ⋯ \times G$. The computational burden is very high, as we evaluate~$5^{10} = \num{9765625}$ strategies. Figure~\ref{fig:simulation} shows that the forward path in discrete time (no numerical optimization) is close to the path of our explicit solution, particularly at the critical starting point, the 30th percentile of the state variable with small risk aversion ($𝛾 = 5$). \section*{Conclusion} We examine the ``continuous-time detour'' for solving the long-term investor problem when stock returns are predictable. We obtain an explicit optimal solution in the continuous-time world and, after recovering the continuous-time parameters from the discrete-time estimates, we use this solution to assess the sensitivities of the optimal allocation to the initial value of the state variable, to risk aversion and to the time horizon. We find greater sensitivities than those reported in the literature.
We also find that the sensitivity of the total demand to the state variable is not uniform along the unconditional distribution of the state variable. Previous numerical approximation techniques for the problem we consider are subject to numerical errors and therefore do not always provide accurate results. We show that the hedging demand dominates the allocation at longer horizons and is very sensitive to the state variable, especially when risk aversion decreases and/or the time horizon increases. This finding could explain the low accuracy of discrete numerical methods, especially along the tails of the unconditional distribution of the state variable.
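The brute-force check described above (simulate VAR(1) paths, then evaluate terminal wealth for every plan on the allocation grid) can be sketched as follows. This is a toy illustration only: the VAR(1) coefficients, the short horizon, and the sample size below are hypothetical placeholders, not the paper's estimates, which use $T=10$ quarters and \num{100000} paths.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical placeholder parameters (NOT the paper's estimates):
a, b = 0.0, 0.9          # z_{t+1} = a + b z_t + eps_z
c, d = 0.01, 0.05        # dlnP_{t+1} = c + d z_t + eps_p (excess log return)
sig_z, sig_p, rho = 0.10, 0.08, -0.7
rf = 0.005               # per-quarter risk-free log return
gamma = 5                # relative risk aversion
T = 3                    # toy horizon: 5**3 = 125 plans instead of 5**10
n_paths = 2000           # toy sample

cov = np.array([[sig_z**2, rho * sig_z * sig_p],
                [rho * sig_z * sig_p, sig_p**2]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=(n_paths, T))

# Simulate the restricted VAR(1) forward in time
z = np.zeros((n_paths, T + 1))
dlnP = np.zeros((n_paths, T))
for t in range(T):
    z[:, t + 1] = a + b * z[:, t] + eps[:, t, 0]
    dlnP[:, t] = c + d * z[:, t] + eps[:, t, 1]

# Evaluate terminal wealth for every plan in the cartesian product G x ... x G
grid = (0.05, 0.10, 0.15, 0.20, 0.25)
best_u, best_plan = -np.inf, None
for plan in product(grid, repeat=T):
    w = np.ones(n_paths)                      # wealth along each path
    for t, alpha in enumerate(plan):
        w *= (1 - alpha) * np.exp(rf) + alpha * np.exp(rf + dlnP[:, t])
    u = np.mean(w**(1 - gamma)) / (1 - gamma)  # CRRA expected utility
    if u > best_u:
        best_u, best_plan = u, plan

print("best grid plan:", best_plan)
```

The nested loop makes the exponential cost explicit: each additional quarter multiplies the number of plans by the grid size.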
\section{Introduction} Advanced-generation interferometric gravitational wave detectors, such as Advanced LIGO \cite{2015advancedligo}, Advanced Virgo \cite{virgo2009advanced} and KAGRA \cite{somiya2012detector}, are currently being commissioned. Their sensitivity is expected to surpass that achieved by first-generation instruments by almost an order of magnitude in the high-frequency region. To achieve this, very high circulating power levels (0.5--1 MW) will be stored within the Fabry-Perot arm cavities. At these power levels, even low levels of optical absorption can lead to significant thermoelastic distortion of optical surfaces and unacceptable levels of wavefront distortion \cite{lawrence2003active}, resulting in reduced circulating power and a reduction in the efficiency of the detector signal readout. Thermally actuated compensation systems will thus be used to ameliorate the wavefront distortion. However, the thermal time constants for the absorption-induced distortion and the compensation are long, typically 12 hours, and thus incorporating predictive modeling into the control systems may prove essential. The response of a linear elastic system to heating is described by the theory of thermo-elasticity, and its applications to highly symmetric, idealized systems are described in many books (see \cite{boley2012theory} for example). It has also been used to develop analytic expressions for less idealized optical systems \cite{hello1990analytical,lawrence2003active}. The expressions developed by Hello and Vinet \cite{hello1990analytical} are relevant to the work described here, but apply only to cylindrical isotropic mirrors heated by coaxial laser beams. More complicated systems, which incorporate asymmetric heating or anisotropic elasticity, can be investigated using finite-element numerical models that apply the equations of thermo-elasticity on a three-dimensional spatial mesh.
For dynamic systems, the thermoelastic equations must be solved at each epoch, requiring computational times that can run to many days. This approach would be untenable for predictive feed-forward control systems. In such cases, the scalar problem of determining the temperature profile throughout the optic can be solved rapidly; the time-consuming part is solving the tensor-based elasticity problem to convert the thermal profile into an elastic distortion. The Betti-Maxwell theorem of elastodynamic reciprocity \cite{achenbach2006reciprocity} provides an alternative to using finite-element methods (FEM) to solve the tensor part of the thermoelastic distortion. It has previously been used to investigate the excitation of Rayleigh-Lamb elastic waves in a metal plate due to heating produced by a line-focused pulsed laser beam, assuming that the heating is confined to the surface of the plate and has infinite lateral extent \cite{achenbach2005thermoelasticity,achenbach2007application}. In the context of gravitational wave detection, it has been used to compute the interferometer's response to creep events in the fibers that suspend the optics \cite{levin2012creep}. We extend its use to predict the thermoelastic distortion of an optic of finite size with asymmetric heating. We describe here how elastodynamic reciprocity and FEM can be combined to provide accurate predictions of thermoelastic surface distortion more quickly than using FEM alone. In summary, FEM is used to determine the response of the optic to a set of orthonormal tractions, or pressures \textemdash\ a computationally expensive calculation that is performed once per optic. Then, using reciprocity, the distortion due to the instantaneous temperature profile in the optic is calculated as a sum of scalar volume integrals that incorporate these responses. The computational cost of this step is much less than that of a full elastostatic FEM evaluation.
Additionally, it is amenable to parallelization, which would further reduce the computational time. The layout of the rest of the paper is as follows: in Section II we introduce the Betti-Maxwell theorem of elastodynamics and show how it can be used to determine the surface distortion by careful choice of a suitable `auxiliary' elastic system. We demonstrate its application by calculating the distortion of the end face of a cylindrical optic that is heated by a Gaussian heat flux that is (a) coaxial with and (b) laterally displaced from the axis. The approach and model are described in Sections III and IV. Finally, the resulting surface distortions are presented in Section V and compared with the results of elastostatic FEM calculations. Computation times for these two approaches are compared in Section VI. \section{Elastodynamic Reciprocity and Thermal Distortion} The Betti-Maxwell reciprocity theorem for elastodynamics \cite{achenbach2006reciprocity,achenbach2007application} specifies the relationship between the displacement \(\vec{u}(\vec{r},t)\) that results from an applied surface traction \(\vec{t}(\vec{r},t)\) and internal body force \(\vec{f}(\vec{r},t)\) for two elastic states of a linear elastic body: \begin{multline} \int_{S}{(t_i^2u_i^1-t_i^1u_i^2)}dS \\ =\int_V\Big[(f_i^1-\rho\ddot{u}_i^1)u_i^2-(f_i^2-\rho\ddot{u}_i^2)u_i^1 \Big]dV \label{equation1} \end{multline} where \(\rho\) is the density, \(\ddot{u}\) is the acceleration, the superscripts \(1\) and \(2\) represent the two states, and the Einstein summation convention is used. If \(t_i(\vec{r},t)=t_i(\vec{r})e^{i\omega t}\) and \(f_i(\vec{r},t)=f_i(\vec{r})e^{i\omega t}\) then \(u_i(\vec{r},t)=u_i(\vec{r})e^{i\omega t}\), and thus Eq.
(\ref{equation1}) becomes \begin{multline} \int_S \Big( t^2_i(\vec{r}) u^1_i(\vec{r})-t^1_i(\vec{r}) u^2_i(\vec{r})\Big) dS \\ = \int_V \Big(f^1_i(\vec{r}) u^2_i(\vec{r})-f^2_i(\vec{r}) u^1_i(\vec{r})\Big)dV \label{equation2} \end{multline} We shall use this theorem to determine the surface displacement (distortion) due to heating of an optic by, for example, partial absorption of an incident laser beam. For the first state, which we shall refer to as the ``thermal state'' and label \(T\), we assume that the optic is free, so \(t_i^T=0\), and that there is a non-zero body force due to the heating. Since we are interested in the distortion of the end face of the optic, we choose the second state, which is often referred to as the ``auxiliary state'' and which we shall label \(A\), to have a traction \(t_z^A\) applied to the end face of the optic, and we assume \(f_i^A=0\). Thus, Eq. (\ref{equation2}) becomes \begin{multline} \int_S t^A_z(\vec{r}) u^T_z(\vec{r})dS \\ = \int_V f^T_i(\vec{r}) u^A_i(\vec{r})dV = \int_V \sigma^T_{ij}(\vec{r}) \varepsilon^A_{ij}(\vec{r})dV \label{equation3} \end{multline} where \(\varepsilon_{ij}^A(\vec{r})\) is the internal strain produced by the traction \(t_z^A(\vec{r})\), and \(\sigma_{ij}^T(\vec{r})\) is the internal stress associated with the body force: \(f_i^T=-\frac{\partial \sigma_{ij}^T}{\partial x_j}\). Consider now applying time-harmonic tractions with amplitude $t_z^A(\vec{r}_s)=\chi_n(\vec{r}_s)$, $n = 1,2,\ldots$. It is convenient to choose $\chi_n(\vec{r}_s)$ to be orthonormal, so that $\int \chi_n(\vec{r}_s)\chi_m(\vec{r}_s)dS=\delta_{nm}$. Then, expressing the surface displacement amplitude as \begin{equation} u^T_z(\vec{r}) = \sum_{m} a_m\chi_m (\vec{r}) \label{equation4} \end{equation} transforms the left-hand term of Eq. (\ref{equation3}) to \begin{equation} \int_S t^A_z(\vec{r}) u^T_z(\vec{r}) dS = \int_S \chi_n (\vec{r}) \sum_{m} a_m\chi_m (\vec{r}) dS = a_n.
\label{equation5} \end{equation} Therefore \begin{equation} a_n = \int_V \sigma^T_{ij}(\vec{r}) \varepsilon^A_{ij}(\vec{r})dV. \label{equation6} \end{equation} That is, if the amplitude of the elastic response of the optic, \(\varepsilon_{ij}^A(\vec{r})\), to each of the tractions \(\chi_n(\vec{r})\) is known, then the amplitude of the distortion of the end face of the optic, \(u_z^T(\vec{r})\), due to any thermal stress distribution can be calculated using Eqs. (\ref{equation4}) and (\ref{equation6}). We shall use this approach to calculate the surface distortion due to non-uniform heating of a homogeneous isotropic body, for which \begin{equation} \sigma^T_{ij}(\vec{r}) = \frac{-E\alpha}{1-2\nu} \Delta T(\vec{r})\delta_{ij} \label{equation7} \end{equation} where \(E\) is Young's modulus, \(\alpha\) is the coefficient of thermal expansion, \(\nu\) is Poisson's ratio, \(\Delta T=T(\vec{r})-T_0\), and \(T_0\) is the ambient temperature. Eq. (\ref{equation6}) thus becomes \begin{equation} a_n = \frac{-E\alpha}{1-2\nu} \int_V(T(\vec{r})-T_0) \textrm{Tr}\big\{ \varepsilon^A (\vec{r}) \big\} dV \label{equation8} \end{equation} \section{Implementation} To determine the distortion of the end face using reciprocity, one must first characterize the response of the elastic system, \(\varepsilon_{ij}^A(\vec{r})\), to a set of orthonormal basis tractions \(t_z^A(\vec{r},t)=\chi_n(\vec{r}) \exp{[i\omega t]}\), \(n = 1,\ldots,N\), using an elastostatic FEM [11]. Zernike functions would be a tempting choice given our cylindrical geometry, particularly as they are orthogonal to a uniform traction, and thus applying the auxiliary tractions should apply no net force to the system. However, as shown in Section IV, they are not well suited to describing the surface distortion. The orthonormal basis tractions we shall use instead apply a non-zero (instantaneous) net force to the optic, leading to ill-conditioning of the FEM at very low frequencies.
We thus used a traction frequency of \(\omega = 1\) Hz, as the response is independent of frequency for frequencies well below the first resonance (see [12] for example). In all of our numerical tests, we assume a cylindrical fused silica optic with height \(h = 200\) mm, radius \(R = 170\) mm, \(E = 73.1\) GPa, \( \nu = 0.17 \) and \(\alpha = 0.55 \times 10^{-6} \) K\textsuperscript{-1}. A radial cross-section of the optic and the meshing used for the FEM is shown in Fig. \ref{figure1}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.9\columnwidth]{fig1}} \caption{ A radial cross-section of the cylindrical optic, showing the mesh used for the FEM. The mesh consists of \(32000\) nodes and is finest on the heated top surface of the test mass.} \label{figure1} \end{figure} We assume heating of the top face by 1 W of absorbed power with a Gaussian-distributed flux: \begin{equation*} Q(x,y) = \frac{2}{\pi w^2} \exp\big[-2 \big((x-x_0)^2 +(y - y_0)^2\big)/w^2 \big] \end{equation*} where the beam radius is \(w = 53\) mm, and radiative cooling of all surfaces of the optic to surroundings at 293 K. A thermal FEM \cite{COMSOL} is used to calculate the temperature distribution, \(T(\vec{r})\), resulting from the heating. The displacement amplitude for each basis function, \(a_n\), and the total displacement, \(u_z^T(\vec{r},t)\), are then calculated using Eqs. (\ref{equation8}) and (\ref{equation4}). \section{Choice of orthonormal basis functions} Choosing a set of orthonormal functions \(\chi_n(\vec{r})\) that can describe the surface distortion without requiring a large number of functions, which would necessarily include high spatial frequencies, is crucial, as it reduces both the number of auxiliary tractions that must be evaluated and the need for a fine mesh in the FEM. We now describe the choice of basis functions for on-axis and off-axis heating of the optic.
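Before discussing the basis choice, note that once the auxiliary responses \(\varepsilon^A_n\) have been tabulated, the per-epoch step of Eqs. (\ref{equation8}) and (\ref{equation4}) reduces to one weighted dot product per traction. The following minimal sketch illustrates the arithmetic; the node and traction counts follow the example above, but all nodal arrays are random stand-ins for the FEM output, not real solver data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fused silica constants
E = 73.1e9           # Young's modulus, Pa
nu = 0.17            # Poisson's ratio
alpha = 0.55e-6      # thermal expansion coefficient, 1/K
T0 = 293.0           # ambient temperature, K

n_nodes, n_tractions, n_surf = 32000, 20, 500

# Random stand-ins for FEM output (real arrays would come from the solver):
temp = T0 + rng.random(n_nodes)      # nodal temperatures, K
dV = np.full(n_nodes, 1e-9)          # nodal volume weights, m^3
# tr_eps[n, k]: Tr{eps^A_n} at node k for unit-amplitude traction chi_n
tr_eps = 1e-12 * rng.standard_normal((n_tractions, n_nodes))

# Eq. (8), discretized over the mesh: one dot product per traction.
prefactor = -E * alpha / (1.0 - 2.0 * nu)
a_n = prefactor * (tr_eps * dV) @ (temp - T0)

# Eq. (4): modal sum on surface sample points (chi values are stand-ins too).
chi_surf = rng.standard_normal((n_tractions, n_surf))
u_surf = a_n @ chi_surf

print(a_n.shape, u_surf.shape)  # -> (20,) (500,)
```

Because each traction's contribution is an independent dot product, the loop over tractions parallelizes trivially.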
\subsection{Orthonormal basis for on-axis heating \( (x_0 = 0, y_0 = 0)\) } Zernike polynomials (see Appendix A) are often used to describe cylindrically symmetric optical aberrations, as they are orthogonal over a circular disc and can be normalized. However, as shown in Fig. \ref{figure2}, these polynomials are not well suited to describing the distortion. \begin{figure}[htbp] \centerline{\includegraphics[width=.9\columnwidth]{fig2}} \caption{Comparison of the surface distortion calculated using the elastodynamic FEM, \(u_{\textsc{\tiny FEM}}\), the sum of the first six Zernike components, \(u_{\textsc{\tiny Z}}\), and the sum of the first six orthonormalized LG components, \(u_{\textsc{\tiny LG}}\)} \label{figure2} \end{figure} On-axis surface distortion due to the heating can also be described using Laguerre-Gauss (LG) functions: \begin{equation*} LG_p(r) = L_p \Big( \frac{2 r^2}{r_0^2}\Big) \exp \Big[\frac{-r^2}{r_0^2}\Big] \end{equation*} where \(L_p\) are Laguerre polynomials of order \(p:\{0,1,2,...\}\) (see Appendix A), \(r\) is the radial coordinate and \(r_0\) is a free parameter. However, these functions are orthogonal only over the infinite plane. Symmetric orthogonalization \cite{schweinler1970orthogonalization} is therefore used, as outlined in Appendix B, to construct linear combinations, \(\chi_n\), of LG functions that are orthonormal over the end face for a given \(r_0\). In this type of orthogonalization, the difference between the new and original functions is minimized in the least-squares sense \cite{schweinler1970orthogonalization}. The optimum value of \(r_0\) was chosen as described in Appendix C, giving \(r_0 = 1.5w\). The six lowest-order orthonormalized-LG functions are defined in Appendix D. A comparison of \(u_{\textsc{\tiny FEM}}\) and the sum of these components in Fig. \ref{figure2} shows that the LG basis is much superior to the Zernike basis.
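The symmetric-orthogonalization step can be sketched numerically as follows. Here the Gram matrix of the LG functions is built with a simple radial quadrature over the disc, and \(\chi = S^{-1/2}\,LG\) with \(S^{-1/2}\) obtained from the eigendecomposition of the Gram matrix; the radii (\(R\), \(w\), \(r_0 = 1.5w\)) follow the example optic, while the grid resolution and use of numpy's Laguerre class are our own choices.

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

R = 0.170                 # optic radius, m
w = 0.053                 # beam radius, m
r0 = 1.5 * w              # LG scale parameter
N = 6                     # number of LG functions to orthogonalize

r = np.linspace(0.0, R, 2000)       # radial quadrature grid
dr = r[1] - r[0]

def lg(p):
    """LG_p(r) = L_p(2 r^2 / r0^2) exp(-r^2 / r0^2), sampled on the grid."""
    return Laguerre.basis(p)(2.0 * r**2 / r0**2) * np.exp(-(r / r0) ** 2)

basis = np.array([lg(p) for p in range(N)])

# Gram matrix over the disc (axisymmetric functions, measure 2*pi*r dr)
gram = 2.0 * np.pi * (basis * r) @ basis.T * dr

# Symmetric (Schweinler-Wigner/Loewdin) orthogonalization: chi = S^{-1/2} LG,
# the orthonormal set closest to the originals in the least-squares sense.
evals, evecs = np.linalg.eigh(gram)
s_inv_half = evecs @ np.diag(evals**-0.5) @ evecs.T
chi = s_inv_half @ basis

# chi is orthonormal in the same discrete inner product
gram_chi = 2.0 * np.pi * (chi * r) @ chi.T * dr
print(np.allclose(gram_chi, np.eye(N), atol=1e-8))  # True
```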
\subsection{Orthonormal basis for off-axis heating} The distortion due to off-axis heating can be described using the sets of functions listed below: (a) Hermite-Gauss (HG) functions: \begin{equation*} HG_{mn}(x,y) = H_m \Big( \frac{\sqrt{2} x}{r_{0x}}\Big) \exp \Big[\frac{-x^2}{r_{0x}^2}\Big] H_n \Big( \frac{\sqrt{2} y}{r_{0y}}\Big) \exp \Big[\frac{-y^2}{r_{0y}^2}\Big] \end{equation*} where \(H_i\) are the (``physicists''') Hermite polynomials of order \(i:\{0,1,2,...\}\) (see Appendix A). These functions are orthogonal over the interval \( x,y:(-\infty,\infty) \). We choose \(r_{0x} = r_{0y} \equiv r_{0}\), as the heat flux has a circular cross-section and we shall use \(x_0,y_0 \ll R \), and thus \begin{equation*} HG_{mn}(x,y) = H_m \Big( \frac{\sqrt{2} x}{r_{0}}\Big) H_n \Big( \frac{\sqrt{2} y}{r_{0}}\Big) \exp \Big(\frac{-(x^2 + y^2)}{r_{0}^2}\Big) \end{equation*} (b) Generalized LG functions: \begin{equation*} LG^l_p(r) = L_p \Big(\frac{2 r^2}{r_0^2}\Big) \exp \Big[\frac{-r^2}{r_0^2}\Big] \begin{cases} 1 \\ \sin{l \phi} \\ \cos{l \phi} \end{cases} \end{equation*} where \(\phi\) is the azimuthal angle, and \(l:\{1,2,3,...\}\) for \(p>0\). We restricted the azimuthal dependence to \(l = 1\) due to the symmetry of the expected distortion. Orthonormalized HG and generalized-LG functions were constructed, and an optimized value of \(r_0 = 1.4 w\) was selected as discussed above. HG functions up to \( m+n = 15 \) (136 functions in total) were initially used to describe the distortion due to a heating beam displaced from the center of the optic according to \( (x_0,y_0)= \) (0, 10 mm), (10 mm, 0) and (8.7 mm, 5 mm). In each case, the distortion was dominated by the same 17 components, the functions for which are plotted in Appendix E. A comparison of \(u_{\textsc{\tiny FEM}}\) and the sum of these 17 dominant components is shown in Fig. \ref{figure4}.
\begin{figure}[htbp] \centerline{\includegraphics[width=.9\columnwidth]{fig3}} \caption{Comparison of \(u_{\textsc{\tiny FEM}}\) and the sum of the 17 dominant orthonormalized-HG components \(u^T\) for a heat flux offset of 10 mm } \label{figure4} \end{figure} Orthonormalized generalized-LG functions up to \( p = 5 \) (16 functions in total) were also generated and used to describe the distortion due to a heat flux displaced from the center of the optic by 10 mm, but they yielded slightly poorer agreement with \(u_{\textsc{\tiny FEM}}\) . In addition, since the lower order orthonormalized-HG functions appear similar to the TEM$_{01}$ and TEM$_{10}$ eigenmodes observed in optical cavities, we chose to use that basis. \section{Surface distortion calculated using reciprocity} We now show how to use the orthonormal bases described above with reciprocity to determine the surface distortion. In each case, the equilibrium \( \varepsilon^A_{ij}(\vec{r})\) values were calculated for the basis tractions and then combined with the temperature distribution \(T(\vec{r})\) from the thermal FEM to yield the amplitudes \(a_n\). \subsection{On-axis heating: Zernike basis} While Zernike polynomials are not appropriate for describing the surface distortion in the example presented here, they can be used for a reciprocity-based calculation. Table I shows a comparison of the reciprocity Zernike amplitudes with those calculated by decomposing the distortion predicted by the thermoelastostatic FEM. 
\begin{table} \caption{Zernike amplitudes calculated using reciprocity, \(a_n\), and thermoelastostatic FEM, \(a_{n,\textrm{FEM}}\), for the axisymmetric Gaussian heat flux.} \label{table1} \begin{tabular}{ |c | c | c| } \hline Zernike polynomial & \(a_n\) (nm) & \(a_{n,\textrm{FEM}}\) (nm) \\ \hline Z02 & 42.6 & 42.9 \\ \hline Z04 & -15.2 & -15.0 \\ \hline Z06 & 6.3 & 5.9 \\ \hline Z08 & -2.8 & -3.6 \\ \hline \end{tabular} \end{table} \subsection{On-axis heating: orthonormalized-LG basis} The curves \(u_{\textsc{\tiny FEM}}\) and \(u^T = \sum\limits_{n=0}^5 a_n \chi_n \), and the difference between the two, are plotted in Fig. \ref{figure5}. Since we are not interested in the average displacement of the optic, we have set \(u^T = u_{\textsc{\tiny FEM}} \) at $r=0$. The asymmetry of the difference is due to non-ideal cylindrical symmetry in the FEM meshing. \begin{figure}[htbp] \centerline{\includegraphics[width=.9\columnwidth]{fig4a}} \centerline{\includegraphics[width=.9\columnwidth]{fig4b}} \caption{ a) A plot of \(u_{\textsc{\tiny FEM}}\) and \(u^T\) calculated for the on-axis heating using the 6 lowest-order orthonormalized-LG functions. b) A plot of \(u_{\textsc{\tiny FEM}} - u^T\) . } \label{figure5} \end{figure} \subsection{Off-axis heating: orthonormalized-HG basis} The curves \(u_{\textsc{\tiny FEM}}\) and \(u^T = \sum\limits_{n = 1}^{17} a_n \chi_n \), and the difference between the two, are plotted in Fig. \ref{figure6}. \begin{figure}[htbp] \centerline{\includegraphics[width=.9\columnwidth]{fig5a}} \centerline{\includegraphics[width=.9\columnwidth]{fig5b}} \caption{a) A plot of \(u_{\textsc{\tiny FEM}}\) and \(u^T\) calculated for the off-axis heating using the 17 dominant orthonormalized-HG functions.
b) A plot of \(u_{\textsc{\tiny FEM}} - u^T\).} \label{figure6} \end{figure} Figures \ref{figure5} and \ref{figure6} show that even though $<20$ auxiliary tractions were used to characterize the optic and the FEM was restricted to only 32,000 nodes, \begin{itemize} \item elastodynamic reciprocity predicts $u^T$ to within $<1.5\%$ of \(u_{\textsc{\tiny FEM}}\) over the majority of the incident laser beam, and \item displacing the beam by $20\%$ of its radius does not degrade the agreement. \end{itemize} Additionally, increasing the number of auxiliary tractions further improves the agreement, particularly at large radius. \section{Comparison of Computational Times} We compare here the times required to calculate the surface distortion using our hybrid FEM-reciprocity approach and using a conventional thermo-elastic FEM analysis. The times are specific to the example of partial absorption of a Gaussian-intensity-profile light beam by the surface of an isotropic cylindrical optic. In both cases we use 32,000 nodes in the FEM calculations. We have not yet investigated how many nodes or auxiliary tractions are required to achieve a particular accuracy for each approach, or how this might affect the computational times. As discussed earlier, our hybrid FEM-reciprocity approach consists of two parts, the first of which is done only once for an optic: \begin{enumerate} \item \begin{enumerate} \item Calculate the elastic response of the optic to each of the orthonormal tractions, and store these arrays in memory. Here, this consisted of an array of 32,000 six-element entries in which the coordinates and strains at each node were recorded for each traction. This part required approximately 1 hour per traction. \item Uploading the 20 responses into memory in preparation for part 2 required 20 minutes.
\end{enumerate} \item At each epoch of interest: \begin{enumerate} \item Calculate the thermally induced stress at each node using FEM: 90 seconds. \item Evaluate the volume integral for each traction component using Eq. (\ref{equation6}): 3 seconds per traction. Thus, for a serial calculation with 20 tractions, this step required 60 seconds. \end{enumerate} \end{enumerate} A conventional thermo-elastic FEM calculation for this simple problem required about 13 minutes. Thus, once the response of the optic has been determined and uploaded, the hybrid FEM-reciprocity calculation is 5.2 times faster when using a serial calculation of the distortion, and 8.7 times faster when the reciprocity calculation is parallelized. \section{Conclusion} We have shown how Betti-Maxwell reciprocity can be used in combination with thermal finite-element modeling to calculate the thermoelastic distortion of a linear elastic system. As an example, we described in detail its application to calculating the distortion of the end face of an isotropic cylindrical glass optic heated by an off-axis Gaussian laser beam. Despite using fewer than 20 auxiliary eigenfunction tractions to characterize the optic, the distortion calculated using reciprocity agrees to within $<1.5\%$ with that calculated using a full thermoelastic FEM over the majority of the incident beam. The computational time required for the reciprocity approach was a factor of 5--8 less than that for the full FEM once the optic had been characterized. The advantage of this approach will thus be most evident in cases where the elastic distortion must be calculated frequently, such as in feed-forward control of systems with long thermal time constants. Parallelization of the reciprocity calculation would also allow further improvements in accuracy by employing additional tractions at negligible additional computational cost.
Our reciprocity approach can be applied to systems with arbitrarily distributed heat fluxes and to asymmetric, anisotropic elastic bodies. Furthermore, while our example assumed a free optic, other boundary conditions could easily be incorporated into the analysis with an appropriate set of auxiliary eigenfunctions. \section{Acknowledgements} This research was supported by the Australian Research Council (ARC). Yuri Levin is supported by an ARC Future Fellowship.
\section{Introduction}\label{section:intro} In 1962, Fox included the following question in his list of problems in knot theory~\cite{fox:problems}. \begin{question} Which slice knots and weakly slice links can appear as the cross-sections of the unknotted $S^2$ in $S^4$? \end{question} \noindent Such a knot is called \emph{doubly slice}. Many of the techniques that have been successful in the study of slice knots and knot concordance over the last 50 years have applications to the study of doubly slice knots and double null concordance of knots. Nevertheless, doubly slice knots remain far less understood than their slice counterparts. The goal of this note is to address Fox's question for prime knots with 12 or fewer crossings. A precedent for this work was set in 1971, when Sumners showed that for prime knots with nine or fewer crossings, there is only one prime doubly slice knot, namely, the knot $9_{46}$~\cite{sumners:invertible}. There are 158 known prime slice knots with 12 or fewer crossings, and it is unknown whether the knot $11_{n34}$ is slice. Of these 159 knots, we show that at least 20, but no more than 24, are smoothly doubly slice. \begin{theorem}\label{thm:Smooth} The following knots are smoothly doubly slice. $$ \begin{array}{lllllll} 9_{46} & 10_{99} & 10_{123} & 10_{155} & 11_{n42} & 11_{n49} & 11_{n74} \\ 12_{a0427} & 12_{a1105} & 12_{n0268} & 12_{n0309} & 12_{n0313} & 12_{n0397} & 12_{n0414} \\ 12_{n0430} & 12_{n0605} & 12_{n0636} & 12_{n0706} & 12_{n0817} & 12_{n0838} \end{array} $$ Furthermore, the following are the only other prime knots with 12 or fewer crossings that could possibly be smoothly doubly slice. $$ \begin{array}{lllllll} 11_{n34} & 11_{n73} & 12_{a1019} & 12_{a1202} \end{array} $$ \end{theorem} Our contributions to this computation include the following. \begin{itemize} \item The first application of twisted Alexander polynomials to obstruct double sliceness.
\item The first low-crossing examples of slice knots with non-vanishing signature function. \item Explicit constructions of unknotted embeddings of $S^2$ into $S^4$ with equatorial cross-section isotopic to each of the 20 knots on the list. \end{itemize} As mentioned above, $9_{46}$ was first double sliced by Sumners~\cite{sumners:invertible}, while $11_{n42}$ was shown to be doubly slice in~\cite{carter-kamada-saito}. The double slicing of $10_{123}$ included below was shown to the second author by Donald, who has contributed to the study of doubly slice knots by studying the problem of embedding 3--manifolds into $S^4$~\cite{donald:embedding}. We show below that the Conway knot $11_{n34}$ is topologically doubly slice (see Section~\ref{section:topological}), but it is unknown whether it can be smoothly sliced or doubly sliced. \subsection{A brief history of doubly slice knots}\ The study of slice knots is naturally placed in the context of the concordance group $\mathcal{C}$ and the homomorphism $\phi\colon \mathcal{C} \to \mathcal{G}$, where $\mathcal{G}$ is the algebraic concordance group, defined and classified by Levine~\cite{levine:invariants, levine:groups}. There are analogous groups $\mathcal{C}_{ds}$ and $\mathcal{G}_{ds}$ defined in the context of doubly slice knots; however, Levine's classification of $\mathcal{G}$ does not carry over to $\mathcal{G}_{ds}$, and there are other complications that make $\mathcal{C}_{ds}$ and $\mathcal{G}_{ds}$ difficult to study. It is known that the kernel of the canonical map $\mathcal{G}_{ds} \to \mathcal{G}$ is infinitely generated~\cite{cha-liv:signature}, but beyond that, the structure of $\mathcal{G}_{ds}$ remains a mystery. (See, however,~\cite{stoltzfus-bayer-fluckiger, stoltzfus:unraveling,stoltzfus:double}.) Furthermore, it can be shown using Casson-Gordon invariants that there are algebraically doubly slice knots that are not topologically doubly slice~\cite{gilmer-livingston:embedding}.
Friedl developed further metabelian invariants that can be used to obstruct double sliceness~\cite{friedl:eta}. As in the study of slice knots, there is an important distinction between the smooth and topologically locally flat categories. However, this distinction does not feature prominently in our work here; we find no low-crossing examples of knots that are topologically doubly slice but not smoothly doubly slice, even though such knots have been shown to exist~\cite{meier:double}. Other interesting constructions in the study of doubly slice knots include the fibered examples of Aitchison-Silver~\cite{ait-silver} and the extension of the Cochran-Teichner-Orr filtration to topologically doubly slice knots by Kim~\cite{kim:new}. \subsection{Organization}\ In Sections~\ref{section:algebraic} and~\ref{section:topological}, we discuss obstructions to double slicing knots coming from the algebraic and topological categories, respectively. In Section~\ref{section:slicing}, we discuss some techniques that can be used to construct double slicings of knots in either the topological or smooth categories. In Section~\ref{sec:genus}, we place the study of doubly slice knots in context by considering knots as cross-sections of unknotted surfaces in $S^4$. \section{Algebraic obstructions to double slicing knots}\label{section:algebraic} In this section, we will present three algebraic obstructions to double slicing a knot. These are applied to obtain an initial list of prime knots with at most twelve crossings that could potentially be doubly slice. \subsection{Hyperbolic torsion coefficients}\ A knot $K$ in $S^3$ is said to be \emph{algebraically doubly slice} if there exists a Seifert matrix $A_K$ for $K$ that has the form $$A_K = \begin{bmatrix} 0 & B_1 \\ B_2 & 0 \end{bmatrix},$$ where $B_1$ and $B_2$ are square matrices of equal dimension. 
Matrices of this form are called \emph{hyperbolic} and have been studied by Levine~\cite{levine:hyperbolic} and others~\cite{cha-liv:signature,stoltzfus:double}. If $K$ is (smoothly or topologically) doubly slice, then $K$ is algebraically doubly slice~\cite{sumners:invertible}. Let $A_K$ be a hyperbolic Seifert matrix for $K$. Then, $$A_K+A_K^T = \begin{bmatrix} 0 & B \\ B^T & 0 \end{bmatrix},$$ where $B=B_1+B_2^T$. The matrix $B \oplus B$ is a presentation matrix for $H_1(\Sigma_2(K))$. It follows that $H_1(\Sigma_2(K))$ splits as a direct sum $G \oplus G$, where $G$ is presented by the matrix $B$. Thus, we have our first obstruction. \begin{proposition}\label{prop:hom} Let $K$ be a knot in $S^3$. If $K$ is algebraically doubly slice, then, for some finite group $G$, $H_1(\Sigma_2(K))=G\oplus G$. \end{proposition} Of the 2,977 prime knots with at most 12 crossings, 62 knots satisfy Proposition~\ref{prop:hom}. Furthermore, if $K$ is algebraically doubly slice, then $K$ is algebraically slice. Among these 62 knots, there are 36 that are algebraically slice. These knots form our short-list of candidates to be algebraically doubly slice and are shown below. $$ \begin{array}{lllllllll} 9_{41} & 9_{46} & 10_{99} & 10_{123} & 10_{153} & 10_{155} & 11_{n34} & 11_{n42} \\ 11_{n49} & 11_{n73} & 11_{n74} & 11_{n116} & 12_{a0427} & 12_{a1019} & 12_{a1105} & 12_{a1202} \\ 12_{n0019} & 12_{n0210} & 12_{n0214} & 12_{n0257} & 12_{n0268} & 12_{n0309} & 12_{n0313} & 12_{n0318} \\ 12_{n0397} & 12_{n0414} & 12_{n0430} & 12_{n0440} & 12_{n0582} & 12_{n0605} & 12_{n0636} & 12_{n0706} \\ 12_{n0813} & 12_{n0817} & 12_{n0838} & 12_{n0876} \end{array} $$ \subsection{The signature function}\label{subsec:signature}\ Let $K$ be a knot in $S^3$ with Seifert matrix $A_K$. Let $\omega$ be a unit complex number, and consider the matrix $$(1-\omega)A_K+(1-\overline\omega)A_K^T.$$ Denote by $\sigma_\omega(K)$ the signature of this matrix.
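Since this matrix is Hermitian, the signature can be read off its eigenvalues, which makes the invariant easy to compute numerically. A brief sketch, using a hyperbolic genus-one Seifert matrix whose Alexander polynomial matches that of the doubly slice knot $9_{46}$, and a standard trefoil Seifert matrix for contrast (both matrix choices are ours and should be checked against one's preferred tables):

```python
import numpy as np

def tl_signature(V, omega):
    """Signature of the Hermitian matrix (1-omega) V + (1-conj(omega)) V^T."""
    M = (1 - omega) * V + (1 - np.conj(omega)) * V.T
    evals = np.linalg.eigvalsh(M)
    return int(np.sum(evals > 1e-9) - np.sum(evals < -1e-9))

# Hyperbolic Seifert matrix with det(V - tV^T) giving 2 - 5t + 2t^2 (as for 9_46)
V_946 = np.array([[0.0, 2.0], [1.0, 0.0]])
# Seifert matrix for the trefoil, which is not slice
V_31 = np.array([[-1.0, 1.0], [0.0, -1.0]])

omegas = np.exp(1j * np.linspace(0.3, np.pi, 25))   # sample the unit circle
sig_946 = [tl_signature(V_946, w) for w in omegas]
sig_31 = [tl_signature(V_31, w) for w in omegas]

print(set(sig_946))   # {0}: consistent with a (doubly) slice knot
print(set(sig_31))    # contains -2: the trefoil signature obstruction
```

The hyperbolic block form forces the signature to vanish at every sampled $\omega$, while the trefoil's signature jumps to $-2$ once $\omega$ passes the root of its Alexander polynomial.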
Note that this matrix will be non-singular provided that $\Delta_K(\omega)\not=0$, where $\Delta_K(t)$ is the Alexander polynomial of $K$. In any event, $\sigma_\omega(K)$ is a well-defined knot invariant for any unit complex number $\omega$. See~\cite{gordon:knot_theory} for details. It is well-known that $|\sigma_\omega(K)|\leq 2g_4(K)$ whenever $\Delta_K(\omega)\not=0$. Thus, if $K$ is algebraically slice, then $\sigma_\omega(K)=0$ away from the roots of the Alexander polynomial. Moreover, we have the following. \begin{proposition}\label{prop:sign} Let $K$ be a knot in $S^3$. If $K$ is algebraically doubly slice, then $\sigma_\omega(K)=0$ for any unit complex number $\omega$. \end{proposition} In fact, we can consider these signature invariants as a function $\sigma(K):S^1\to\mathbb{Z}$, defined by $\sigma(K)(\omega)=\sigma_\omega(K)$, called the \emph{signature function}. If a knot $K$ satisfies Proposition~\ref{prop:sign}, we say that the signature function for $K$ \emph{vanishes}. \begin{example} Let $K=12_{n0582}$. Then, $\Delta_K(t) = (t^2-t+1)^2$, and the roots of $\Delta_K(t)$ lie on the unit circle. Since $K$ is slice, we know that $\sigma_\omega(K)=0$ away from these roots. However, if we consider the roots, $\zeta$ and $\overline\zeta$, where $\zeta$ is a primitive sixth root of unity, we can compute that $\sigma_{\zeta}(K)=\sigma_{\overline\zeta}(K)=-1$. (Note that this calculation depends on a Seifert matrix $A_K$, but any choice will do and we do not include the details here.) It follows from Proposition~\ref{prop:sign} that $K$ cannot be algebraically doubly slice. \end{example} \begin{example} Let $K=12_{n0813}$. Then, $\Delta_K(t) = (t-2)(2t-1)(t^2-t+1)^2$. Two of the roots of $\Delta_K(t)$ are primitive sixth roots of unity; the other two roots do not lie on the unit circle, so no information can be gained by considering them. If we consider the roots of unity, we find that $\sigma_{\zeta}(K)=\sigma_{\overline\zeta}(K)=+1$.
(Again, we have used some matrix $A_K$ for this calculation.) It follows from Proposition~\ref{prop:sign} that $K$ cannot be algebraically doubly slice. \end{example} Thus, we remove $12_{n0582}$ and $12_{n0813}$ from our list of potentially algebraically doubly slice knots. \subsection{The Alexander module}\label{subsec:module}\ Continuing, let $K$ be a knot in $S^3$ and let $X_\infty(K)$ denote the infinite cyclic cover of $S^3\setminus K$. The group $H_1(X_\infty(K))$ can be regarded as a $\Lambda$--module, where $\Lambda = \mathbb{Z}[t,t^{-1}]$. This $\Lambda$--module is called the \emph{Alexander module} and is presented by the matrix $V_K=A_K-tA_K^T$. Sumners obstructed $9_{41}$ from being doubly slice by carefully analyzing the module structure of $H_1(X_\infty(K))$. We follow a similar approach to analyze two more knots. We begin by switching to coefficients in the finite field with $p$ elements, ${\mathbb{Z}_p}$. In this case, $ H_1(X_\infty(K),{\mathbb{Z}_p} )$ is a module over a PID, $\Lambda_p = {\mathbb{Z}_p}[t,t^{-1}]$. We now have that if $K$ is doubly slice, then as a $\Lambda_p$--module, $$H_1(X_\infty(K),{\mathbb{Z}_p} )\cong \bigoplus_i \left( \Lambda_p / \left<f_i(t) \right> \oplus \Lambda_p / \left<f_i(t^{-1}) \right>\right) $$ for some set of polynomials $f_i(t) \in \Lambda_p$. \begin{example} Let $K=11_{n116}$, which has $\Delta_K(t) = (1+t-t^2)(-1+t+t^2)$. Using the Seifert form $V_K$ taken from KnotInfo~\cite{cha-liv:knotinfo} and working with ${\mathbb{Z}_2} $--coefficients, we find that as a $\Lambda_2$--module, $$ H_1(X_\infty(K),{\mathbb{Z}_2} )\cong \Lambda_2 / \left<( 1+t+t^2)^2 \right> .$$ This does not decompose as a nontrivial direct sum of modules, so $K=11_{n116}$ cannot be doubly slice. \end{example} \begin{example} Let $K=12_{n0876}$, which has $\Delta_K(t) = (-2+4t-2t^2+t^3)(-1+2t-4t^2+2t^3)$. 
Again using the Seifert form $V_K$ taken from KnotInfo, but now working with ${\mathbb{Z}_3}$--coefficients, we compute that as a $\Lambda_3$--module, $$ H_1(X_\infty(K),{\mathbb{Z}_3} )\cong \Lambda_3/ \left<( 1+t )^2 \right> \oplus \Lambda_3/ \left<( 1 + t^2 )^2 \right> .$$ This does not decompose further, so $12_{n0876}$ cannot be doubly slice. \end{example} \subsection{Algebraic conclusions}\ In conclusion, consideration of the torsion invariants reduced our search for doubly slice knots to a set of $36$ knots. An analysis of the signature function removed another two, and an examination of Alexander modules eliminated three more, including the one found by Sumners. Of the remaining 31 knots, we will use the techniques described in Section~\ref{section:slicing} to show that one is topologically doubly slice and 20 are smoothly doubly slice. It follows that these 21 knots are algebraically doubly slice, leaving us with only 10 knots that may or may not be algebraically doubly slice. \begin{question} Are any of the following knots algebraically doubly slice? $$ \begin{array}{lllllllll} 10_{153} & 11_{n73} & 12_{a1019} & 12_{a1202} & 12_{n0019} \\ 12_{n0210} & 12_{n0214} & 12_{n0257} & 12_{n0318} & 12_{n0440} \\ \end{array} $$\vskip.2in \end{question} \section{Topological obstructions to double slicing knots}\label{section:topological} We now move from abelian to metabelian invariants. We begin by quickly recalling the twisted polynomial. Let $M_q(K)$ be the $q$--fold cyclic cover of $S^3\setminus K$, let $\Sigma_q(K)$ be the branched cyclic cover, and let $\rho \colon H_1(\Sigma_q(K)) \to \mathbb{Z}_p$ be a homomorphism, where $q$ is a prime power and $p$ is an odd prime. Let $\Gamma_p=\mathbb{Q}(\zeta_p)[t,t^{-1}]$, where $\zeta_p$ is a primitive $p^\text{th}$--root of unity. As described in~\cite{kirk-liv:twisted}, there is an associated {\it twisted Alexander polynomial} $\Delta_{K,\rho}(t) \in \Gamma_p$.
This polynomial is well-defined up to multiplication by a unit in $\Gamma_p$. Given $f(t)\in\Gamma_p$, let $\overline{f(t)}$ denote the result of complex conjugation of the coefficients of $f(t)$. A result of~\cite{kirk-liv:twisted} states the following. \begin{theorem}\label{thm:kl}If $K$ is slice, then there is a subgroup $H \subset H_1(\Sigma_q(K))$ satisfying the following properties. \begin{enumerate} \item $|H|^2 = |H_1(\Sigma_q(K))|.$\vskip.05in \item The subgroup $H$ is invariant under the action of the deck transformation of $\Sigma_q(K)$. \vskip.05in \item For all $\rho \colon H_1(\Sigma_q(K)) \to \mathbb{Z}_p$ satisfying $\rho(H) = 0$, one has $\Delta_{K,\rho}(t) = f(t)\overline{f(t^{-1})}$. \end{enumerate} \end{theorem} If $K$ is doubly slice, then it satisfies strengthened conditions. \begin{theorem}\label{thm:kl-double} If $K$ is doubly slice, then there is a splitting $ H_1(\Sigma_q(K)) \cong H_1 \oplus H_2$ satisfying the following properties. \begin{enumerate} \item $H_1 \cong H_2$.\vskip.05in \item The subgroups $H_1$ and $H_2$ are invariant under the action of the deck transformation of $\Sigma_q(K)$. \vskip.05in \item For all $\rho \colon H_1(\Sigma_q(K)) \to \mathbb{Z}_p$ for which $\rho(H_1) = 0$ or $\rho(H_2) = 0$, one has that $\Delta_{K,\rho}(t) = f(t)\overline{f(t^{-1})}$. \end{enumerate} \end{theorem} \begin{proof}The proof is very similar to that of Theorem~\ref{thm:kl} in~\cite{kirk-liv:twisted}, so we just summarize it here. In Theorem~\ref{thm:kl}, the subgroup $H$ can be taken as the kernel of the inclusion-induced map $H_1(\Sigma_q(K)) \to H_1(\overline{W}_q(D))$, where $\overline{W}_q(D)$ is the $q$--fold branched cover of $B^4$ over a slice disk $D$ of $K$. In the case that $K$ is doubly slice, the $q$--fold branched cover $\Sigma_q(K)$ embeds in $S^4$, since $S^4$ is the $q$--fold branched cover of $S^4$ over the (unknotted) double slicing 2--sphere for $K$. It follows that $\Sigma_q(K)$ splits $S^4$ into manifolds $Y_1$ and $Y_2$.
The subgroups $H_1$ and $H_2$ can be taken as the kernels of the inclusion-induced maps $H_1(\Sigma_q(K)) \to H_1(Y_1)$ and $H_1(\Sigma_q(K)) \to H_1(Y_2)$. The direct sum decomposition arises from the Mayer--Vietoris theorem; the fact that $H_1 \cong H_2$ follows from duality, as first noticed by Hantzsche~\cite{hantzche}. The rest of the argument follows identically to that in~\cite{kirk-liv:twisted}. \end{proof} Equipped with Theorem~\ref{thm:kl-double}, we are ready to prove our second result. \begin{theorem}\label{thm:NotTop} The following knots are not topologically doubly slice, but might be algebraically doubly slice. $$ \begin{array}{lllllll} 10_{153} & 12_{n0019} &12_{n0210}& 12_{n0214} \\ 12_{n0257} & 12_{n0318} & 12_{n0440} \end{array} $$ \end{theorem} \begin{proof} The proof is nearly identical in each case, so we describe only one case in detail. Let $K=10_{153}$. Then $H_1(\Sigma_3(K))\cong(\mathbb{Z}_7)^2$, and the action of the deck transformation splits the homology as $E_2\oplus E_4$. Here $E_2$ is the $2$--eigenspace of the action of the deck transformation on $ H_1(\Sigma_3(K))$ and $E_4$ is the $4$--eigenspace. Notice that $2^3 \equiv 4^3 \equiv 1 \pmod 7$. Let $\rho_2:(\mathbb{Z}_7)^2\to E_2$ denote projection onto $E_2$, so $\rho_2\vert_{E_4}\equiv 0$, and let $\Delta_{K,\rho_2}(t)$ denote the associated twisted Alexander polynomial. Then, we have $$ \Delta_{K,\rho_2}(t)=(-t^2+\omega t+1)(-t^2+\omega t+1),$$ where $\omega=\zeta^4+\zeta^2+\zeta+1$ for a $7^\text{th}$--root of unity $\zeta$. One easily checks that $\omega=\overline\omega$, so $\Delta_{K,\rho_2}(t) = f(t)\overline{f(t)}$. On the other hand, if one considers the other projection $\rho_4:H_1(\Sigma_3(K))\to E_4$, so that $\rho_4\vert_{E_2}\equiv 0$, one finds that the associated twisted polynomial is given by $$\Delta_{K,\rho_4}(t)=t^4+3t^2+1.$$ The following lemma states that $t^4+3t^2+1$ is irreducible in $\Gamma_7$.
It follows from Theorem~\ref{thm:kl-double} that $K$ cannot be topologically doubly slice, since the twisted polynomials associated to this metabolizing representation do not factor as norms. \begin{lemma} The polynomial $p(t)=t^4+3t^2+1$ is irreducible in $\Gamma_7$. \end{lemma} \begin{proof} If $\alpha \in \mathbb{Q}(\zeta_7)$ is a root of $p(t)$, then so is $\alpha^{-1}$. Thus, if $p(t)$ has a linear factor, it has two distinct linear factors, and hence it has a quadratic factor. So, suppose that $p(t)$ factors into two quadratic polynomials. One can assume the factorization is of the form $$p(t) = (t^2 + at + b)(t^2 + a't +b').$$ By examining coefficients, the factorization further simplifies to be of the form $$p(t) = (t^2 + at + b)(t^2- at +b),$$ where $b= \pm 1$ and $a^2 = 2b-3$. If $b = 1$, then $a^2 = -1$. If $b = -1$, then $a^2 = -5$. Thus, the proof is completed by showing that ${\mathbb{Q}(\zeta_7)}$ contains neither $\sqrt{-1} $ nor $\sqrt{-5}$. The Galois group of ${\mathbb{Q}(\zeta_p)}$ is cyclic, isomorphic to $ {\mathbb{Z}}_{p-1}$, and thus contains a unique index two subgroup. It follows that ${\mathbb{Q}(\zeta_p)}$ contains a unique quadratic extension of $ {\mathbb{Q}}$. A standard result in number theory (see~\cite{marcus}) states that this field is ${\mathbb{Q}}(\sqrt{ p})$ or ${\mathbb{Q}}(\sqrt{ -p})$, depending on whether $p$ is congruent to 1 or 3 modulo 4, respectively. This quickly yields the desired contradiction; for instance, it is clear that $ {\mathbb{Q}}(\sqrt{ -5}) \not \subseteq {\mathbb{Q}}(\sqrt{ -7})$. \end{proof} It follows from Theorem~\ref{thm:kl-double} that $K$ cannot be topologically doubly slice, since the twisted polynomial associated to this metabolizing representation does not factor as a norm. The general proof of Theorem~\ref{thm:NotTop} proceeds by checking that each of the relevant twisted Alexander polynomials does not factor as a norm.
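The coefficient bookkeeping in the lemma's proof can be spot-checked numerically: with $b^2=1$ and $a^2=2b-3$, the symmetric quadratic product reproduces $p(t)$ exactly. The snippet below (ours) verifies only this algebraic step, not the field-theoretic conclusion that neither branch lies in $\mathbb{Q}(\zeta_7)$.

```python
def p(t):
    """The twisted polynomial from the lemma: t^4 + 3 t^2 + 1."""
    return t**4 + 3 * t**2 + 1

def symmetric_product(t, a, b):
    """(t^2 + a t + b)(t^2 - a t + b), the shape forced on any quadratic
    factorization of p by matching the t^3 and t coefficients."""
    return (t**2 + a * t + b) * (t**2 - a * t + b)

# Matching the remaining coefficients forces b^2 = 1 and a^2 = 2b - 3,
# i.e. (b, a^2) = (1, -1) or (-1, -5).  Spot-check both branches at a few
# sample points; a^2 = -1 and a^2 = -5 are realized by a = i and a = i*sqrt(5).
for b, a in [(1, 1j), (-1, 1j * 5 ** 0.5)]:
    for t in (0.3, -1.7, 2.4 + 0.9j):
        assert abs(symmetric_product(t, a, b) - p(t)) < 1e-9
```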
The pertinent information needed to verify the result for the other knots is described in Table~\ref{table:Twisted}. The Maple program developed in conjunction with~\cite{herald-kirk-liv:twisted} was used to find the twisted polynomials and Maple could also be used to check the factoring conditions. The knot $12_{n0210}$ was shown not to be topologically slice in~\cite{herald-kirk-liv:twisted} using twisted polynomials, and hence it is not topologically doubly slice. \end{proof} \begin{table}[h!]\label{table:Twisted} \renewcommand{\arraystretch}{2.5} \tiny \begin{tabular}{|c|c|c|} \hline \textbf{Knot} & \textbf{Cover Homology} & \textbf{Irreducible Twisted Polynomial} \\ \hline $10_{153}$ & $H_1(\Sigma_3(K))\cong(\mathbb{Z}_7)^2$ & $\Delta_{K,\rho_4}(t)=t^4+3t^2+1$ \\ \hline $12_{n0019}$ & $H_1(\Sigma_3(K))\cong(\mathbb{Z}_{13})^2$ & $\Delta_{K,\rho_9}(t) = t^4 +2t^2+1 $ \\ & $\zeta^{13}=1$ & $+t^3\left(\zeta^{11}+\zeta^9+\zeta^8+\zeta^7+2\zeta^6+2\zeta^5+\zeta^3+2\zeta^2+\zeta+1\right)$ \\ && $+t\left(\zeta^{11}-\zeta^9+\zeta^8+\zeta^7-\zeta^3-\zeta\right)$ \\ \hline $12_{n0214}$ & $H_1(\Sigma_3(K))\cong(\mathbb{Z}_7)^2$ & $\Delta_{K,\rho_2}(t)= -29 t^4 +\left(31+ 8 \zeta + 8 \zeta^2 + 8 \zeta^4\right) $ \\ & $\zeta^7=1$ & $+ t^3 \left(-27 + 37 \zeta + 37 \zeta^2 + 37 \zeta^4\right)$ \\ && $ + t \left(48 + 47 \zeta + 47 \zeta^2 + 47 \zeta^4\right) $ \\ && $+ t^2 \left(17 + 68 \zeta + 68 \zeta^2 + 68 \zeta^4\right)$ \\ \hline $12_{n0257}$ & $H_1(\Sigma_3(K))\cong(\mathbb{Z}_{13})^2$ & $\Delta_{K,\rho_9}(t)=- 13 t^4 +13 $\\ & $\zeta^{13}=1$ & $+ t^3 \left(37 + 48 \zeta + 21 \zeta^2 + 48 \zeta^3 + 21 \zeta^5 + 21 \zeta^6 + 14 \zeta^7 + 14 \zeta^8 + 48 \zeta^9 + 14 \zeta^{11}\right)$ \\ && $+ t^2 \left(39 + 78 \zeta + 13 \zeta^2 + 78 \zeta^3 + 13 \zeta^5 + 13 \zeta^6 + 65 \zeta^7 + 65 \zeta^8 + 78 \zeta^9 + 65 \zeta^{11}\right)$ \\ && $ +t \left(11 + 48 \zeta + 34 \zeta^2 + 48 \zeta^3 + 34 \zeta^5 + 34 \zeta^6 + 27 \zeta^7 + 27 \zeta^8 + 48 \zeta^9 + 27 \zeta^{11}\right) $ \\ \hline
$12_{n0318}$ & $H_1(\Sigma_3(K))\cong(\mathbb{Z}_7)^2$ & $\Delta_{K,\rho_2}(t) = 1 + 3 t^2 + t^4$ \\ & $\zeta^7=1$ & $ + t \left(3 - \zeta - \zeta^2 - \zeta^4\right) $ \\ && $+ t^3 \left(4 + \zeta + \zeta^2 + \zeta^4\right)$ \\ \hline $12_{n0440}$ & $H_1(\Sigma_3(K))$ & $\Delta_{K,\rho_2}(t) = t^4-3t^3+6t^2-3t+1$ \\ & $\cong(\mathbb{Z}_2)^4\oplus(\mathbb{Z}_7)^2$ & \\ \hline \end{tabular}\vskip.1in \caption{Twisted Alexander polynomial calculations.} \end{table} \section{Double slicing knots}\label{section:slicing} In this section, we discuss some techniques that can be used to show that a knot is doubly slice. We will address the issue of double sliceness in both the smooth and the locally flat settings. \subsection{Band systems}\label{subsec:bands}\ In~\cite{donald:embedding}, Donald showed that if a knot can be sliced by two different sequences of band moves, and if the bands are related in a certain way, then combining the two ribbon disks yields an unknotted 2--sphere. In this section we present a concise treatment of this result. Let $L$ be a link in $S^3$ and let $b$ be the image of a 2--disk embedded in $S^3$ such that $L \cap b$ consists of two disjoint arcs in $\partial b$. We refer to such a $b$ as a {\it band} and denote by $L*b$ the link formed as the closure of $(L \cup \partial b) \setminus (L\cap b)$. Notice that $(L*b)*b = L$; also, if $b$ and $c$ are disjoint, then $(L*b)*c = (L*c)*b$, so we can write both as $L*b*c$. Recall that the \emph{band move} $L \to L*b$ yields a cobordism from $L$ to $L*b$ in $S^3 \times [0,1]$. A sequence of $n$ such cobordisms from a knot $K$ to the unlink of $n + 1$ components yields a ribbon disk in $B^4$ formed as the union of the cobordism and disjoint disks bounded by the unlink. Two such sequences yield an embedded sphere formed as the union of the ribbon disks in $S^4 = B^4 \cup B^4$. If the sequences arise from single bands $b$ and $c$, we denote the resulting knotted 2--sphere by $(K, b, c)$.
We have the following reinterpretations of two special cases of Donald's double slicing criterion~\cite{donald:embedding}. \begin{theorem}\label{thm:don1} If $K$ is a knot and $b$ and $c$ are disjoint bands for which $K*b $ is an unlink, $K*c$ is an unlink, and $K*b*c$ is an unknot, then $ (K, b , c )$ is unknotted. \end{theorem} \begin{proof} Write $U_2 = K*b$ and $U_2' = K*c$. Both are unlinks. Write $U_1= K*b*c$, which is an unknot. The surface $(K,b,c)$ corresponds to the sequence $$U_2 \to U_2*b = K \to K*c = U_2'.$$ Changing the order of the bands, this can be rewritten as $$U_2 \to U_2 *c \to U_2 *c *b.$$ Since $U_2 = K*b$, we can express this as $$U_2 \to K*b *c \to K*b *c *b.$$ Using the facts that $K*b*c = U_1$ and $K*b*c*b = K*b*b*c = K*c$, we finally rewrite the sequence as $U_2 \to U_1 \to U_2'$. According to Scharlemann~\cite{scharlemann}, a ribbon disk for the unknot with two minima is trivial. Thus, $(K,b,c)$ is the union of two trivial disks; hence it is the unknot. \end{proof} By iterating this approach, one can easily prove results such as the following. \begin{theorem}\label{thm:don2} Let $K$ be a knot with disjoint bands $a$, $b$, $c$, and $d$ and suppose that $K*a*b$ and $K*c*d$ are three component unlinks. In this case, there is an associated knotted sphere, $K(a,b,c,d)$. If $K*a*b*c$ and $K*a*c*d$ are unlinks of two components and $K*a*b*c*d$ is an unknot, then $K(a,b,c,d)$ is unknotted. \end{theorem} Note that Scharlemann's theorem is used above to show that certain slicing disks for the unknot are trivial. In each of the examples we consider, one can quickly show that the relevant slice disks for the unknot are trivial by observing that they are built using trivial band sums of the unlink; in particular, for our examples, one need not use the depth of Scharlemann's theorem.
\subsection{Superslice knots}\label{subsec:superslice}\ A knot $K$ is called \emph{superslice} if there is a slice disk $D$ for $K$ such that the double of $D$ along $K$ is an unknotted 2--sphere in $S^4$. Suppose that $K$ is obtained by attaching a band $\upsilon$ to an unlink of two components. See Figure~\ref{fig:SuperRibbon12} for the pertinent three examples. Let $D_1$ and $D_2$ denote the standard pair of disks bounded by the two-component unlink. In this case, the union $D=D_1\cup\upsilon\cup D_2$ is an obvious ribbon disk for $K$. This disk is immersed in $S^3$ with ribbon singularities, but if we push the interiors of $D_1$ and $D_2$ into $B^4$, we obtain an embedded disk, still called $D$, with two minima and one saddle with respect to the standard radial Morse function. We can assume that $D$ is properly embedded by pushing the entire interior into $B^4$, but pushing the interiors of $D_1$ and $D_2$ in farther. Let $\mathcal{K}$ be the 2--knot obtained by doubling the disk $D$. That is, glue two copies of $(B^4,D)$ together along their common $(S^3, K)$ boundary (via the identity map) to get $(S^4, \mathcal{K})$. By construction, we see that $\mathcal{K}$ is formed by taking two unknotted 2--spheres $S_1$ and $S_2$ in $S^4$ and attaching a tube $\Upsilon$ that connects them. Here, $S_i$ is the double of $D_i$ and $\Upsilon$ is the double of $\upsilon$. \begin{figure}[h!] \centering \includegraphics[scale = .4]{Tubes.pdf} \caption{A local picture of a 2--knot isotopy that passes one tube through another.} \label{fig:TubePass} \end{figure} Suppose that locally, we see two pieces of $\Upsilon$ as in Figure~\ref{fig:TubePass}; there is an isotopy that passes these two pieces through each other, as shown. This isotopy corresponds to passing pieces of $\upsilon$ past each other. This changes the isotopy class of the band $\upsilon$, giving a new band $\upsilon'$ and a new ribbon knot $K'$, which is obtained by attaching $\upsilon'$ to the original unlink. 
Because this change resulted from an isotopy of $\mathcal{K}$, we see that both $K$ and $K'$ are cross-sections of $\mathcal{K}$. If $K'$ is unknotted, then $\mathcal{K}$ is unknotted, as in the proof of Theorem~\ref{thm:don1}, and we can conclude that $K$ is doubly slice. We summarize this with the following criterion. \begin{proposition}\label{prop:super} Let $K$ be a knot that is obtained by attaching a single band $\upsilon$ to an unlink of two components. Let $K'$ be the result of passing the band $\upsilon$ through itself as discussed above. If $K'$ is the unknot, then $K$ is smoothly superslice. In particular, if the band is relatively homotopic to a trivial band in the complement of a neighborhood of the unlink, then $K$ is smoothly superslice. \end{proposition} Figure~\ref{fig:SuperRibbon12} shows three examples of ribbon knots that satisfy the above criterion and can therefore be seen to be smoothly superslice. \begin{corollary} The isotopy class of $\mathcal{K}$ depends only on the homotopy class of the core of $\upsilon$. \end{corollary} \begin{figure}[h!] \centering \includegraphics[scale = .5]{SuperRibbon12.pdf} \caption{The above knots are smoothly superslice. See Subsection \ref{subsec:superslice}.} \label{fig:SuperRibbon12} \end{figure} \subsection{Freedman and the locally flat setting}\label{subsec:freedman}\ Let $K$ be a knot in $S^3$, and let $\Delta_K(t) $ denote the Alexander polynomial of $K$. It is a well-known consequence of the work of Freedman and Quinn that any knot $K$ with $\Delta_K(t) =1$ bounds a topologically locally flat disk in $B^4$~\cite{freedman:concordance, freedman-quinn}. In fact, a stronger, though less well-known, statement is true. (See~\cite{meier:double} for more detail.) \begin{theorem}\label{thm:Freedman} Let $K$ be a knot in $S^3$. If $\Delta_K(t)=1$, then $K$ is topologically superslice. \end{theorem} There are five knots, up to 12 crossings, with trivial Alexander polynomial. The first is the Conway knot $11_{n34}$.
Theorem~\ref{thm:Freedman} shows that this knot is topologically doubly slice. Interestingly, it turns out that each of the other four knots is smoothly superslice; the double of the ribbon disk is an unknotted 2--sphere in $S^4$. Thus we are led to the following problem, which at the moment seems inaccessible. \begin{problem} Find a smoothly slice knot $K$ with $\Delta_K(t) =1$ that is not smoothly superslice. \end{problem} Superslice knots were first studied by Gordon-Sumners~\cite{gordon-sumners}, who showed that the double of any slice knot is superslice and that for any superslice knot $K$, $\Delta_K(t) = 1$. Superslice knots were also studied in relation to the Property R Conjecture~\cite{brakes,gordon:satellite, kirby-melvin:R}. We remark that many infinite families of superslice knots can be created by taking any properly embedded arc in the complement of an unlink that is homotopic, but not isotopic, to the trivial arc connecting the two components and banding along the arc with some framing. Changing the framing produces infinitely many knots in each family which can be distinguished from each other by their Jones polynomials. For example, any of the three knots shown in Figure~\ref{fig:SuperRibbon12} gives rise to such a family by adding twists to the band in each case. \subsection{Proof of Theorem~\ref{thm:Smooth}}\ We are now equipped to prove our main result. \begin{reptheorem}{thm:Smooth} The following knots are smoothly doubly slice. $$ \begin{array}{lllllll} 9_{46} & 10_{99} & 10_{123} & 10_{155} & 11_{n42} & 11_{n49} & 11_{n74} \\ 12_{a0427} & 12_{a1105} & 12_{n0268} & 12_{n0309} & 12_{n0313} & 12_{n0397} & 12_{n0414} \\ 12_{n0430} & 12_{n0605} & 12_{n0636} & 12_{n0706} & 12_{n0817} & 12_{n0838} \end{array} $$ Furthermore, the following are the only other prime knots with 12 crossings or fewer that could possibly be smoothly doubly slice.
$$ \begin{array}{lllllll} 11_{n34} & 11_{n73} & 12_{a1019} & 12_{a1202} \end{array} $$ \end{reptheorem} \begin{proof} The Kinoshita-Terasaka knot $11_{n42}$ was shown to be smoothly superslice in~\cite{carter-kamada-saito}. Figure~\ref{fig:SuperRibbon12} shows ribbon disks for $12_{n0313}$ and $12_{n0430}$. It is easy to see that each knot satisfies the hypotheses of Proposition~\ref{prop:super}; therefore, each of these knots is smoothly superslice, hence smoothly doubly slice. The remaining 17 knots are shown in Figures~\ref{fig:TwoBand1},~\ref{fig:TwoBand2}, and~\ref{fig:TwoBand3}. With the exception of $12_{n0636}$, these knots all satisfy Theorem~\ref{thm:don1}. The knot $12_{n0636}$ requires a pair of two-band systems, and is smoothly doubly slice by Theorem~\ref{thm:don2}. (Note that the order in which the two-band systems are resolved does not matter in this case.) \end{proof} Portions of Theorem~\ref{thm:Smooth} were previously known: $9_{46}$ was first smoothly doubly sliced by Sumners~\cite{sumners:invertible}, $11_{n42}$ by Carter, Kamada, and Saito~\cite{carter-kamada-saito}, and $10_{123}$ (private communication) and $11_{n74}$ (in~\cite{donald:embedding}) by Donald. \begin{question}\label{question:Smooth} Are any of the following knots smoothly doubly slice? $$ \begin{array}{lllllll} 11_{n34} & 11_{n73} & 12_{a1019} & 12_{a1202} \end{array} $$ \vskip.05in \end{question} Recall that $11_{n34}$ is topologically doubly slice. Other than this, Question~\ref{question:Smooth} applies equally well in the topological setting and covers all possibilities. This completes our analysis. \section{The double slice genus of knots}\label{sec:genus} The study of doubly slice knots can be placed in the broader context of the relationship between knots in the 3--sphere and surfaces in the 4--sphere. In this section, we will briefly describe this more general setting. Let $\mathcal S$ be an orientable surface in $S^4$.
We say that $\mathcal S$ is \emph{unknotted} if $\mathcal S$ bounds a handlebody $H$ in $S^4$. Let $\mathcal S$ be an unknotted surface in $S^4$, and suppose that $\mathcal S$ transversely intersects the standard $S^3$ in a knot $K$. We say that $K$ \emph{divides} $\mathcal S$. Let $K$ be a knot in $S^3$ and let $F$ be a Seifert surface for $K$ with $g(F)=g$. We think of $F\subset S^3\subset S^4$, where $S^3$ lies as the equator of $S^4$. Let $H=F\times[-1,1]$, with $H\cap S^3=F$; the surface $F$ is the intersection of a handlebody $H\subset S^4$ with $S^3$. Let $\mathcal S=\partial H$. Then, $\mathcal S$ is an unknotted surface in $S^4$ (by definition) and $K=\mathcal S\cap S^3$. It follows that every knot $K$ in $S^3$ divides an unknotted surface in $S^4$. Therefore, we define $$g_{ds}(K)=\min\{g(\mathcal S)\ |\ \mathcal S\subset S^4, \text{\ $\mathcal S$ unknotted, and } \mathcal S\cap S^3=K \}.$$ We call $g_{ds}(K)$ the \emph{double slice genus} of $K$. Note that $g_{ds}(K)=0$ if and only if $K$ is doubly slice. Furthermore, we saw above that $g_{ds}(K)\leq 2g_3(K)$. Similarly, it is clear that $2g_4(K)\leq g_{ds}(K)$. The bounds $2g_4(K)\leq g_{ds}(K)\leq 2g_3(K)$ are already enough to determine the double slice genus for a third of the knots up to nine crossings. A more detailed analysis will be the subject of future study by the authors. \begin{figure} \centering \includegraphics[scale = .5]{TwoBand1} \caption{The above knots are smoothly doubly slice. See Subsection~\ref{subsec:bands}.} \label{fig:TwoBand1} \end{figure} \begin{figure} \centering \includegraphics[scale = .43]{TwoBand2.pdf} \caption{The above knots are smoothly doubly slice. See Subsection~\ref{subsec:bands}.} \label{fig:TwoBand2} \end{figure} \begin{figure} \centering \includegraphics[scale = .4]{TwoBand3.pdf} \caption{The above knots are smoothly doubly slice. See Subsection~\ref{subsec:bands}.} \label{fig:TwoBand3} \end{figure} \clearpage \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} Advanced teleconferencing systems, smart rooms, or surveillance and monitoring systems are example applications of distributed audio-visual sensor networks. For many tasks, such as automatic camera steering, events or objects of interest have to be localized either acoustically, visually or jointly, which in turn requires that the positions of the sensors be known. While the sensor positions can be determined manually, it is more convenient to do so automatically, in particular if they can change over time, e.g., if a smartphone, which is part of the network, is carried by a moving person. Automatic geometry calibration of sensors is typically realized by localizing and tracking an object and subsequently determining the position of the sensors, such that the measurements of the object's positions are most plausible. Visual calibration algorithms work on features extracted from the camera images. They can be divided into two categories \cite{Bru2011}. The first one tries to extract these features from easily recognizable objects \cite{Chen00widearea}, whereas the second group extracts features from an arbitrary scene to compare the fields of view of the individual cameras \cite{Bruckner10:ASM}. For acoustic sensor nodes, time of flight (ToF) based algorithms have been proposed, which employ special calibration hardware and signals to achieve high positioning accuracies \cite{1369311,Crocco:etal:2012}. However, a tight clock synchronization between transmitter and receiver is required; this limitation was relaxed in \cite{6637618} by estimating the differences in the sampling phase and the sensor positions jointly. If the calibration is based on time difference of arrival (TDoA) measurements, loudspeaker and microphones need no longer be synchronized, and a human speaker can be used as sound source. However, a clock synchronization of the A/D converters of the distributed microphones is still required.
Even this requirement becomes obsolete if direction of arrival (DoA) based techniques are employed \cite{JaScHa12,Jacob2013}. TDoA and DoA based calibration, if carried out with artificial calibration signals with appropriate correlation properties, will typically achieve higher accuracy than speech signal based approaches \cite{5661986}. Calibration based on natural speech is nevertheless preferable from a usability point of view, as it can be carried out in the background unnoticed by the users of the audio-visual sensor network. Most geometry calibration techniques are unable to report the sensor positions in absolute coordinates. They return their estimates in a modality specific coordinate system, resulting in an unknown rotation, translation and scaling between the coordinate axes of the acoustic and visual sensor network. The scaling ambiguity can be fixed if ToA or TDoA measurements are employed \cite{Hennecke2011-TAS,ScJaHaHeFi11}. If the calibration is based solely on DoA measurements, the scale ambiguity still remains, regardless of the modality used \cite{ScJaHaHeFi11, 5206810}. If the sensor positions of one modality are known, the displacement between the coordinate systems can be resolved by exploiting audio-visual correlates, i.e., events or objects that can be localized both acoustically and visually \cite{JaHa2014,PlingAV}. In this paper we build upon this idea and present two strategies to localize the acoustic sensors in a joint audio-visual coordinate system. Both the acoustic and the visual localization are based solely on DoA measurements, as these impose the least synchronization requirements, as detailed above. The first approach uses the existing acoustic sensor calibration techniques from \cite{JaScHa12}. Based on the relative geometry estimates, the speaker trajectory can be recovered with the intersection based approach from \cite{479481}, while simultaneously the speaker trajectory is estimated by the visual sensor network.
By computing the optimal mapping between the acoustic and visual trajectory we are able to reveal rotation, translation and scale between both modalities. The second approach exploits the fact that the sensors of both modalities deliver DoA estimates. Thus, acoustic and visual measurements can be cast in a single system of equations to determine the acoustic sensor positions, while the known visual sensor positions serve as anchor positions to eliminate the scale ambiguity. A key component of both DoA based calibration methods is the random sample consensus (RANSAC) outlier rejection algorithm \cite{RanSaC81}, which diminishes the impact of poor DoA estimates on the localization performance. In the case of the joint calibration, this scheme will not only reject acoustic DoA outliers, but visual DoA outliers as well. In the next section we introduce the first approach based on a coordinate mapping, whereas \Secref{sec:calibration} describes the joint calibration approach. The performance of both algorithms is evaluated in \Secref{sec:results}, before \Secref{sec:conclusion} concludes this paper. \section{Coordinate Mapping} \label{sec:mapping} Our goal is the estimation of the coordinates of $I$ acoustic sensors, where the coordinate system is defined by the known positions of $K$ visual sensors. The location of the $k$-th visual sensor node is described in 2D by the position vector $\mathbf{c}_k$ and orientation $\gamma_k$. Now, consider a moving speaker located at position $\mathbf{e}_t$ at time $t$, who is seen by the visual sensors at DoAs $\delta_{k,t}$, $k{=}1, \dots, K$. A position estimate $\mathbf{e}_t$ is obtained from the DoAs by the intersection based technique presented in \cite{479481}.
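For illustration, intersecting the bearing lines defined by the sensor positions, orientations and DoAs can be posed as a small linear least-squares problem. The sketch below is our own reimplementation of such a DoA intersection, not the exact method of \cite{479481}; function and variable names are ours, and angles are assumed to be in radians in world coordinates.

```python
import numpy as np

def intersect_doas(positions, orientations, doas):
    """Least-squares intersection of bearing lines.

    Sensor k at position c_k with orientation gamma_k observes the source
    at DoA delta_k, so the absolute bearing is theta_k = gamma_k + delta_k.
    The source e lies on each bearing line: n_k . (e - c_k) = 0, where
    n_k = (-sin theta_k, cos theta_k) is the line's unit normal.  Stacking
    these constraints gives an overdetermined system N e = r.
    """
    theta = np.asarray(orientations) + np.asarray(doas)
    normals = np.stack([-np.sin(theta), np.cos(theta)], axis=1)   # (K, 2)
    rhs = np.einsum('ki,ki->k', normals, np.asarray(positions, float))
    e, *_ = np.linalg.lstsq(normals, rhs, rcond=None)
    return e
```

With noise-free DoAs from two or more sensors whose bearings are not parallel, this recovers the source position exactly; with noisy DoAs it returns the least-squares point.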
The acoustic DoA estimates $\varphi_{i,t}$, $i{=}1,\dots , I$; $t{=}1, \dots, T$, captured from the same speaker trajectory are used to determine estimates $\tilde{\mathbf{m}}_i$, $i{=}1,\dots,I$, of the acoustic sensor positions and estimates $\Theta_i$ of the orientations, using the calibration algorithm from \cite{JaScHa12}. This algorithm can only provide a relative geometry with an unknown scale factor. Therefore, only relative speaker position estimates $\tilde{\mathbf{e}}_t$ are obtained, using the same intersection based method as above. Since the acoustic event locations $\tilde{\mathbf{e}}_t$ are described in a different coordinate system than the visual estimates $\mathbf{e}_t$, the following coordinate mapping problem arises: \begin{align} \label{eq:coordinateTransformation} \mathbf{e}_t = s\mathbf{R}\tilde{\mathbf{e}}_t+\mathbf{d} \text{;\quad} t=1, \dots , T \text{,} \end{align} where $s$ models the unknown scale factor, and $\mathbf{R}$ and $\mathbf{d}$ model the rotation and translation between the acoustic and the visual coordinate system. Mapping a set of points from one coordinate system to another is known as a Rigid Body Transformation (RBT). In contrast to the widespread approach from \cite{Challis}, which computes the RBT parameters (scale, rotation and translation) via a Singular Value Decomposition (SVD), we suggest a computation in the Discrete Fourier transform (DFT) domain, which turns out to be computationally more efficient. Hence, we introduce a complex representation of the estimated speaker positions as $u_t{=}\tilde{e}_{1,t}{+}\mathrm{j}\tilde{e}_{2,t}$ and $v_t{=}e_{1,t}{+}\mathrm{j}e_{2,t}$, where $\tilde{\mathbf{e}}_t {=} (\tilde{e}_{1,t}, \tilde{e}_{2,t})$ and $\mathbf{e}_t {=} (e_{1,t}, e_{2,t})$ are the two-dimensional speaker positions in the acoustic and visual coordinate system, respectively.
Thus, the mapping problem of \Eqref{eq:coordinateTransformation} is expressed as \begin{align} \label{eq:shapeMapping} v_t = \alpha u_t + \beta\text{;\ } \alpha,\beta\in\mathbb{C}\text{.} \end{align} The absolute value and the phase of $\alpha$ correspond to scale and orientation, while $\beta$ corresponds to the translation. Arranging all observations into vectors $\mathbf{v} {=} [v_1,\dots,v_T]^\mathrm{T}$ and $\mathbf{u} {=} [u_1,\dots,u_T]^\mathrm{T}$, the least squares estimate of the RBT parameters in the complex space is given by \begin{align} \label{eq:objComplex} \langle \alpha^\ast, \beta^\ast \rangle = \underset{\alpha,\beta}{\argmin} \left(\alpha \mathbf{u} + \beta\mathbf{1} - \mathbf{v}\right)^\mathrm{H}\left(\alpha \mathbf{u} + \beta\mathbf{1} - \mathbf{v}\right) \text{,} \end{align} where $\mathbf{1}$ denotes a $T$-element vector of ones and $(\cdot)^\mathrm{H}$ the complex conjugate transpose of a vector. Let $\mathbf{x}$ and $\mathbf{y}$ denote the DFTs of $\mathbf{u}$ and $\mathbf{v}$, respectively. The optimization problem of \Eqref{eq:objComplex} is expressed in the DFT domain as \begin{align} \langle \alpha^\ast, \beta^\ast \rangle = \underset{\alpha,\beta}{\argmin} \left(\alpha \mathbf{x} + \beta\mathbf{z} - \mathbf{y}\right)^\mathrm{H}\left(\alpha \mathbf{x} + \beta\mathbf{z} - \mathbf{y}\right) \text{,} \end{align} where $\mathbf{z} {=}\left[\begin{matrix}1,0,\dots,0\end{matrix}\right]^\text{T}$ is a vector of length $T$.
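For concreteness, the least squares problem of \Eqref{eq:objComplex} can also be solved directly in the sample domain. The following Python sketch is our own illustration (function and variable names are hypothetical, not part of any reference implementation) and recovers the RBT parameters from two complex trajectories:

```python
import numpy as np

def fit_rbt(u, v):
    """Least-squares fit of v_t = alpha*u_t + beta for complex-valued
    trajectories u, v; alpha encodes scale and rotation, beta the translation."""
    A = np.column_stack([u, np.ones(len(u))])
    (alpha, beta), *_ = np.linalg.lstsq(A, v, rcond=None)
    s = np.abs(alpha)                     # scale
    c = alpha / s                         # unit-modulus rotation factor
    R = np.array([[c.real, -c.imag],
                  [c.imag,  c.real]])     # 2x2 rotation matrix
    d = np.array([beta.real, beta.imag])  # translation vector
    return s, R, d
```

The DFT domain route pursued in the text yields the same estimates in closed form.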
Due to the orthogonality properties of the DFT, the joint optimization decouples into two separate optimizations: \begin{align} \label{eq:objShape} \alpha^\ast &= \underset{\alpha}{\argmin} \left(\alpha\mathbf{x}_{2:T}{-}\mathbf{y}_{2:T}\right)^\mathrm{H}\left(\alpha\mathbf{x}_{2:T}{-}\mathbf{y}_{2:T}\right)\text{ and} \end{align} \vspace*{-0.8cm} \begin{align} \label{eq:obj3} \beta^\ast &= \underset{\beta}{\argmin} \left(\alpha^\ast x_{1}{+}\beta{-}y_{1}\right)^\mathrm{H}\left(\alpha^\ast x_{1}{+}\beta{-}y_{1}\right)\text{,} \end{align} where the first bin of the DFTs is denoted by $(\cdot)_1$ and all other bins by $(\cdot)_{2:T}$. Since \Eqref{eq:objShape} and \Eqref{eq:obj3} are standard least squares problems, the solution is found to be \begin{align} \label{eq:AlphaBeta} \alpha^\ast = \mathbf{x}_{2:T}^\mathrm{H}\mathbf{y}_{2:T} / \left(\mathbf{x}_{2:T}^\mathrm{H}\mathbf{x}_{2:T}\right)\text{\ and \ } \beta^\ast = {y}_1-\alpha^\ast {x}_1\text{.} \end{align} The RBT parameters can be retrieved as follows: \begin{align} \label{eq:RBT} s = \left|\alpha\right|\text{, } \mathbf{R} = \left[\begin{smallmatrix} \Re\left\{\frac{\alpha}{s}\right\}& -\Im\left\{\frac{\alpha}{s}\right\}\\ \Im\left\{\frac{\alpha}{s}\right\} & \Re\left\{\frac{\alpha}{s}\right\} \end{smallmatrix}\right]\text{ and } \mathbf{d} = \frac{1}{T}\left[\begin{smallmatrix} \Re\left\{\beta\right\} \\\Im\left\{\beta\right\} \end{smallmatrix}\right]\text{,} \end{align} where $\Re$ and $\Im$ denote the real and imaginary part, respectively, and the factor $\frac{1}{T}$ compensates for the scaling of the zeroth DFT bin. If this transformation is applied to the relative acoustic sensor position estimates $\tilde{\mathbf{m}}_i$ according to \Eqref{eq:coordinateTransformation}, the absolute acoustic sensor positions $\mathbf{m}_i$ in the visual coordinate system are obtained. To summarize, the calibration algorithm to recover the acoustic sensor positions in the visual coordinate system consists of three steps.
First, run the relative acoustic calibration algorithm and estimate the speaker trajectory; at the same time, track the speaker in the visual domain. Second, compute the DFTs of both trajectories, evaluate \Eqref{eq:AlphaBeta} and compute the RBT parameters by \Eqref{eq:RBT}. Finally, use the RBT parameters to transform the acoustic sensor position estimates from the first step into the visual coordinate system. The DFT based RBT parameter estimation delivers the same results as the conventional SVD based technique \cite{Challis}, but our FFT based implementation is twice as fast as the SVD. \section{Joint Calibration} \label{sec:calibration} Since both the acoustic and the visual sensors deliver DoA estimates, we propose to extend the calibration algorithm that, in step one of the procedure described in the last section, was applied to the acoustic sensors only, to both modalities, and thus to jointly calibrate the audio-visual network. Due to the known positions of the visual sensors, the scale ambiguity vanishes. In the local coordinate system of the $i$-th acoustic sensor, a DoA measurement can be modelled as a unit length vector \begin{align} \mathbf{f}_{i,t} = [\begin{matrix}\cos\left(\varphi_{i,t}\right) & \sin\left(\varphi_{i,t}\right)\end{matrix} ]^\mathrm{T} \text{,} \end{align} pointing from the sensor position to the event location. This measurement vector is compared with a prediction vector \begin{align} \label{Eq:vp} \widehat{\mathbf{f}}_{i,t} &= \left[ \begin{matrix} \cos \left( \widehat{\varphi}_{i,t}-\Theta_i \right) \\ \sin \left( \widehat{\varphi}_{i,t}-\Theta_i \right) \end{matrix} \right] \text{,} \end{align} where $\widehat{\varphi}_{i,t} {=} \arg\left\{\mathbf{e}_{t}-\mathbf{m}_i\right\}$, see \Figref{fig:DoA}.
Following our previous publication \cite{JaScHa12}, this prediction can be formulated as a function of the geometry parameters as follows: \begin{align} \widehat{\mathbf{f}}_{i,t} &= \left[ \begin{matrix} \cos(\Theta_i) & \sin(\Theta_i) \\ -\sin(\Theta_i) & \cos(\Theta_i) \end{matrix} \right] \frac{\mathbf{e}_{t}-\mathbf{m}_i}{\|\mathbf{e}_{t}-\mathbf{m}_i\|} \text{.} \end{align} \begin{figure}[b] \centering \psfrag{e}{$\mathbf{e}_{t}$} \psfrag{p}{$\mathbf{m}_i$} \psfrag{ph}{$\varphi_{i,t}$} \psfrag{k}[cB][lb]{$\Theta_i$} \psfrag{a}{$\widehat{\varphi}_{i,t}$} \psfrag{xp}[c][c]{$m_{1,i}$} \psfrag{xe}[c][c]{$e_{1,t}$} \psfrag{yp}[cr][cc]{$m_{2,i}$} \psfrag{ye}[cr][cc]{$e_{2,t}$} \psfrag{x}{$x$} \psfrag{y}{$y$} \includegraphics[width=0.5\linewidth]{sensornode_doa.eps} \vspace{-0.27cm} \caption{Geometric relation between acoustic sensor and event location.} \label{fig:DoA} \end{figure} By introducing the abbreviation \begin{align} f = \sum\limits_{i=1}^I\sum\limits_{t=1}^T \|\mathbf{f}_{i,t}^\mathrm{T}\widehat{\mathbf{f}}_{i,t}\|^2 \end{align} and arranging the sensor positions, sensor orientations and events into matrices $\mathbf{M} {=} [\mathbf{m}_1, \dots, \mathbf{m}_I]$, $\mathbf{\Theta} {=} [\Theta_1,\dots, \Theta_I]$ and $\mathbf{E} {=} [\mathbf{e}_1,\dots, \mathbf{e}_T]$, respectively, the geometry can be recovered by \begin{align} \label{eq:OptCalib} \langle \mathbf{M}^*, \mathbf{\Theta^*},\mathbf{E}^* \rangle = \underset{\mathbf{M},\mathbf{\Theta},\mathbf{E}}{\argmax}\left\{f\right\} \text{.} \end{align} The maximization problem of \Eqref{eq:OptCalib} can easily be transformed into a root-finding problem, since $\mathbf{f}_{i,t}$ and $\widehat{\mathbf{f}}_{i,t}$ are unit length vectors. The optimization is then carried out by Newton's method. The formulation of the estimated and predicted DoA vectors holds for the visual sensors, too.
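To make the structure of the objective concrete, the acoustic term $f$ of \Eqref{eq:OptCalib} can be evaluated directly. The following Python sketch is a minimal illustration of ours (array layouts and names are assumptions, not the reference implementation):

```python
import numpy as np

def acoustic_objective(M, Theta, E, Phi):
    """Evaluate f: sum over sensors i and events t of the squared inner
    product between measured and predicted DoA unit vectors.
    M: (2, I) sensor positions, Theta: (I,) orientations,
    E: (2, T) event positions, Phi: (I, T) measured DoAs in radians."""
    I, T = Phi.shape
    f = 0.0
    for i in range(I):
        c, s = np.cos(Theta[i]), np.sin(Theta[i])
        rot = np.array([[c, s], [-s, c]])   # rotation into local sensor coordinates
        for t in range(T):
            meas = np.array([np.cos(Phi[i, t]), np.sin(Phi[i, t])])
            diff = E[:, t] - M[:, i]
            pred = rot @ (diff / np.linalg.norm(diff))
            f += (meas @ pred) ** 2
    return f
```

With error-free DoAs and the correct geometry, every inner product equals one, so $f$ attains its maximum value $I\cdot T$.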
Thus we define \begin{align} \mathbf{g}_{k,t} &= [\begin{matrix}\cos\left(\delta_{k,t}\right) & \sin\left(\delta_{k,t}\right)\end{matrix} ]^\mathrm{T} \text{ and } \\ \widehat{\mathbf{g}}_{k,t} &= \left[ \begin{matrix} \cos(\gamma_k) & \sin(\gamma_k) \\ -\sin(\gamma_k) & \cos(\gamma_k) \end{matrix} \right] \frac{\mathbf{e}_{t}-\mathbf{c}_k}{\|\mathbf{e}_{t}-\mathbf{c}_k\|} \text{,} \end{align} with the only difference that the visual sensor positions $\mathbf{c}_k$ and the corresponding orientations $\gamma_k$ are known. Hence, the visual DoA measurements form additional constraints for the optimization of \Eqref{eq:OptCalib}, and we incorporate them to obtain a formulation which allows a joint audio-visual calibration: \begin{align} \label{eq:OptCalibNew} \langle \mathbf{M}^*, \mathbf{\Theta^*},\mathbf{E}^* \rangle {=} \underset{\mathbf{M},\mathbf{\Theta},\mathbf{E}}{\argmax}\left\{f + \sum\limits_{k=1}^K \sum\limits_{t=1}^T \|\mathbf{g}_{k,t}^\mathrm{T}\widehat{\mathbf{g}}_{k,t}\|^2 \right\} \text{.} \end{align} The optimization of \Eqref{eq:OptCalibNew} is again turned into a root-finding problem in order to apply Newton's method, where the visual measurements provide the constraints required to obtain an absolute sensor position estimate in the coordinate system defined by the visual sensors. In the noise free case, with perfect DoA measurements, the sensor positions and orientations can be recovered perfectly, but imperfect acoustic or visual DoA estimates caused by reverberation or false detections can prevent a successful optimization. Our earlier investigations presented in \cite{JaScHa12} showed that this issue can successfully be addressed by the RANSAC \cite{RanSaC81}. Since the application of the RANSAC is straightforward, we highlight only the relevant parts. The procedure can be summarized as follows: \begin{compactenum} \item Randomly select the minimal number of observations necessary to solve \Eqref{eq:OptCalibNew}, i.e., $T>\frac{3I}{I+K-2}$ (each event adds two unknown coordinates but contributes $I{+}K$ DoA measurements, while the acoustic sensors account for $3I$ unknowns: positions and orientations).
\item Determine sensor positions and orientations based on the selected observations by solving \Eqref{eq:OptCalibNew}. \item Compute the intersections of all DoA axes for each event. The hypothesized event location is the mean of all intersections. A DoA measured by a sensor becomes part of the candidate set $\mathcal{C}$ if the average distance of all its intersection points to the hypothesized event location is smaller than a threshold. \item If the number of elements in $\mathcal{C}$ is larger than that of the consensus set, estimate the sensor positions and orientations based on $\mathcal{C}$. It becomes the new consensus set if its error is smaller than the error of the current consensus set. \item If the number of elements in $\mathcal{C}$ is smaller than that of the consensus set, choose a new initial set, or stop the algorithm as soon as the maximum number of iterations is reached. \end{compactenum} As a modification of this standard approach, we used the updated consensus set of step 4 as the input for the second step. \section{Simulation results} \label{sec:results} In order to evaluate the performance of both calibration strategies we used the following simulation framework. We simulated $3$ random speaker trajectories, where the speaker stops at approximately $140$ positions for $\SI{5}{seconds}$ before moving on. The sensors are located in a room of size $\SI{6.2}{m} \times \SI{7.2}{m}$. $4$ simulated cameras and $4$ simulated five-element circular microphone arrays (radius $\SI{5}{cm}$) are located sufficiently far away from the walls, with the cameras oriented towards the center of the room. The microphone signals are generated by the Image Method \cite{Imag}, for reverberation times from $\SI{0}{ms}$ up to $\SI{500}{ms}$. Acoustic DoA estimates are obtained by correlating the filter impulse responses of a filter-and-sum beamformer, which continuously adapts to the moving source \cite{War05}.
Rather than processing true camera signals, we simulate the visual DoA estimates as follows. We employ Hidden Markov Models (HMMs) to describe the errors in the DoA estimation. The limited field of view of the camera is taken into account by dropping all angles outside a window of $\pm\SI{30}{\degree}$ relative to the camera orientation. This effect is modelled by two separate HMMs. The first HMM is for the case that a speaker is inside the visible region of the camera. Here we distinguish the states 'detection', 'missed detection' and 'false detection'. The second HMM models the case that no speaker is inside the visible region. It incorporates the states 'false detection' and 'no detection'. The transition probabilities of these models and the variance of the error distribution have been learned by computing histograms of oriented gradients (HOG) and applying a support vector machine (SVM) to identify the head and shoulder region of the speaker on the AV16.3 audio-visual corpus \cite{lathoud04c}, using the annotated sequences \textit{seq01-1p-0000} and \textit{seq15-1p-0100}. In order to perform a fair comparison between the approaches presented in \Secref{sec:mapping} and \Secref{sec:calibration}, the estimation of the RBT parameters is embedded into a RANSAC framework, too, since we have shown in \cite{JaHa2014} that the RANSAC can boost the performance of the RBT parameter estimation. Since the RANSAC is a random process, we average over multiple runs. A sensor configuration is characterized by the sensor positions and orientations. \Figref{fig:err} compares the mean positioning error (MPE) of the coordinate mapping based calibration (RBT) and the joint calibration strategy (Joint). It can be observed that the joint calibration clearly outperforms the RBT approach, in particular at low reverberation times.
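For reference, the error measure can be made explicit; a minimal sketch, assuming the MPE is the average Euclidean distance between estimated and true sensor positions (the function name is ours):

```python
import numpy as np

def mean_positioning_error(M_est, M_true):
    """Average Euclidean distance between estimated and true sensor
    positions, stored as the columns of two (2, I) matrices."""
    return float(np.mean(np.linalg.norm(M_est - M_true, axis=0)))
```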
Obviously, it is advantageous to avoid premature decisions on the acoustic source and sensor positions until the visual information is accounted for, as is done in the joint calibration approach. \begin{figure} \begin{center} \setlength{\fheight}{3.2cm} \setlength{\fwidth}{0.78\linewidth} \input{Err.tikz} \end{center} \vspace*{-0.8cm} \caption{Comparison of the mean positioning error (MPE) for joint audio-visual calibration (joint), calibration by coordinate mapping (RBT) and coordinate mapping with oracle information (RBT + oracle).} \label{fig:err} \vspace*{-0.15cm} \end{figure} The coordinate mapping approach has limited capabilities to determine a precise scale factor, and errors in the scale factor dominate its performance. In order to isolate scale factor estimation errors from orientation and translation errors, we performed an oracle experiment in which the scaling is assumed to be known. Indeed, the performance is then similar to that of the joint approach for low reverberation times and superior in a highly reverberant environment. The sensor orientation error of both approaches is approximately the same and smaller than $\SI{2}{\degree}$ for all reverberation times. To achieve precise calibration results, a suitable spatial event configuration is more important than the total number of available events. Thus, we selected $15$ events with an appropriate configuration from one exemplar trajectory and performed a joint calibration. The results in \Tabref{tab:c1} show that a performance similar to that of the previous experiment, which used the complete trajectory, is achievable.
\begin{table}[h] \centering \begin{tabular}{l|c|c|c|c|c|c} $T_{60}$ / ms & 0 & 100 & 200 & 300 & 400 & 500\\\hline MPE / $\mathrm{m}$ & 0.01 & 0.02 & 0.06 & 0.14 & 0.13 & 0.23 \end{tabular} \caption{Joint calibration using 15 events with appropriate spatial configuration.} \label{tab:c1} \vspace*{-0.05cm} \end{table} \section{Conclusions} \label{sec:conclusion} We have described two different strategies to obtain an absolute calibration of an acoustic sensor network if it is combined with a visual sensor network, whose sensor positions are known. By using one of the two strategies, the scaling problem identified in earlier publications \cite{Bruckner10:ASM, JaScHa12, ScJaHaHeFi11} can be solved. The first approach, which relies on the mapping of an acoustic to a visual speaker trajectory, works with arbitrary acoustic calibration strategies and is therefore very flexible. However, the performance is limited due to the scale estimation errors. The second approach, which is based on the solution of a system of nonlinear equations employing acoustic and visual DoA measurements, is computationally more complex. It outperformed the first approach for all reverberation times and delivered a calibration error smaller than $\SI{0.20}{m}$ and $\SI{2}{\degree}$ even in reverberant environments. \vfill\pagebreak \bibliographystyle{IEEEbib}
\section{Introduction}\label{s:intro} In this paper we investigate 3--braid knots of finite concordance order. We work in the smooth category; therefore, the words `slice' and `concordance' will always mean, respectively, `smoothly slice' and `smooth concordance'. The {\em reverse} of an oriented knot will be denoted $-K$, and $K^m$ will denote the {\em mirror image} of $K$. In this notation, a connected sum of the form $K\#(-K^m)$ is always a {\em slice} knot, i.e.~it bounds a properly embedded disk $D\subset B^4$. Recall that a knot is {\em ribbon} if it bounds a properly embedded disk $D\subset B^4$ such that the restriction to $D$ of the radial function $B^4\to [0,1]$ has no local maxima in the interior of $D$. That each slice knot is ribbon is the content of the well--known slice--ribbon conjecture. A knot $K\subset S^3$ is {\em amphichiral} if $K^m$ is isotopic to either $K$ or $-K$. The knot $K$ is {\em chiral} if it is not amphichiral. Recall that the classical concordance group $\mathcal C$ is the set of equivalence classes $[K]$ of oriented knots $K\subset S^3$ with respect to the equivalence relation which declares $K_1$ and $K_2$ equivalent if $K_1\#(-K_2^m)$ is slice. The group operation is induced by connected sum and $0\in\mathcal C$ is the concordance class of the unknot. Several facts are known about the structure of $\mathcal C$, but its torsion subgroup is not understood, see e.g.~\cite{Li05}. We will say that a knot $K$ is of {\em finite concordance order} if $[K]\in\mathcal C$ is of finite order. In~\cite{Ba08} Baldwin obtained some information on 3--braid knots of finite concordance order by computing a certain correction term of the Heegaard Floer homology of the two--fold branched cover (see Section~\ref{s:prelim} for more details). We will use his result together with constraints obtained via Donaldson's `Theorem~A'~\cite{Do87} to establish Theorem~\ref{t:main} below, which is our main result.
Our approach here is similar to the one used in~\cite{Li07-1,Li07-2}, where the concordance orders of 2--bridge knots are determined. We point out that the idea of combining information coming from Heegaard Floer correction terms with Donaldson's Theorem~A was also used in~\cite{Do15, GJ11,Gr14, Le12, Le13}. Before we can state Theorem~\ref{t:main} we need to introduce some terminology. Recall that a {\em symmetric union} knot is a special kind of ribbon knot first introduced by Kinoshita and Terasaka~\cite{KT57}. It is unknown whether every ribbon knot is a symmetric union. A braid $\beta\in B_n$ is called {\em quasi--positive} if it can be written as a product of conjugates of the standard generators $\sigma_1,\ldots, \sigma_{n-1}\in B_n$, and {\em quasi--negative} if $\beta^{-1}$ is quasi--positive. A knot $K$ is {\em quasi--positive} (respectively {\em quasi--negative}) if $K$ is the closure of a quasi--positive (respectively quasi--negative) braid. We now recall the notion of `blowup' from~\cite{Li14}. Let $\mathbb N$ be the set of (positive) natural numbers, $\mathbb N_0 := \mathbb N\cup\{0\}$, and let $k\in\mathbb N$. We say that $\hat z\in\mathbb N_0^{k+1}$ is a {\em blow--up} of $z=(n_1,\ldots,n_k)\in\mathbb N_0^k$ if \[ \hat z = \begin{cases} (1,n_1+1,n_2,\ldots,n_{k-1},n_k+1),\ \text{or} \\ (n_1,\ldots,n_i+1,1,n_{i+1}+1,\ldots,n_k),\ \text{for some}\ 1\leq i < k,\ \text{or} \\ (n_1+1,n_2,\ldots,n_{k-1},n_k+1,1). \end{cases} \] There is a well--known isomorphism between the 3--braid group and the mapping class group of the one--holed torus. Let $T$ be an oriented, one--holed torus, and let $\rm Mod(T)$ be the group of isotopy classes of orientation--preserving diffeomorphisms of $T$, where isotopies are required to fix the boundary pointwise. Let $x, y \in\rm Mod(T)$ be right--handed Dehn twists along two simple closed curves in $T$ intersecting transversely once.
The group $\rm Mod(T)$ is generated by $x$ and $y$ subject to the relation $xyx=yxy$, and there is an isomorphism $\psi\thinspace\colon B_3\to\rm Mod(T)$ sending the standard generators $\sigma_1, \sigma_2$ to $x$ and $y$, respectively. The isomorphism $\psi$ can be realized geometrically by viewing $T$ as a two--fold branched cover over the $2$--disk with three branch points: elements of $B_3$, viewed as automorphisms of the triply--pointed disk, lift uniquely to elements of $\rm Mod(T)$. It is easy to check that an element $h\in\rm Mod(T)$ is the image under $\psi$ of a quasi--positive 3--braid if and only if $h$ can be written as a product of right--handed Dehn twists. Keeping the isomorphism $\psi$ in mind, it is easy to check that by~\cite[Theorem~2.3]{Li14}, if $(s_1,\ldots, s_N)$ is obtained from $(0,0)$ via a sequence of blowups and the string \[ (c_1,\ldots, c_N) := (x_1+2,\overbrace{2,\ldots,2}^{y_1-1},\ldots,x_t+2,\overbrace{2,\ldots,2}^{y_t-1}) \] satisfies $c_i\geq s_i$ for $i=1,\ldots, N$, then the 3--braid $(\sigma_1\sigma_2)^3\prod_{i=1}^t \sigma_1^{x_i}\sigma_2^{-y_i}$ is quasi--positive. Observe that any string obtained from $(0,0)$ via a sequence of blowups always contains at least two $1$'s and, typically, more than two. \begin{exas}\label{ex:qp} (1) According to Knotinfo~\cite{CL15}, the knot $12_{n0721}$ is slice, chiral, and equal to the closure of the 3--braid $\alpha=\sigma_1\sigma_2^2\sigma_1^4\sigma_2^{-5}$. Applying~\cite[Proposition~2.1]{Mu74} it is easy to check that $\alpha$ is conjugate to $(\sigma_1\sigma_2)^3\sigma_1^3\sigma_2^{-7}$, whose associated string of integers is obtained by changing the two $1$'s into $2$'s in $(5,1,2,2,2,2,1)$. Moreover, $(5,1,2,2,2,2,1)$ is an iterated blowup of $(0,0)$: \[ (5,1,2,2,2,2,1)\rightarrow (4,1,2,2,2,1)\rightarrow (3,1,2,2,1)\rightarrow (2,1,2,1)\rightarrow (1,1,1)\rightarrow (0,0). \] It follows from~\cite[Theorem~2.3]{Li14} that $\alpha$ is quasi--positive.
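Reductions such as the one displayed above are easily mechanised: reversing a blowup amounts to removing an entry equal to $1$ and decrementing its two cyclic neighbours. The following Python sketch (the helper names are ours, purely for illustration) decides whether a given string of non--negative integers is an iterated blowup of $(0,0)$:

```python
def blowdowns(s):
    """All strings obtainable from s by reversing one blowup: remove an
    entry equal to 1 and decrement its two cyclic neighbours."""
    k = len(s)
    results = []
    if k < 3:
        return results
    for i, value in enumerate(s):
        if value != 1:
            continue
        left, right = (i - 1) % k, (i + 1) % k
        if s[left] >= 1 and s[right] >= 1:
            t = list(s)
            t[left] -= 1
            t[right] -= 1
            del t[i]
            results.append(tuple(t))
    return results

def is_iterated_blowup(s, memo=None):
    """True iff the string s arises from (0, 0) by a sequence of blowups."""
    memo = {} if memo is None else memo
    if s == (0, 0):
        return True
    if s not in memo:
        # Lengths strictly decrease under blowdowns, so the search terminates.
        memo[s] = any(is_iterated_blowup(t, memo) for t in blowdowns(s))
    return memo[s]
```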
(2) As another example one may consider the knot $12_{n0708}$, which according to Knotinfo is slice, chiral and equal to the closure of the 3--braid $\beta=\sigma_1\sigma_2^{-3}\sigma_1\sigma_2^{-1}\sigma_1\sigma_2\sigma_1^{-1}\sigma_2^3$. Applying~\cite[Proposition~2.1]{Mu74} one can check that $\beta$ is conjugate to $(\sigma_1\sigma_2)^3\sigma_1\sigma_2^{-3}\sigma_1^2\sigma_2^{-4}$, corresponding to the string obtained by changing the two $1$'s into $2$'s in $(3,1,2,4,1,2,2)$, which is an iterated blowup of $(0,0)$. Therefore $12_{n0708}$ is also quasi--positive. Previously, the quasi--positivity of $12_{n0708}$ and $12_{n0721}$ appears to have been unknown; compare~\cite{CL15}~\footnote{Although there exists an algorithm to establish the quasi--positivity of any 3--braid (but not the quasi--positivity of any 3--braid closure)~\cite{Or04}.}. \end{exas} Before we can state our main result we need a little more terminology. Let $P\subset\mathbb R^2$ be a regular polygon with $t\geq 2$ vertices. Let $V_P=\{v_1,\ldots, v_t\}$ and $E_P=\{e_1,\ldots, e_t\}$ be the sets of vertices and edges of $P$, indexed so that, for each $i=1,\ldots, t-1$, the edge $e_i$ is between $v_i$ and $v_{i+1}$. A~{\em labelling} of $P$ will be a pair $(X,Y)$ of maps $X\thinspace\colon V_P\to\mathbb N$ and $Y\thinspace\colon E_P\to\mathbb N$. We say that a $2t$--tuple $(x_1,y_1,x_2,y_2,\ldots, x_t,y_t)\in\mathbb N^{2t}$ {\em encodes} a labelling $(X,Y)$ of $P$ if \[ (x_1,y_1,x_2,y_2,\ldots, x_t,y_t) = (X(v_1),Y(e_1),\ldots, X(v_t), Y(e_t)). \] Given a labelling $(X,Y)$ and a symmetry $\varphi\thinspace\colon P\to P$ of $P$, one can define a new labelling of $P$ by setting $(X,Y)^{\varphi}:=(X\circ\varphi_V, Y\circ\varphi_E)$, where $\varphi_V\thinspace\colon V_P\to V_P$ and $\varphi_E\thinspace\colon E_P\to E_P$ are the $1-1$ maps induced by $\varphi$. We are now ready to state our main result. \begin{thm}\label{t:main} Let $K\subset S^3$ be a $3$--braid knot of finite concordance order.
Then, one of the following holds: \begin{enumerate} \item either $K$ or $K^m$ is the closure of a 3--braid $\beta$ of the form \[ \beta = (\sigma_1\sigma_2)^3\sigma_1^{x_1}\sigma_2^{-y_1}\cdots\sigma_1^{x_t}\sigma_2^{-y_t},\quad t, x_i, y_i\geq 1, \] where $\sum_i y_i = \sum_i x_i +4$ and the string of positive integers $ (x_1+2,\overbrace{2,\ldots,2}^{y_1-1},\ldots,x_t+2,\overbrace{2,\ldots,2}^{y_t-1}) $ is obtained from an iterated blowup of $(0,0)$ by replacing two $1$'s with $2$'s. Moreover, $\beta$ is quasi--positive and $K$ is ribbon. \item $K$ is a symmetric union of the form $L_a$ given in Figure~\ref{f:symmunion}, where $a\in B_3$; \begin{figure}[ht] \centering \labellist \hair 2pt \pinlabel \large\rotatebox[origin=c]{90}{$a$} at 265 168 \pinlabel \large\rotatebox[origin=c]{270}{$a^{-1}$} at 50 168 \endlabellist \centering \includegraphics[scale=0.4]{symmunion} \caption{The link $L_a$} \label{f:symmunion} \end{figure} \item $K$ is isotopic to the closure of a 3--braid $\beta$ of the form \[ \beta = \sigma_1^{x_1}\sigma_2^{-y_1}\cdots\sigma_1^{x_t}\sigma_2^{-y_t},\quad t\geq 2,\ x_1,\ldots, x_t\geq 1, \] where $\sum_i y_i = \sum_i x_i$ and there exist a regular polygon $P$ with $t$ vertices and a symmetry $\varphi\thinspace\colon P\to P$ such that $(X,Y) = (Y,X')^\varphi$, where $(X,Y)$ is the labelling encoded by $(x_1,y_1,\ldots, x_t,y_t)$ and $(Y,X')$ is the labelling encoded by $(y_1,x_2, y_2,x_3,\ldots, y_t,x_1)$. Moreover, $K$ is amphichiral. \end{enumerate} \end{thm} It is natural to wonder how the three families of knots appearing in Theorem~\ref{t:main} intersect each other. It is easy to check that a knot belonging to Family~(1) is the closure of a 3--braid $\beta$ satisfying $e(\beta)=\pm 2$, where $e\thinspace\colon B_3\to\bZ$ denotes the abelianization homomorphism (exponent sum with respect to the standard generators $\sigma_i$). 
On the other hand, a knot $K$ belonging to Family~(2) or Family~(3) is the closure of a 3--braid $\beta$ with $e(\beta)=0$. By the main result of~\cite{BM93}, a link which can be represented as a 3--braid closure admits a unique conjugacy class of 3--braid representatives, with the exceptions of (i) the unknot, which can be represented only by the conjugacy classes of $\sigma_1\sigma_2$, $\sigma_1\sigma_2^{-1}$ or $\sigma_1^{-1}\sigma_2^{-1}$, (ii) a type $(2,k)$ torus link with $k\neq\pm 1$ and (iii) a special class of links admitting at most two conjugacy classes of 3--braid representatives having the same exponent sum. By the results of~\cite{Mu74} none of the 3--braids giving rise to the knots of Family~(1) is conjugate to either $\sigma_1\sigma_2$ or $\sigma_1^{-1}\sigma_2^{-1}$, therefore the unknot does not belong to the first family. Moreover, a $(2,k)$--torus knot with $k\neq\pm 1$ has non--vanishing signature, while clearly knots belonging to Family~(2) have vanishing signature. By the computations of~\cite{Er99} (see the proof of Lemma~\ref{l:linking-finite-order}) knots belonging to Family~(3) also have vanishing signature. Therefore we can conclude that Family~(1) is disjoint from the union of Families~(2) and~(3). On the other hand, there are knots belonging simultaneously to Families~(2) and~(3). The easiest example is the knot $8_9$, which coincides with $L_{\sigma_1\sigma_2^{-1}\sigma_1^2}$ and therefore belongs to Family~(2), while according to Knotinfo~\cite{CL15} it is the closure of $\sigma_1^3\sigma_2^{-1}\sigma_1\sigma_2^{-3}$ and therefore belongs to Family~(3) as well. As a final comment we point out that both Families~(1) and~(2) contain chiral knots. For instance, according to Knotinfo the 3--braid slice knot $8_{20}$ is chiral and equal to the closure of a quasi--negative 3--braid, therefore it belongs to Family~(1).
More examples of chiral knots belonging to Family~(1) are $12_{n0708}$ and $12_{n0721}$, described in the examples before Theorem~\ref{t:main}. Similarly, the 3--braid slice knots $10_{48}$ and $12_{a1011}$ are chiral and neither quasi--positive nor quasi--negative, therefore they belong to Family~(2). The following corollary follows immediately from Theorem~\ref{t:main}. \begin{cor}\label{c:main} A chiral 3--braid knot of finite concordance order is ribbon. \qed\end{cor} \begin{rmks} (1) Corollary~\ref{c:main} holds for 2--bridge knots. Indeed, by~\cite[Corollary~1.3]{Li07-2} a chiral 2--bridge knot of finite concordance order is slice, and by~\cite{Li07-1} a slice 2--bridge knot is ribbon. (2) It is not difficult to find chiral knots of finite concordance order which do not satisfy the conclusion of Corollary~\ref{c:main} (and therefore cannot be closures of 3--braids). For instance, according to Knotinfo~\cite{CL15} the chiral 4--braid knots $9_{24}$ and $9_{37}$ have concordance order $2$. (3) There are $185$ knots with crossing number at most twelve and braid index $3$. Among these, $12$ are ribbon (some of which are quasi--positive or quasi--negative), $17$ are amphichiral and have concordance order $2$, while $151$ have infinite concordance order. The remaining knots $10_{91}$, $12_{a1199}$, $12_{a1222}$, $12_{a1231}$ and $12_{a1258}$ are chiral and non--slice. In view of Corollary~\ref{c:main}, the five knots above have infinite concordance order. Previously, their concordance orders appear to have been unknown; compare~\cite{CL15}. \end{rmks} Theorem~\ref{t:main} naturally leads to the following problem, to which we hope to return in the future. \begin{problem}\label{prob:family3} Determine the concordance orders of the knots in Family~(3) of Theorem~\ref{t:main}. \end{problem} The paper is organized as follows. In Section~\ref{s:prelim} we collect some preliminary results on knots of finite concordance order which are 3--braid closures.
The purpose of the section is to prove Proposition~\ref{p:embed}, which uses Donaldson's `Theorem A' to show that if a 3--braid knot $K$ has concordance order $k$, then the orthogonal sum of $k$ copies of a certain negative definite integral lattice embeds isometrically in the standard negative definite lattice of the same rank. In Section~\ref{s:latan} we draw the lattice--theoretical consequences of the existence of the isometric embedding given by Proposition~\ref{p:embed}. Section~\ref{s:latan} contains the bulk of the technical work. In Section~\ref{s:proof} we prove Theorem~\ref{t:main} using the results of Sections~\ref{s:prelim} and~\ref{s:latan}. \bigskip \noindent {\bf Acknowledgements:} The author wishes to warmly thank an anonymous referee for her/his extremely thorough job, which helped him to correct several mistakes and to improve the exposition. \section{Preliminaries}\label{s:prelim} The following simple lemma uses the slice--Bennequin inequality~\cite{Ru93} to establish a basic property of braid closures having finite concordance order. \begin{lemma}\label{l:linking-finite-order} Let $K\subset S^3$ be a knot of finite concordance order which is the closure of a braid $\beta\in B_n$. Then, \[1-n\leq e(\beta)\leq n-1,\] where $e\thinspace\colon B_n\to\bZ$ is the exponent sum homomorphism. \end{lemma} \begin{proof} Recall that, if a knot $N\subset S^3$ is the closure of an $n$--braid $\gamma\in B_n$, the slice Bennequin inequality~\cite{Ru93} reads \begin{equation}\label{e:sBI} 1 - g_s(N) \leq n - e(\gamma), \end{equation} where $g_s(N)$ is the slice genus of $N$. Now observe that, for each $m\geq 1$, the knot $mK:=K\#\stackrel{(m)}{\cdots}\#K$ is the closure of the $mn$--braid $\eta$ described in Figure~\ref{f:mnbraid}.
\begin{figure}[h] \labellist \hair 2pt \pinlabel $\beta$ at 128 45 \pinlabel $\beta$ at 128 260 \pinlabel $\beta$ at 128 372 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 40 47 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 40 262 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 40 374 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 215 47 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 215 262 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 215 374 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 310 47 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 310 262 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 310 374 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 260 156 \pinlabel \rotatebox[origin=c]{90}{$\cdots$} at 128 156 \endlabellist \centering \includegraphics[scale=0.4]{mnbraid} \caption{The $mn$--braid $\eta$ with $\hat\eta=mK$} \label{f:mnbraid} \end{figure} If $K$ has concordance order $m$ then $g_s(mK)=0$. Since $e(\eta)=me(\beta)+m-1$, applying Inequality~\eqref{e:sBI} to $\eta$ we get the inequality $e(\beta)\leq n-1$. Notice that the mirror image of $K$ is a knot of concordance order $m$ which is the closure of an $n$--braid $\bar\beta$ such that $e(\bar\beta) = -e(\beta)$. Arguing as before we have $e(\bar\beta)\leq n-1$, and the statement follows. \end{proof} In~\cite{Ba08} Baldwin combines Murasugi's normal form for 3--braid closures~\cite{Mu74} and the signature computations of~\cite{Er99} with computations of the concordance invariant $\delta$ defined by Manolescu--Owens~\cite{MO07} to establish the following proposition. Here we provide a short proof of Baldwin's result based on~\cite{Mu74, Er99} and Lemma~\ref{l:linking-finite-order}. \begin{prop}[{\cite[Prop.~8.6]{Ba08}}]\label{p:prelim} Let $K\subset S^3$ be a $3$--braid knot of finite concordance order.
Then, $K$ is the closure of a 3--braid $\beta\in B_3$ of the form \begin{equation}\label{e:prelim} \beta = (\sigma_1\sigma_2)^{3d} \sigma_1^{x_1}\sigma_2^{-y_1}\cdots\sigma_1^{x_t}\sigma_2^{-y_t},\quad t, x_i, y_i\geq 1, \end{equation} where $d\in\{-1,0,1\}$ and $\sum_{i=1}^t (x_i-y_i) = -4d$. Moreover, if $d=\pm 1$ then up to replacing $K$ with $K^m$ one can take $d=1$ in Equation~\eqref{e:prelim}. \end{prop} \begin{proof} As observed in~\cite[Remark~8.4]{Ba08}, the results of~\cite{Mu74} immediately imply that a 3--braid knot of finite concordance order is either the unknot or is isotopic to the closure $\hat\beta$ of a 3--braid $\beta$ of the form: \begin{equation}\label{e:proof-braid} \beta = (\sigma_1\sigma_2\sigma_1)^{2d} \sigma_1 \sigma_2^{-a_1} \cdots \sigma_1\sigma_2^{-a_n},\quad a_i\geq 0,\ \text{some $a_j>0$}, \end{equation} where $d\in\bZ$. By~\cite{Er99} $K$ has signature \begin{equation}\label{e:signature} \sigma(K) = -n -4d +\sum_{i=1}^n a_i = 2d - e(\beta), \end{equation} where $e\thinspace\colon B_3\to\bZ$ is the exponent sum homomorphism. Since $\sigma :\mathcal C\to\bZ$ is a homomorphism, the fact that $K$ has finite concordance order implies $\sigma(K)=0$, therefore $e(\beta) = 2d$. Moreover, Lemma~\ref{l:linking-finite-order} implies $d\in\{-1,0,1\}$. Since $(\sigma_2\sigma_1\sigma_2)^2=(\sigma_1\sigma_2\sigma_1)^2=(\sigma_1\sigma_2)^3$ in $B_3$, if $d=1$ we are done. If $d=-1$ and $K$ is the closure of $(\sigma_1\sigma_2\sigma_1)^{-2} \sigma_1^{x_1}\sigma_2^{-y_1}\cdots\sigma_1^{x_t}\sigma_2^{-y_t}$ then $K^m$ is the closure of \[ (\sigma_1^{-1}\sigma_2^{-1}\sigma_1^{-1})^{-2} \sigma_1^{-x_1}\sigma_2^{y_1}\cdots\sigma_1^{-x_t}\sigma_2^{y_t} = (\sigma_1\sigma_2\sigma_1)^2 \sigma_1^{-x_1}\sigma_2^{y_1}\cdots\sigma_1^{-x_t}\sigma_2^{y_t}. 
\] If $f\thinspace\colon B_3\to B_3$ is the automorphism which sends $\sigma_1$ to $\sigma_2$ and $\sigma_2$ to $\sigma_1$, it is easy to check that for each $\beta\in B_3$ the closure of $\beta$ is isotopic to the closure of $f(\beta)$. Therefore, $K^m$ is also the closure of $(\sigma_2\sigma_1\sigma_2)^2 \sigma_2^{-x_1}\sigma_1^{y_1}\cdots\sigma_2^{-x_t}\sigma_1^{y_t}$, which is conjugate to \[ (\sigma_2\sigma_1\sigma_2)^2 \sigma_1^{y_1}\sigma_2^{-x_2}\cdots\sigma_2^{-x_t}\sigma_1^{y_t}\sigma_2^{-x_1} \] because the element $(\sigma_2\sigma_1\sigma_2)^2$ is central. This shows that up to replacing $K$ with $K^m$ we may assume $d\in\{0,1\}$; therefore, the braid $\beta$ of Equation~\eqref{e:proof-braid} is of the form given in Equation~\eqref{e:prelim}. \end{proof} Our next task is to show that a 2--fold cover of $S^3$ branched along a 3--braid knot of finite concordance order bounds a smooth 4--manifold with an intersection lattice $\Lambda_\Gamma$ associated with a certain weighted graph $\Gamma$. Let $\Lambda_\Gamma$ be the free abelian group generated by the vertices of the integrally weighted graph $\Gamma$ of Figure~\ref{f:graph}, where $d\in\bZ$, $t, x_i, y_i\geq 1$ and $\sum_i y_i\geq 2$. The graph $\Gamma$ naturally determines a symmetric, bilinear form $\cdot:\Lambda_\Gamma\times\Lambda_\Gamma\to\bZ$ such that, if $v, w$ are two vertices of $\Gamma$ then $v\cdot v$ equals the weight of $v$, $v\cdot w$ equals $1$ if $v, w$ are connected by an unlabelled edge, $(-1)^d$ if they are connected by the $(-1)^d$--labelled edge, and $0$ otherwise.
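The lattice $\Lambda_\Gamma$ just defined is concrete enough to be examined numerically on small instances. The following sketch is purely illustrative (the parameter values $t=2$, $x_1=x_2=1$, $y_1=y_2=3$, $d=1$ are our choice, not part of any argument): it assembles the Gram matrix of $\Lambda_\Gamma$ from the weighted graph data and checks negative definiteness via Sylvester's criterion, together with the identities $\sum_v (v\cdot v+2)=-\sum_i x_i$ and $W\cdot W=-\sum_i y_i$ appearing later in the section.

```python
def gram_matrix(xs, ys, d):
    # Vertices of Gamma in circular order: for each i, one vertex of
    # weight -x_i-2 followed by y_i-1 vertices of weight -2; consecutive
    # vertices pair to 1, and the closing edge is labelled (-1)**d.
    weights = []
    for x, y in zip(xs, ys):
        weights.append(-x - 2)
        weights.extend([-2] * (y - 1))
    n = len(weights)
    G = [[0] * n for _ in range(n)]
    for i, w in enumerate(weights):
        G[i][i] = w
    for i in range(n - 1):
        G[i][i + 1] = G[i + 1][i] = 1
    G[0][n - 1] = G[n - 1][0] = (-1) ** d
    return G

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]) if a)

def is_negative_definite(G):
    # Sylvester's criterion: (-1)**k * det(leading k-by-k minor) > 0.
    return all((-1) ** k * det([row[:k] for row in G[:k]]) > 0
               for k in range(1, len(G) + 1))

xs, ys, d = [1, 1], [3, 3], 1      # illustrative values, sum(ys) = sum(xs) + 4
G = gram_matrix(xs, ys, d)
assert is_negative_definite(G)                  # Lambda_Gamma is negative definite
assert sum(G[i][i] + 2 for i in range(len(G))) == -sum(xs)
assert sum(sum(row) for row in G) == -sum(ys)   # W.W here (d=1, sum(ys)=sum(xs)+4)
```

The same helper can of course be pointed at other choices of $t$, $x_i$, $y_i$, $d$; it is a sanity check, not a substitute for the proofs below.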
\begin{figure}[h] \centering $ \xygraph{ !{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::} !{(0,1) }*+{\bullet}="0" !{(0,1.3)}*+{\scriptstyle -x_1-2} !{(1,1)}*+{\bullet}="1" !{(1,1.3)}*+{\scriptstyle -2} !{(2,1)}*+{\ldots}="2" !{(2,1.7)}*+{\overset{y_1 - 1}{\overbrace{\hspace{68pt}}}} !{(3,1)}*+{\bullet}="3" !{(3,1.3)}*+{\scriptstyle -2} !{(4,1)}*+{\bullet}="4" !{(4,1.3)}*+{\scriptstyle -x_2-2} !{(5,1)}*+{\bullet}="5" !{(5,1.3)}*+{\scriptstyle -2} !{(6,1)}*+{\ldots}="6" !{(6,1.7)}*+{\overset{y_2 - 1}{\overbrace{\hspace{68pt}}}} !{(7,1)}*+{\bullet}="7" !{(7,1.3)}*+{\scriptstyle -2} !{(8,1) }*+{\ldots}="8" !{(9,1) }*+{\bullet}="9" !{(9,1.3)}*+{\scriptstyle -x_t-2} !{(10,1) }*+{\bullet}="10" !{(10,1.3)}*+{\scriptstyle -2} !{(11,1) }*+{\ldots}="11" !{(11,1.7)}*+{\overset{y_t - 1}{\overbrace{\hspace{68pt}}}} !{(12,1) }*+{\bullet}="12" !{(12,1.3)}*+{\scriptstyle -2} "0"-"1" "1"-"2" "2"-"3" "3"-"4" "4"-"5" "5"-"6" "6"-"7" "7"-"8" "8"-"9" "9"-"10" "10"-"11" "11"-"12" "12"-@/^0.5cm/"0"^{(-1)^d} }$ \caption{The weighted graph $\Gamma$} \label{f:graph} \end{figure} \begin{lemma}\label{l:2foldcover} Let $K\subset S^3$ be the closure of a 3--braid $\beta\in B_3$ of the form given in Equation~\eqref{e:prelim} for some $t, x_i, y_i\geq 1$ with $\sum_i y_i\geq 2$ and $d\in\{0,1\}$. Let $Y_K$ be the 2--fold cover of $S^3$ branched along $K$. Then, there is a smooth, oriented 4--manifold $M$ whose intersection lattice $H_2(M;\bZ)/\rm Tor$ is isometric to $\Lambda_\Gamma$, with $\partial M= - Y_K$. \end{lemma} \begin{proof} We prove the statement considering separately the two cases $d=0$ and $d=1$. {\em First case: $d=0$}. Let $\beta^m$ denote the `mirror braid' obtained from $\beta$ by replacing $\sigma_i$ with $\sigma_i^{-1}$, for $i=1,2$. Consider the regular projection $P\subset\mathbb R^2$ of the closure $K^m = \widehat\beta^m$ given in Figure~\ref{f:projection}. 
\begin{figure}[h] \centering \labellist \hair 2pt \pinlabel $\beta^m$ at 113 78 \endlabellist \centering \includegraphics[scale=0.6]{projection} \caption{Regular projection of $\widehat\beta^m$} \label{f:projection} \end{figure} Colour the regions of $\mathbb R^2\setminus P$ alternately black and white, so that the unbounded region is white. The black regions determine a spanning surface $S$ for the knot $K^m$. Push the interior of $S$ into the 4--ball and let $M\to B^4$ be the associated 2--fold branched cover. Gordon and Litherland~\cite[Theorem~3]{GL78} give a recipe to compute the intersection form of $M$. Applying their recipe it is easy to check that the intersection lattice $H_2(M;\bZ)/\rm Tor$ is isometric to $\Lambda_\Gamma$. Clearly $\partial M = -Y_K$, and the claim is proved when $d=0$. {\em Second case: $d=1$}. Let $T$ be an oriented, one--holed torus viewed as a 2--dimensional manifold with boundary and $\Diff^+(T,\partial T)$ be the group of orientation--preserving diffeomorphisms of $T$ which restrict to the identity on the boundary. Let $Y_K$ be the 2--fold cover of $S^3$ branched along $K$, where up to isotopy $K$ can be assumed to be positively transverse to the standard open book decomposition of $S^3$ with disk pages. Then, the pull--back of the standard open book on $S^3$ under the covering map $Y_K\to S^3$ has pages homeomorphic to a one--holed torus $T$, and there are right--handed Dehn twists $\gamma_1, \gamma_2\in\Diff^+(T,\partial T)$ along two simple closed curves in $T$ intersecting transversely once, such that the monodromy $h:T\to T$ is \[ h = (\gamma_1\gamma_2)^3 \gamma_1^{x_1}\gamma_2^{-y_1}\cdots\gamma_1^{x_t}\gamma_2^{-y_t}\in\Diff^+(T,\partial T). \] In~\cite[\S 4]{Li14} it is shown, using the open book $(T,h)$, that $Y_K$ bounds a smooth, oriented, positive definite 4--manifold $N$ such that the intersection lattice of $M=-N$ is isometric to $\Lambda_\Gamma$.
\end{proof} \begin{rem}[Pointed out by the referee] It is possible to give another proof for the case $d=1$ of Lemma~\ref{l:2foldcover} using the same method as in the case $d=0$ by writing the knot $K$ as the almost--alternating closure of \[ \sigma_1^{x_1}\sigma_2^{-y_1}\cdots\sigma_1^{x_t}\sigma_2^{-y_t+1}\sigma_1^2\sigma_2\sigma_1^2, \] using the relation $(\sigma_1\sigma_2)^3 = (\sigma_2\sigma_1^2)^2$. This also gives an alternative proof of the following Lemma~\ref{l:negdef}: alternating diagrams have definite Goeritz matrices, almost--alternating diagrams have precisely one definite Goeritz matrix, and one can easily check that the other Goeritz matrix of the almost--alternating diagram of $K$ is indefinite. \end{rem} \begin{lemma}\label{l:negdef} The lattice $\Lambda_\Gamma$ is negative definite for any $t, x_i, y_i\geq 1$ with $\sum_i y_i\geq 2$. \end{lemma} \begin{proof} Choose a `circular' order on the set $V$ of vertices of $\Gamma$ and, for each $v\in V$, let $v'$ (respectively $v''$) be the vertex coming immediately before (respectively after) $v$. Let $\xi = \sum_{v\in V} a_v v \in\Lambda_\Gamma$, with $a_v\in\bZ$. Then, \[ \xi\cdot\xi = \left(\sum_{v\in V} a_v v \right)\cdot\left(\sum_{w\in V} a_w w\right) = \sum_{v\in V} \left(a_v a_{v'} v\cdot v' + a^2_v v\cdot v + a_v a_{v''} v\cdot v''\right). \] Since $\sum_{v\in V} a_v a_{v'} v\cdot v' = \sum_{v\in V} a_v a_{v''} v\cdot v''$ and $v\cdot v\leq -2$ for each $v\in V$, we have \begin{multline*} \xi\cdot\xi = \sum_{v\in V} \left(2 a_v a_{v'} v\cdot v' + a^2_v v\cdot v\right) \leq \sum_{v\in V} \left(2 a_v a_{v'}v\cdot v' -2a^2_v\right) = \\ \sum_{v\in V} \left(2 a_v a_{v'}v\cdot v' - a^2_v - a^2_{v'}\right) = - \sum_{v\in V} (a_v-a_{v'}v\cdot v')^2\leq 0. \end{multline*} Therefore, $\xi\cdot\xi=0$ implies $a_v=a_{v'}v\cdot v'$ for each $v\in V$. In particular, $a_v^2 = a_{v'}^2$ for each $v\in V$.
If we denote by $a^2$ the common value of the $a^2_v$'s, we have \[ 0 = \xi\cdot\xi = \sum_{v\in V} \left(2 a_v a_{v'} v\cdot v' + a^2_v v\cdot v\right) = \sum_{v\in V} \left(2 a^2_{v'} + a^2_v v\cdot v\right) = a^2 \sum_{v\in V} (v\cdot v + 2). \] Since $\sum_{v\in V} (v\cdot v + 2) = -\sum_i x_i < 0$, this implies $a^2=0$, hence $a_v=0$ for each $v\in V$ and $\xi=0$. \end{proof} \begin{prop}\label{p:embed} Let $K\subset S^3$ be a knot of concordance order $k$ which is the closure of a 3--braid of the form given by Equation~\eqref{e:prelim}, with $d\in\{0,1\}$ and $\sum_{i=1}^t (x_i-y_i) = -4d$. Let $\Gamma$ denote the weighted graph of Figure~\ref{f:graph} determined by the given values of $d$, $x_i$ and $y_i$. Then, the orthogonal sum $\Lambda^k_\Gamma$ of $k$ copies of $\Lambda_\Gamma$ embeds isometrically in the standard negative definite lattice $\bZ^N$, where $N=k\sum_i y_i$. Moreover, $\Lambda^k_\Gamma$ has finite and odd index as a sublattice of $\bZ^N$. \end{prop} \begin{proof} Since $K=\hat\beta$ has concordance order $k$, $K\#\stackrel{(k)}{\cdots}\# K$ bounds a properly and smoothly embedded disk $D\subset B^4$. The 2--fold cover $N_D\to B^4$ branched along $D$ is a rational homology ball~\cite[Lemma~2]{CG86} and $\partial N_D = Y_K\#\stackrel{(k)}{\cdots}\# Y_K$. Let $M^k:=\natural^k M$ be the boundary--connect sum of $k$ copies of the 4--manifold $M$ of Lemma~\ref{l:2foldcover}. Then, $M^k$ has intersection lattice isometric to $\Lambda^k_\Gamma:= \Lambda_\Gamma\perp\stackrel{(k)}{\cdots}\perp\Lambda_\Gamma$. In view of Lemma~\ref{l:negdef} the smooth, closed, 4--manifold $X = N_D\cup M^k$ has negative definite intersection form, and by Donaldson's `Theorem A'~\cite[Theorem~1]{Do87} the intersection lattice $H_2(X;\bZ)/\rm Tor$ is isometric to the standard negative lattice $\bZ^N$, where $N=b_2(X)=b_2(M^k)=k\sum_i y_i$. This gives an isometric embedding $\Lambda^k_\Gamma\subset\bZ^N$.
The fact that $\Lambda^k_\Gamma$ has finite and odd index follows, via standard arguments, from the fact that, since $K$ is a knot, the determinant of $K$ is odd. \end{proof} \section{Lattice analysis}\label{s:latan} \subsection{Circular subsets}\label{ss:prelim} The standard negative lattice $(\bZ^N,-I)$ admits a basis $\mathcal E=\{e_1,\ldots, e_N\}$ such that the intersection product between $e_i$ and $e_j$ is $e_i\cdot e_j = -\delta_{ij}$. Such a basis is unique up to permutations and sign reversals of its elements. From now on, we shall call any such basis $\mathcal E\subset\bZ^N$ a {\em canonical basis}, and denote the standard negative lattice simply by $\bZ^N$. Given a subset $\mathcal V\subset\bZ^N$, we shall denote by $|\mathcal V|$ the cardinality of $\mathcal V$, and by $\Lambda_\mathcal V$ the intersection lattice consisting of the subgroup of $\bZ^N$ generated by $\mathcal V$, endowed with the restriction of the intersection form on $\bZ^N$. Elements of the set $\mathcal V$ will be called {\em elements} or {\em vectors} interchangeably. We will say that a vector $u\in\bZ^N$ {\em hits} a vector $v\in\bZ^N$ if $u\cdot v\neq 0$. Every finite subset $\mathcal V\subset\bZ^N$ carries the equivalence relation $R$ generated by the reflexive and symmetric relation given by $u\sim_R v$ if and only if $u\cdot v\neq 0$. The $R$--equivalence classes are called the {\em connected components} of $\mathcal V$, and a subset consisting of a single $R$--equivalence class is called {\em connected}. \begin{defn}\label{d:circular} A (not necessarily connected) subset $\mathcal V\subset\bZ^N$ is a {\em circular subset} if: \begin{itemize} \item $|C|\geq 3$ for each connected component $C\subset\mathcal V$; \item $v\cdot w\in\{-1,0,1\}$ for any $v,w\in\mathcal V$ with $v\neq w$; \item for each $v\in\mathcal V$, the set $\{u\in\mathcal V\ |\ |u\cdot v|=1\}$ has two elements.
\end{itemize} \end{defn} Given a circular subset $\mathcal V\subset\bZ^N$, the vector $W:=\sum_{v\in\mathcal V} v \in \Lambda_\mathcal V$ will be called the {\em Wu element} of $\mathcal V$. For any subset $\mathcal V\subset\bZ^N$, a canonical basis $\mathcal E\subset\bZ^N$ such that $\sum_{v\in\mathcal V} v = -\sum_{e\in\mathcal E} e$ will be called {\em adapted to $\mathcal V$}. \begin{lemma}\label{l:wuelement} Let $\mathcal V\subset\bZ^N$ be a circular subset such that $\Lambda_\mathcal V\subset\bZ^N$ has finite and odd index and whose Wu element $W$ satisfies $W\cdot W = -N$. Then, there is a canonical basis $\mathcal E\subset\bZ^N$ adapted to $\mathcal V$. In particular, for each $e\in\mathcal E$ we have \begin{equation}\label{e:sumofcoeff} \sum_{v\in\mathcal V} v\cdot e = 1. \end{equation} \end{lemma} \begin{proof} Since the index of $\Lambda_\mathcal V$ is finite and odd, given $v\in\bZ^N$ there exists an odd integer $d$ such that $dv\in\Lambda_\mathcal V$. It is easy to check that $W\in\Lambda_\mathcal V$ is characteristic in $\Lambda_\mathcal V$, therefore $W\cdot (dv)$ and $(dv)\cdot (dv)$ are congruent modulo $2$. But, since $d$ is odd, the first number is congruent to $W\cdot v$ and the second one to $v\cdot v$. This shows that $W$ is a characteristic vector when viewed in $\bZ^N$. In particular, given any canonical basis $\mathcal E\subset\bZ^N$ we have $W\cdot e \neq 0$ for each $e\in\mathcal E$. Since $W\cdot W= -N$, we have $W\cdot e = \pm 1$ for each $e$, and up to reversing the signs of some of the $e$'s we have $W=-\sum_{e\in\mathcal E} e$. Equation~\eqref{e:sumofcoeff} follows immediately using the definition of $W$. \end{proof} \begin{lemma}\label{l:coefficients} Let $\mathcal V\subset\bZ^N$ be a circular subset and $\mathcal E\subset\bZ^N$ a canonical basis adapted to $\mathcal V$. 
Then, for every $v\in\mathcal V$ and $e\in\mathcal E$ we have $v\cdot e \in\{-1,0,1,2\}$, and there is a map \[ \mathcal W:=\{v\in\mathcal V\ |\ W\cdot v = v\cdot v+2\}\to\mathcal E \] defined by sending each $v\in\mathcal W$ to the unique $e_v\in\mathcal E$ such that $v\cdot e_v\in\{-1,2\}$. \end{lemma} \begin{proof} We can write each $v\in\mathcal V$ as a linear combination \[ v = -\sum_{e\in\mathcal E} (v\cdot e) e. \] For each $v\in\mathcal V$ we have the equality \[ -v\cdot v + v\cdot W = v\cdot (-v+W) = \sum_{e\in\mathcal E} v\cdot e(v\cdot e-1). \] By the second and third conditions in Definition~\ref{d:circular}, we have \[ -2 \leq -v\cdot v + v\cdot W \leq 2 \] for each $v\in\mathcal V$. The quantity $x(x-1)$ is always nonnegative when $x\in\bZ$, so we conclude \begin{equation}\label{e:coefficients} \sum_{e\in\mathcal E} v\cdot e (v\cdot e - 1) = \begin{cases} 2\quad\text{if}\ v\in\mathcal W\\ 0\quad\text{if}\ v\not\in\mathcal W. \end{cases} \end{equation} By the second and third conditions in Definition~\ref{d:circular}, Equation~\eqref{e:coefficients} implies $v\cdot e\in\{-1,0,1,2\}$ for every $e$. Moreover, if $v\in\mathcal W$ there is exactly one $e_v\in\mathcal E$ such that $v\cdot e_v\in\{-1,2\}$. This gives the statement and concludes the proof. \end{proof} \subsection{Semipositive circular subsets}\label{ss:semiposcs} A circular subset $\mathcal V\subset\bZ^{N}$ will be called {\em semipositive} if each connected component $C\subset\mathcal V$ contains a single pair of vectors $u, v\in C$ such that $u\cdot v=-1$. \begin{lemma}\label{l:semipcs-u} Let $\mathcal V\subset\bZ^{|\mathcal V|}$ be a semipositive circular subset such that $v\cdot v\leq -2$ for each $v\in\mathcal V$ and $\sum_{v\in C}(v\cdot v+2)<-1$ for each connected component $C\subset\mathcal V$. Let $\mathcal E$ be a canonical basis adapted to $\mathcal V$.
Then, for each $e\in\mathcal E$ not in the image of the map of Lemma~\ref{l:coefficients}, there exists $u\in\mathcal V$ such that $u\cdot u=-2$, $u\cdot e=1$ and $u$ is the only vector of $\mathcal V$ which hits $e$. \end{lemma} \begin{proof} Let $e\in\mathcal E$ be an element not in the image of the map of Lemma~\ref{l:coefficients}. Then, we have $v\cdot e\in\{0,1\}$ for each $v\in\mathcal V$. In view of Equation~\eqref{e:sumofcoeff} there exists a unique $u\in\mathcal V$ such that $u\cdot e=1$. The subset $\mathcal V'=\mathcal V\setminus\{u\}\cup\{u+e\}$ is semipositive circular, contained in the span of $\mathcal E\setminus\{e\}$, and satisfies $\sum_{v\in C}(v\cdot v+2)<0$ for each connected component $C\subset\mathcal V'$. If $v\cdot v\leq -2$ for each $v\in\mathcal V'$, then $\Lambda_{\mathcal V'}$ is isometric to $\Lambda_\Gamma$ for some $t, x_i, y_i\geq 1$. By Lemma~\ref{l:negdef} this would imply ${\rm rk} \,(\Lambda_{\mathcal V'}) = |\mathcal V'|$. Since $\mathcal V'$ can be viewed as a subset of $\bZ^{|\mathcal V|-1}$ and $|\mathcal V'| = |\mathcal V|$, we must have $(u+e)\cdot (u+e) = -1$, which implies $u\cdot u=-2$. \end{proof} \begin{lemma}\label{l:(-1)-contr} Let $\mathcal V\subset\bZ^N$ be a semipositive circular subset, and let $\mathcal E\subset\bZ^N$ be a canonical basis adapted to $\mathcal V$. Let $u\in\mathcal V$ with $u\cdot u=-1$ and suppose that the connected component containing $u$ has cardinality at least $4$. Let $u', u''\in\mathcal V$ be the two distinct vectors such that $|u'\cdot u|=|u''\cdot u| = 1$. Then, the subset \[ \mathcal V' = \mathcal V\setminus\{u\}\cup\{u'+(u'\cdot u) u, u'' + (u''\cdot u) u\} \] is a semipositive, circular subset contained in the span of $\mathcal E' = \mathcal E\setminus\{e\}$, where $e\in\mathcal E$ is the only element such that $e\in\{u, -u\}$, and $\mathcal E'$ is adapted to $\mathcal V'$. \end{lemma} \begin{proof} The proof is an easy exercise left to the reader. 
\end{proof} \begin{prop}\label{p:semipos} Let $\mathcal V\subset\bZ^N$ be a semipositive circular subset such that: \begin{itemize} \item $\Lambda_\mathcal V\subset\bZ^N$ has finite and odd index; \item its Wu element $W=\sum_{v\in\mathcal V} v$ satisfies $W\cdot W = - N$; \item $v\cdot v\leq -2$ for each $v\in\mathcal V$; \item $\sum_{v\in D}(v\cdot v+2)=4-|D|<-1$ for each connected component $D\subset\mathcal V$. \end{itemize} Let $C = \{v_1,\ldots, v_{|C|}\}\subset\mathcal V$ be any connected component of $\mathcal V$, and $(-c_1,\ldots, -c_{|C|})$ the corresponding string of self--intersections. Then, there is an iterated blowup $(s_1,\ldots, s_{|C|})$ of $(0,0)$ containing exactly two $1$'s, and such that $c_i\geq s_i$ for each $i=1,\ldots, |C|$. \end{prop} \begin{proof} By Lemma~\ref{l:wuelement} there is a canonical basis $\mathcal E\subset\bZ^N$ adapted to $\mathcal V$, and Lemma~\ref{l:coefficients} applies. We observe that the subset $\mathcal W\subset\mathcal V$ defined in Lemma~\ref{l:coefficients} does not coincide with $\mathcal V$. In fact, it is easy to check that if $k$ is the number of connected components of $\mathcal V$ then $|\mathcal W|=|\mathcal V|-2k$. Since $\Lambda_\mathcal V\subset\bZ^N$ has finite index we have $|\mathcal V|=|\mathcal E|$, therefore there are at least $2k$ distinct elements in the complement of the image of the map $\mathcal W\to\mathcal E$ of Lemma~\ref{l:coefficients}. As a result, there are at least $2k$ distinct vectors $u\in\mathcal V$ as in Lemma~\ref{l:semipcs-u}. We modify all of those vectors by replacing each $u$ with $u+e$, where $e$ is the associated vector of $\mathcal E$. The result is a new subset $\mathcal V'$ with $|\mathcal V'| = |\mathcal V|$, and at least $2k$ vectors of square $-1$. Let $\mathcal E'\subset\mathcal E$ be the subset obtained by erasing from $\mathcal E$ all the vectors $e$ corresponding to the $u$'s that were modified. 
Then, $\mathcal V'$ is contained in the span of $\mathcal E'$ and $\mathcal E'$ is adapted to $\mathcal V'$, i.e.~$\sum_{v\in\mathcal V'} v = -\sum_{e\in\mathcal E'} e$. Lemma~\ref{l:(-1)-contr} can then be applied several times. We apply the lemma as many times as possible, i.e.~until the resulting subset $\mathcal V''$ has no vectors of square $-1$ belonging to a connected component with at least four vectors. We claim that every connected component of $\mathcal V''$ contains exactly three vectors. In fact, a connected component $C''\subset\mathcal V''$ with no $(-1)$--vectors would contain two vectors $v$ such that $W\cdot v=v\cdot v$, and Equations~\eqref{e:sumofcoeff} and~\eqref{e:coefficients} would imply that each such $v$ would be the unique vector of $\mathcal V''$ hitting some $e\in\mathcal E$. But all the $e$'s hitting a single vector of $\mathcal V$ were eliminated at the first step of the construction, and the construction of $\mathcal V''$ from $\mathcal V$ implies that a vector of $\mathcal V''$ can have this property only if it has square $-1$. Therefore $C''$ must consist of three vectors, and the claim holds. Next, we claim that each connected component of $\mathcal V''$ consists of $(-1)$--vectors. To see this it suffices to show that $\sum_{v\in\mathcal V''} (v\cdot v+2) = 3k$, since each of the $k$ components can contribute at most $3$ to this quantity. By our assumptions on $\mathcal V$ we have \[ \sum_{v\in\mathcal V} (v\cdot v+2) = \sum_{C\subset\mathcal V} (4-|C|) = 4k - |\mathcal V|. \] In the first step of the above construction we turn a certain number $m\geq 2k$ of $(-2)$--vectors into $(-1)$--vectors, therefore $\sum_{v\in\mathcal V'} (v\cdot v+2) = 4k-|\mathcal V|+m$. Then we apply Lemma~\ref{l:(-1)-contr} $\sum_{C\subset\mathcal V} (|C|-3) = |\mathcal V|-3k$ times, each time increasing the quantity by $1$. The result is $k+m$. This number is at most $3k$ because each component can contribute at most $3$. On the other hand, $m\geq 2k$ by construction.
This forces $m=2k$, and the last claim is proved. If we now look at how $\mathcal V'$ is obtained from $\mathcal V''$, we see that, in view of Equations~\eqref{e:sumofcoeff} and~\eqref{e:coefficients}, each component of $\mathcal V'$ must contain at least two $(-1)$--vectors. Since $m=2k$, each component of $\mathcal V'$ contains exactly two $(-1)$--vectors. It is now easy to check that this implies the statement. \end{proof} \begin{thm}\label{t:semipos} Let $\Gamma$ be a weighted graph as in Figure~\ref{f:graph} with $d=1$ and $\sum_i y_i = \sum_i x_i + 4\geq 6$. Let $N=k\sum_i y_i$ for some $k\geq 1$ and suppose that there is an isometric embedding $\Lambda^k_\Gamma\subset\bZ^N$ of finite and odd index. Then, the string of positive integers $ (x_1+2,\overbrace{2,\ldots,2}^{y_1-1},\ldots,x_t+2,\overbrace{2,\ldots,2}^{y_t-1}) $ is obtained from an iterated blowup of $(0,0)$ by replacing two $1$'s with $2$'s. \end{thm} \begin{proof} The lattice $\Lambda^k_\Gamma$ has a natural basis, whose elements are in 1--1 correspondence with the vertices of the disjoint union of $k$ copies of $\Gamma$, and intersect as prescribed by edges and weights. Since $d=1$, the image of such a basis is a semipositive circular subset $\mathcal V\subset\bZ^N$. The Wu element $W=\sum_{v\in\mathcal V} v$ satisfies \[ W\cdot W = k\left(\sum_i (-2-x_i) -2\sum_i (y_i-1) + 2\sum_i y_i - 4\right) = -k\sum_i y_i = -N \] and since $\Lambda_\mathcal V=\Lambda^k_\Gamma$, the embedding $\Lambda_\mathcal V\subset\bZ^N$ has finite and odd index. Moreover, each connected component $D\subset\mathcal V$ satisfies \[ \sum_{v\in D} (v\cdot v+2) = -\sum_i x_i = 4 - \sum_i y_i = 4 - |D| \leq -2. 
\] Therefore we can apply Proposition~\ref{p:semipos} to conclude that for any connected component $C\subset\mathcal V$ there is an iterated blowup $(s_1,\ldots, s_{|C|})$ of $(0,0)$ containing exactly two $1$'s such that the elements of the string $ (c_1,\ldots, c_{|C|}) := (x_1+2,\overbrace{2,\ldots,2}^{y_1-1},\ldots,x_t+2,\overbrace{2,\ldots,2}^{y_t-1}) $ satisfy $c_i\geq s_i$ for $i=1,\ldots, |C|$. It only remains to check that $c_i = s_i+1$ if and only if $s_i=1$. This follows immediately from the fact that \[ \sum_i c_i = -\sum_{v\in C} v\cdot v = 3|C| - 4 = \sum_i s_i + 2, \] where the last equality can be easily established by induction on the number of blowups. \end{proof} \subsection{Positive circular subsets}\label{ss:poscs} A circular subset $\mathcal V\subset\bZ^{N}$ is called {\em positive} if $u\cdot v \geq 0$ for any $u,v\in\mathcal V$ with $u\neq v$. Observe that for a positive circular subset $\mathcal V$ the subset $\mathcal W\subset\mathcal V$ defined in Lemma~\ref{l:coefficients} coincides with $\mathcal V$. Moreover, two canonical bases adapted to $\mathcal V$ differ by a permutation of their elements. This easily implies that whether or not the map $\mathcal W\to\mathcal E$ of Lemma~\ref{l:coefficients} is injective does not depend on the choice of $\mathcal E$. We will take these facts for granted throughout this subsection. The analysis of positive circular subsets admitting an adapted canonical basis splits naturally into two subcases, according to whether the map of Lemma~\ref{l:coefficients} is injective or not. We first deal with the simpler case, when the map is not injective. \subsection*{First subcase}\label{ss:notinjsubcase} Let $\mathcal V\subset\bZ^N$ be a positive circular subset admitting an adapted canonical basis. From now on, and until further notice, we will assume that the map of Lemma~\ref{l:coefficients} is not injective.
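The blowup condition appearing in Theorem~\ref{t:semipos} can also be tested mechanically, by searching for a sequence of circular blowdowns terminating at $(0,0)$. The sketch below is purely illustrative: the instance $t=1$, $x_1=1$, $y_1=5$ and the candidate blowup string $(3,1,2,2,1)$ are our choices, not part of any proof in the text.

```python
def blowdowns(s):
    # All circular strings obtained from s by blowing down an entry
    # equal to 1: delete it and decrease both neighbours by 1.
    out = []
    n = len(s)
    for i, v in enumerate(s):
        if v == 1 and n >= 3:
            t = list(s)
            t[(i - 1) % n] -= 1
            t[(i + 1) % n] -= 1
            del t[i]
            out.append(tuple(t))
    return out

def is_iterated_blowup(s):
    # s is an iterated blowup of (0,0) iff some sequence of circular
    # blowdowns reduces it to (0,0) (naive exhaustive search).
    if tuple(s) == (0, 0):
        return True
    return any(is_iterated_blowup(t) for t in blowdowns(s))

# Illustrative instance with t=1, x_1=1, y_1=5: the string
# (x_1+2, 2, 2, 2, 2) = (3,2,2,2,2) should arise from an iterated
# blowup of (0,0) by replacing its two 1's with 2's.
s = (3, 1, 2, 2, 1)
assert is_iterated_blowup(s) and s.count(1) == 2
assert tuple(2 if v == 1 else v for v in s) == (3, 2, 2, 2, 2)
assert sum(s) == 3 * len(s) - 6   # blowup strings of length k have sum 3k-6
```

The exhaustive recursion is adequate only for short strings; it plays no role in the lattice arguments, which proceed structurally.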
\begin{lemma}\label{l:pcsnotinj} Let $\mathcal V\subset\bZ^N$ be a positive circular subset and $\mathcal E\subset\bZ^N$ a canonical basis adapted to $\mathcal V$. Suppose that: \begin{itemize} \item $|\mathcal V|=N$; \item the map of Lemma~\ref{l:coefficients} is not injective; \item $v\cdot v\leq -2$ for each $v\in\mathcal V$; \item $\sum_{v\in D} (v\cdot v + 2) = -|D|$ for each connected component $D\subset\mathcal V$. \end{itemize} Then, for each $e\in\mathcal E$ not in the image of the map of Lemma~\ref{l:coefficients} there exists $u\in\mathcal V$ such that $u=e_u-e$ and $\{v\in\mathcal V\ |\ v\cdot e=1\} = \{u\}$. Moreover, let $C\subset\mathcal V$ be the connected component containing $u$. Then, \begin{itemize} \item if $|C|>3$ there is $v\in C$ such that $v\cdot e_u=1$ and $v$ hits $\geq 3$ distinct vectors of $\mathcal E$; \item if $|C|=3$ we have $C = \{e_1-e_2,e_3-e_1,-2e_3-e_1\}$ for some $e_1,e_2,e_3\in\mathcal E$. \end{itemize} \end{lemma} \begin{proof} By Lemma~\ref{l:negdef} and the first assumption we have ${\rm rk} \,\Lambda_\mathcal V = |\mathcal V| = N$. Therefore, since the map of Lemma~\ref{l:coefficients} is not injective, it cannot be surjective. For each $e\in\mathcal E$ not in the image of the map we have $v\cdot e\in\{0,1\}$ for each $v\in\mathcal V$. By Equation~\eqref{e:sumofcoeff} there is a unique $u\in\mathcal V$ such that $u\cdot e=1$. We claim that the set $\{e'\in\mathcal E\ |\ u\cdot e'\neq 0\}$ contains only $e$ and $e_u$ and moreover $u\cdot e_u=-1$. Otherwise, we would have $u\cdot u<-2$, and by replacing $u$ in $\mathcal V$ with $u + (u\cdot e) e$ we would obtain a new set $\mathcal V'\subset\bZ^N$ of vectors contained in the span of $\mathcal E\setminus\{e\}$, such that $\Lambda_{\mathcal V'}$ would be isometric to $\Lambda_\Gamma$ for some $t, x_i, y_i\geq 1$. Then, by Lemma~\ref{l:negdef} we would have ${\rm rk} \,(\Lambda_{\mathcal V'}) = |\mathcal V'|$. 
This would contradict the fact that $|\mathcal V'|=|\mathcal V|=|\mathcal E|>|\mathcal E\setminus\{e\}|$, therefore the claim is proved, and we have $u=e_u-e$. Let $v,w\in\mathcal V$ be the two vectors satisfying $v\cdot u=w\cdot u=1$. Then, $v\cdot e_u=w\cdot e_u=1$, and we may write \begin{equation}\label{e:notinj} v = a_v e_v - e_u - \cdots, \quad w = a_w e_w - e_u - \cdots\quad \text{for some}\ a_v,a_w\in\{-2,1\}. \end{equation} Let $C\subset\mathcal V$ be the connected component containing $u$. By contradiction, suppose that $|C|>3$ and both $v$ and $w$ hit only two vectors of $\mathcal E$. Then, \[ 0 = v\cdot w = (a_v e_v - e_u)\cdot(a_w e_w - e_u) = a_v a_w (e_v\cdot e_w) - 1, \] which is impossible because $e_v\cdot e_w\in\{-1,0\}$. Therefore, up to renaming $v$ and $w$ we may assume that $v$ hits at least three distinct vectors of $\mathcal E$. If $|C|=3$ we also have $v\cdot w=1$, therefore by Equation~\eqref{e:notinj} \[ 1 = v\cdot w = (a_v e_v - e_u-\cdots)\cdot(a_w e_w - e_u-\cdots) \leq (a_v e_v - e_u)\cdot(a_w e_w - e_u) = a_v a_w (e_v\cdot e_w) - 1, \] which implies $e_v=e_w$ and $a_v a_w = -2$. Up to swapping $v$ and $w$ we may assume $a_v=1$ and $a_w = -2$. This implies $v\cdot v\leq -2$ and $w\cdot w\leq -5$, and since $-3 = \sum_{v\in C} (v\cdot v+2)$ we must have $v\cdot v=-2$ and $w\cdot w=-5$, and the statement follows immediately. \end{proof} \begin{defn}\label{d:-2-exp} A string of negative integers $(-n_1,\ldots, -n_{k+1})$ is obtained from $(-m_1,\ldots, -m_k)$ by a {\em (-2)-expansion} if $m_1=2$ and \[ (-n_1,\ldots, -n_{k+1}) = \begin{cases} (-2,-m_1,\ldots, -m_k-1)\quad\text{or}\\ (-2, -m_2-1,\ldots, -m_k, -m_1). 
\end{cases} \] \end{defn} \begin{prop}\label{p:pcsnotinj} Let $\mathcal V\subset\bZ^N$ be a positive circular subset such that: \begin{itemize} \item $v\cdot v\leq -2$ for each $v\in\mathcal V$; \item $\Lambda_\mathcal V\subset\bZ^N$ has finite and odd index; \item the Wu element $W=\sum_{v\in\mathcal V} v$ satisfies $W\cdot W = -N$; \item the map of Lemma~\ref{l:coefficients} is not injective; \item $\sum_{v\in D} (v\cdot v + 2) = -|D|$ for each connected component $D\subset\mathcal V$. \end{itemize} Then, there is a connected component $C\subset\mathcal V$ whose string of self--intersections is obtained, up to circular symmetry, by a sequence of (-2)-expansions from $(-2,-2,-5)$. \end{prop} \begin{proof} Observe that $\Lambda_\mathcal V$ is isometric to an intersection lattice $\Lambda_\Gamma$ for some $t, x_i, y_i\geq 1$. By Lemma~\ref{l:negdef} and the second assumption we have ${\rm rk} \,\Lambda_\mathcal V = |\mathcal V| = N$. Therefore, since the map of Lemma~\ref{l:coefficients} is not injective, it cannot be surjective. If a connected component of $\mathcal V$ has exactly three elements the statement follows immediately from Lemma~\ref{l:pcsnotinj}. Hence, suppose that each connected component of $\mathcal V$ has at least four elements. By Lemma~\ref{l:wuelement}, the second and third assumptions imply the existence of a canonical basis $\mathcal E$ adapted to $\mathcal V$. This, together with the remaining assumptions, implies that Lemma~\ref{l:pcsnotinj} can be applied to $\mathcal V$. Let $u$ and $v$ be vectors as in Lemma~\ref{l:pcsnotinj} belonging to a connected component $C\subset\mathcal V$. We can change $\mathcal V$ into a new subset $\mathcal V'$ by a process we will call {\em contraction}. The subset $\mathcal V'$ is obtained by replacing $C$ with $C':=\left(C\setminus\{u,v\}\right)\cup\{v+e_u\}$, which is contained in the span of $\mathcal E' := \mathcal E\setminus\{e\}$.
When regarded as a subset of $\bZ^{N-1}$, $\mathcal V'$ is still a positive circular subset, $\mathcal E'$ is adapted to $\mathcal V'$, $|\mathcal V'|=N-1$, $v'\cdot v'\leq -2$ for each $v'\in\mathcal V'$ and $\sum_{v'\in C'} (v'\cdot v' + 2) = -|C'|$. Moreover, $e_u$ does not belong to the image of the map of Lemma~\ref{l:coefficients} and $|C'|\geq 3$. If $|C'|>3$ we can apply Lemma~\ref{l:pcsnotinj} again and contract $\mathcal V'$ to a subset $\mathcal V''$, possibly modifying a connected component different from $C'$. We can keep contracting as long as all connected components of the resulting subset have cardinality greater than $3$. When one of the components reaches cardinality $3$ we can apply the last part of Lemma~\ref{l:pcsnotinj}. The statement is easily obtained combining the information from the lemma with the fact that the component is the result of a sequence of contractions. \end{proof} \subsection*{Second subcase}\label{sss:injsubcase} In this subsection we study the positive circular subsets $\mathcal V\subset\bZ^N$ with an adapted canonical basis such that the map of Lemma~\ref{l:coefficients} is injective. \begin{lemma}\label{l:pcsinj} Let $\mathcal V\subset\bZ^{|\mathcal V|}$ be a positive circular subset such that: \begin{itemize} \item $v\cdot v\leq -2$ for each $v\in\mathcal V$; \item the Wu element $W=\sum_{v\in\mathcal V} v$ satisfies $W\cdot W = -|\mathcal V|$; \item there is a canonical basis $\mathcal E\subset\bZ^{|\mathcal V|}$ adapted to $\mathcal V$ and the map of Lemma~\ref{l:coefficients} is injective. \end{itemize} Then, the following properties hold: \begin{enumerate} \item for each $v\in\mathcal V$ we have $v\cdot e_v=-1$; \item for each $e\in\mathcal E$ there exist distinct elements $u_e, v_e, w_e\in\mathcal V$ such that $u_e\cdot e=-1$ and $v_e\cdot e = w_e\cdot e = 1$. 
Moreover, $x\cdot e=0$ for each $x\in\mathcal V\setminus\{u_e,v_e,w_e\}$; \item for each $u\in\mathcal V$ with $u\cdot u=-2$, there exist $f,g\in\mathcal E$ and $v, w, z\in\mathcal V$ such that: \begin{itemize} \item $u=e_u - f$; \item $v\cdot u=1$, $v\cdot f = 0$ and $v = e_v - e_u - \cdots$ ; \item $w\cdot u = 1$, $w\cdot e_u=0$ and $w = f - \cdots$; \item $z\cdot u = 0$ and $z = e_z - e_u - f - g -\cdots$. \end{itemize} \end{enumerate} \end{lemma} \begin{proof} (1) By the injectivity of the map each $v\in\mathcal V$ is characterized as the unique element $u\in\mathcal V$ such that $u\cdot e_v\in\{-1,2\}$. It follows that for each $u\in\mathcal V\setminus\{v\}$ we have $u\cdot e_v\in\{0,1\}$. Clearly, Equation~\eqref{e:sumofcoeff} can be satisfied only if $v\cdot e_v=-1$. (2) Since $|\mathcal V|=|\mathcal E|$, the map $v\mapsto e_v$ is a bijection. Denote by $e\mapsto u_e$ the inverse map. By (1) $u_e\cdot e = -1$, and by Equation~\eqref{e:sumofcoeff} there exist distinct elements $u_e, v_e, w_e\in\mathcal V$ such that $u_e\cdot e=-1$ and $v_e\cdot e = w_e\cdot e = 1$, while $x\cdot e=0$ for each $x\in\mathcal V\setminus\{u_e,v_e,w_e\}$. (3) The fact that $u=e_u-f$ for some $f\in\mathcal E$ follows immediately from (1), the definition of $e_u$ and the fact that $u\cdot u = -2$. Observe that for any $v\in\mathcal V$ with $v\cdot u=1$, since $v\cdot e\in\{-1,0,1\}$ for each $e\in\mathcal E$, we have $(v\cdot e_u, v\cdot f)\in\{(1,0),(0,-1)\}$. There are exactly two elements $v, w\in\mathcal V$ such that $u\cdot v=u\cdot w=1$, and by the previous observation we have $v\cdot f, w\cdot f \in\{-1,0\}$. On the other hand, $v\cdot f=w\cdot f =-1$ would imply $e_v=f=e_w$, which is impossible because we are assuming that the map $u\mapsto e_u$ is injective. Therefore, either $(v\cdot e_u, v\cdot f) = (1,0)$ or $(w\cdot e_u, w\cdot f) = (1,0)$, and up to renaming $v$ and $w$ we may assume $(v\cdot e_u, v\cdot f) = (1,0)$. 
Therefore $v\cdot f = 0$ and $v = e_v - e_u - \cdots$. This forces $(w\cdot e_u, w\cdot f) = (0,-1)$, because otherwise $w\cdot e_u = 1$, and there would be $w'\in\mathcal V$ distinct from $v$ and $w$ with $e_{w'} = f$. Since $w'\cdot u=0$, this would imply $w'\cdot e_u\neq 0$, contradicting (2) because $e_u$ would already appear in $u$, $v$ and $w$. Therefore $w\cdot e_u=0$ and $w=f-\cdots$. By (2) and the fact that $(v\cdot e_u, w\cdot e_u) = (1,0)$, there exists $z\in\mathcal V\setminus\{v\}$ such that $z\cdot e_u = 1$. Since the vectors adjacent to $u$ are $v$ and $w$, we have $z\cdot u=0$ and therefore $z\cdot f = 1$ as well. Thus, $z = e_z - e_u - f-\cdots$ and $z\cdot z\leq -3$. We claim that $z\cdot z<-3$. In fact, suppose by contradiction that $z\cdot z = -3$, so that $z = e_z - e_u - f$. Since $e_u$ and $f$ already appear three times, $e_z$ must appear in both the adjacent vectors of $z$, say $z'$ and $z''$. On the other hand, we have $1\geq w\cdot z = (f-\cdots)\cdot (e_z-e_u-f) \geq 1$, therefore $w\cdot z=1$, which implies $w\in\{z', z''\}$. But this is not possible because $e_z$ cannot appear in $w$. Therefore, there exists some $g\in\mathcal E$ with $z = e_z-e_u-f-g-\cdots$. \end{proof} \begin{defn}\label{d:-2-contr} Let $\mathcal V\subset\bZ^{|\mathcal V|}$ be a positive circular subset satisfying the hypotheses of Lemma~\ref{l:pcsinj}. If $u\in\mathcal V$ satisfies $u\cdot u=-2$ and the connected component of $u$ contains more than $3$ elements, by Lemma~\ref{l:pcsinj} there exist $f, g\in\mathcal E$ and $v, z\in\mathcal V$ such that $u=e_u-f$, $v=e_v-e_u-\cdots$ and $z = e_z - e_u - f -g-\cdots$. Then, we define $v' := u+v = e_v - f-\cdots$, $z' := z + e_u = e_z-f-g-\cdots$ and \[ \mathcal V' := \mathcal V\setminus\{u,v,z\}\cup \{v',z'\}. \] We say that the set $\mathcal V'$ is obtained from $\mathcal V$ by a {\em (-2)--contraction}.
\end{defn} \begin{rmk}\label{r:-2-contr} Observe that if every connected component $C$ of the set $\mathcal V'$ in Definition~\ref{d:-2-contr} satisfies $|C|\geq 3$, then $\mathcal V'$ is circular when regarded as a subset of the intersection lattice $\bZ^{|\mathcal V'|}$ spanned by $\mathcal E' = \mathcal E\setminus\{e_u\}$, and $v\cdot v\leq -2$ for each $v\in\mathcal V'$. A simple calculation shows that the Wu element $W' = \sum_{v\in\mathcal V'} v$ satisfies $W'\cdot W' = W\cdot W + 1 = -|\mathcal V'|$ and $W'=-\sum_{e\in\mathcal E'} e$. In particular, $\mathcal E'$ is a canonical basis adapted to $\mathcal V'$. Moreover, for each $v\in\mathcal V'$ there is an $e\in\mathcal E'$ such that $v\cdot e=-1$. Therefore the map $\mathcal V'\to\mathcal E'$ defined in Lemma~\ref{l:coefficients} is surjective, hence also injective. This shows that $\mathcal V'$ satisfies all the assumptions of Lemma~\ref{l:pcsinj}. \end{rmk} If, after applying a $(-2)$--contraction to a positive circular subset $\mathcal V$ satisfying the assumptions of Lemma~\ref{l:pcsinj}, we obtain a new positive circular subset $\mathcal V'$ which still contains a (-2)--vector whose connected component has more than $3$ elements, by Remark~\ref{r:-2-contr} we can apply a (-2)--contraction again, obtaining a set $\mathcal V''$, again satisfying the assumptions of Lemma~\ref{l:pcsinj}, and so on. Furthermore, at each step a $(-2)$--vector is eliminated but, in view of Definition~\ref{d:-2-contr}, no new $(-2)$--vector is created. Clearly, after a finite number of $(-2)$--contractions we end up with a circular subset $\mathcal Z\subset\bZ^{|\mathcal Z|}$ satisfying the assumptions of Lemma~\ref{l:pcsinj} and having the property that each $(-2)$--vector of $\mathcal Z$ belongs to a connected component with exactly three elements. In order to understand the set $\mathcal V$ we need more information about the set $\mathcal Z$. 
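At the level of strings of self--intersections, the $(-2)$--expansions of Definition~\ref{d:-2-exp} are easy to enumerate mechanically. The following Python sketch (an illustration added for the reader's convenience; the tuple encoding and the function names are ours, not part of the argument) encodes a string $(-m_1,\ldots,-m_k)$ by the tuple of its positive entries $(m_1,\ldots,m_k)$, returns the two expansions of a string with $m_1=2$, and generates the strings reachable from $(-2,-2,-5)$, as in the conclusion of Proposition~\ref{p:pcsnotinj}.

```python
def expansions(m):
    # The two (-2)-expansions of the string (-m_1, ..., -m_k) from
    # Definition (d:-2-exp); the string is encoded by the tuple of its
    # positive entries (m_1, ..., m_k), and m_1 = 2 is required.
    assert m[0] == 2 and len(m) >= 2
    first = (2,) + m[:-1] + (m[-1] + 1,)      # (-2, -m_1, ..., -m_{k-1}, -m_k - 1)
    second = (2, m[1] + 1) + m[2:] + (m[0],)  # (-2, -m_2 - 1, -m_3, ..., -m_k, -m_1)
    return [first, second]

def strings_from_225(steps):
    # All strings obtainable from (-2, -2, -5) by at most `steps` successive
    # expansions (circular symmetry is not quotiented out here).
    found = {(2, 2, 5)}
    frontier = {(2, 2, 5)}
    for _ in range(steps):
        frontier = {e for m in frontier for e in expansions(m)}
        found |= frontier
    return found
```

One checks that every string $S$ produced this way has all entries $\leq -2$ and satisfies $\sum_i (s_i+2) = -|S|$, matching the first and last assumptions of Proposition~\ref{p:pcsnotinj}.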
In the following, we shall denote by $\mathcal F\subset\bZ^{|\mathcal Z|}$ a canonical basis adapted to $\mathcal Z$. For each $u\in\mathcal Z$, we define \[ \mathcal F_u := \{f\in\mathcal F\ |\ u\cdot f\neq 0\}\subset\mathcal F. \] Following~\cite{Li07-1}, we consider the equivalence relation on $\mathcal Z$ generated by the relation given by $u\sim v$ if and only if $\mathcal F_u\cap\mathcal F_v\neq\emptyset$ and we call {\em irreducible components} of $\mathcal Z$ the resulting equivalence classes. We shall now analyze $\mathcal Z$ considering separately the two cases $t=2$ and $t\geq 3$, where $t$ is the parameter appearing in Equation~\eqref{e:prelim} (when $t=1$ the analysis is not necessary, as will be shown in the proof of Theorem~\ref{t:pcs}). \medskip \noindent\underline{Suppose $t=2$.} By the considerations following Remark~\ref{r:-2-contr}, it is easy to see that $\mathcal Z$ consists of some number $n$ of connected components $C_1,\ldots, C_n$, where each component $C_i$ contains three vectors $u_i$, $v_i$ and $w_i$ satisfying $u_i\cdot v_i = v_i\cdot w_i = w_i\cdot u_i = 1$, $u_i\cdot u_i =-2$, $v_i\cdot v_i = -2-a_i$ and $w_i\cdot w_i = -2 - b_i$, with $a_i, b_i\geq 1$. \begin{lemma}\label{l:sumofais+bis} $\sum_{i=1}^n (a_i+b_i) = 3n$. \end{lemma} \begin{proof} We refer to the notation of Figure~\ref{f:graph}. Each connected component of the original positive circular subset $\mathcal V$ contains $y_1 + y_2 - 2 = x_1 +x_2 - 2$ vectors of square $-2$. Therefore, the number of $(-2)$--contractions applied to each connected component of $\mathcal V$ to obtain $\mathcal Z$ is $x_1 + x_2 - 3$. Each time we apply a $(-2)$--contraction the self--intersection of some vector $w$ with $w\cdot w\leq -4$ increases by $1$. This shows that \[ \sum_{i=1}^n (a_i+b_i) = n (x_1+x_2) - n (x_1+x_2 - 3) = 3n. \] \end{proof} \begin{lemma}\label{l:ais+bis} $a_i + b_i= 3$ for each $i=1,\ldots,n$.
\end{lemma} \begin{proof} In view of Lemma~\ref{l:sumofais+bis}, it suffices to show that $a_i+b_i\geq 3$ for each $i=1,\ldots, n$. We do this by showing that the case $a_i+b_i=2$ cannot occur. Arguing by contradiction, suppose that $a_i + b_i = 2$ for some $i$. Then, $a_i=b_i=1$ and, up to renaming the elements of $\mathcal F$, we must have $u_i = f_1-f_2$, $v_i = f_2-f_3-f_4$ and $w_i = f_3-f_1-f_5$ for some $f_1,\ldots, f_5\in\mathcal F$. Let $v$ be the unique vector of $\mathcal Z\setminus C_i$ such that $v\cdot f_1 = 1$. Since $v\cdot u_i=0$, we must have $v = e_v -f_1-f_2-\cdots$. Since $v\cdot w_i = 0$, we have either (i) $v = f_5 - f_1 - f_2-\cdots$ or (ii) $v = e_v -f_1 - f_2 - f_3-\cdots$. In Case (ii), consider the unique vector $w\in\mathcal Z\setminus C_i$ with $e_w = f_4$. Since $w\cdot v_i=0$, $\mathcal F_w$ should contain either $f_2$ or $f_3$. But $f_2$ already appears in $u_i$, $v_i$ and $v$, and $f_3$ in $v_i$, $w_i$ and $v$. The only possibility is that $w=v$ and $v=f_4-f_1-f_2-f_3-\cdots$, but this is incompatible with $v\cdot v_i=0$, therefore Case (ii) cannot occur. In Case (i), since $v\cdot v_i=0$ we must have $v = f_5 - f_1-f_2-f_4-\cdots$. Let $w\in\mathcal Z\setminus C_i$ be the unique vector with $e_w = f_4$. Then, since $w\cdot v_i=0$ we must have $w=f_4-f_3-\cdots$ and since $w\cdot w_i = 0$ we must have $w = f_4-f_3-f_5-\cdots$. Since $v\cdot w\leq 1$, we must have $v = f_5-f_1-f_2-f_4-f_6-\cdots$ and $w=f_4-f_3-f_5-f_6-\cdots$ for some $f_6\in\mathcal F$. Now consider the new set $\mathcal Z'$ obtained from $\mathcal Z$ by eliminating the vectors $u_i$, $v_i$ and $w_i$ and replacing $v$ with $v' = v+f_1+f_2+f_4$ and $w$ with $w' = w+f_3-f_4$. By construction, the set $\mathcal Z'$ is a positive circular subset of the span of $\mathcal F\setminus\{f_1,f_2,f_3,f_4\}$, and $|\mathcal Z'|=|\mathcal Z|-3$. Moreover, each connected component of $\mathcal Z'$ contains a vector with square $\leq -3$.
This contradicts Lemma~\ref{l:negdef}, showing that Case (i) cannot occur either. \end{proof} \begin{prop}\label{p:Zcaset=2} When $t=2$ the set $\mathcal Z$ has an even number of connected components. Indeed, each irreducible component of $\mathcal Z$ is the union of two connected components $C_1$ and $C_2$, with \[ C_1 = \{f_1-f_2, f_2-f_3-f_4, f_4-f_5-f_6-f_1\}\quad\text{and}\quad C_2 = \{f_3-f_4-f_6, f_6-f_5, f_5-f_1-f_2-f_3\} \] for some $f_1,\ldots, f_6\in\mathcal F$. \end{prop} \begin{proof} Let $S\subset\mathcal Z$ be an irreducible component, and let $C_1\subset S$ be one of its connected components. In view of Lemma~\ref{l:ais+bis}, up to permuting the elements of $\mathcal F$ we may assume that \[ C_1 = \{u_1=f_1-f_2, v_1=f_2-f_3-f_4, w_1=f_4-f_5-f_6-f_1\}. \] Let $w_2\in S\setminus C_1$ be the unique vector with $w_2\cdot f_1=1$. Since $w_2\cdot u_1=w_2\cdot v_1=0$, we have $w_2\cdot w_2=-4$ and either (i) $w_2 = e_{w_2} - f_1- f_2 - f_3$ or (ii) $w_2 = e_{w_2} - f_1-f_2-f_4$. Case (ii) cannot occur, because if it did $f_2$ and $f_4$ would already have been used three times each, therefore the unique vector $v_2\in S\setminus C_1$ such that $e_{v_2} = f_3$ could not satisfy $v_2\cdot v_1=0$. In Case (i), since $w_2\cdot w_1=0$, up to swapping $f_5$ and $f_6$ we have $w_2 = f_5 - f_1 - f_2 -f_3$. Let $v_2\in S\setminus C_1$ be the unique vector with $e_{v_2} = f_3$. Since $v_2\cdot v_1=0$, we have $v_2 = f_3 - f_4 - \cdots$ and since $v_2\cdot w_1=0$ we have either $v_2 = f_3-f_4-f_5-\cdots$ or $v_2 = f_3-f_4-f_6-\cdots$. The first case is not possible because it would imply $v_2\cdot w_2=2$, therefore the second case occurs and $v_2\cdot w_2=1$, which implies that $v_2$ belongs to the same connected component as $w_2$, hence $v_2 = f_3-f_4-f_6$. The third vector $u_2$ of the connected component of $S$ containing $v_2$ and $w_2$ must share a vector with both $v_2$ and $w_2$. 
Since $f_1,\ldots, f_4$ have already been used three times, this forces $u_2 = f_6-f_5$. Therefore, $S=C_1\cup C_2$ where $C_2 = \{u_2, v_2, w_2\}$ is a connected component of the stated form. \end{proof} \medskip \noindent\underline{Suppose $t\geq 3$.} By the considerations following Remark~\ref{r:-2-contr}, it is easy to see that $\mathcal Z$ consists of some number $n$ of connected components $C_1,\ldots, C_n$, where each component $C_i$ consists of $t$ vectors of square $\leq -3$. \begin{lemma}\label{l:zeta} Each $z\in\mathcal Z$ satisfies $z\cdot z=-3$. \end{lemma} \begin{proof} For each $z\in\mathcal Z$ we have $z=-\sum_{f\in\mathcal F} (z\cdot f) f$. By Lemma~\ref{l:pcsinj}(2) we have \[ 3|\mathcal Z| \leq -\sum_{z\in\mathcal Z} z\cdot z = \sum_{z\in\mathcal Z} z\cdot \sum_{f\in\mathcal F} (z\cdot f) f = \sum_{f\in\mathcal F} \sum_{z\in\mathcal Z} (z\cdot f)^2 = 3 |\mathcal F| = 3|\mathcal Z|. \] Therefore $3|\mathcal Z| = -\sum_{z\in\mathcal Z} z\cdot z$. Since $z\cdot z\leq -3$ for each $z\in\mathcal Z$, we must have $z\cdot z = -3$ for every $z\in\mathcal Z$. \end{proof} Observe that, in view of Lemmas~\ref{l:pcsinj} and~\ref{l:zeta}, for each $u\in\mathcal Z$ there are distinct elements $f_1,f_2,f_3\in\mathcal F$ such that $u=f_1-f_2-f_3$. In particular, $\mathcal F_u = \{f_1,f_2,f_3\}$. \begin{lemma}\label{l:intersectingelmsofZ} Let $u,v\in\mathcal Z$ with $u\cdot v=1$. Then, up to swapping $u$ and $v$, one of the following holds: \begin{enumerate} \item $|\mathcal F_u\cap\mathcal F_v|=1$ and there are five distinct elements $f_1,\ldots,f_5\in\mathcal F$ such that $u=f_1-f_2-f_3$ and $v=f_3-f_4-f_5$; \item $|\mathcal F_u\cap\mathcal F_v|=3$ and there are three distinct elements $f_1,f_2,f_3\in\mathcal F$ such that $u=f_1-f_2-f_3$, $v=f_3-f_1-f_2$. \end{enumerate} Moreover, if $u$ and $v$ belong to a connected component $C\subset\mathcal Z$ with cardinality $|C|>3$ then Case (1) holds.
\end{lemma} \begin{proof} Since $u\cdot f, v\cdot f\in\{-1,0,1\}$ for each $f\in\mathcal F$, $u\cdot v=1$ implies that $|\mathcal F_u\cap\mathcal F_v|$ is either $1$ or $3$. In the latter case, up to swapping $u$ and $v$ we necessarily have $u=f_1-f_2-f_3$ and $v=f_3-f_1-f_2$ for some distinct $f_1,f_2,f_3\in\mathcal F$, hence (2) holds. In the first case, up to swapping $u$ and $v$ we must have $\mathcal F_u\cap\mathcal F_v=\{f_3\}$, $u=f_1-f_2-f_3$ and $v=f_3-f_4-f_5$ for some distinct $f_1,\ldots,f_5\in\mathcal F$. Therefore, (1) holds. If $u$ and $v$ belong to a connected component $C$ with $|C|>3$, there exists an element $w\in C$ such that $w\cdot u=1$ and $w\cdot v=0$. Then, since $|\mathcal F_w\cap\mathcal F_u|$ is odd and $|\mathcal F_w\cap\mathcal F_v|$ is even, we must have $\mathcal F_u\neq\mathcal F_v$, and only Case (1) can occur. \end{proof} \begin{lemma}\label{l:ortogonalelmsofZ} Let $u,v\in\mathcal Z$ with $u\cdot v=0$ and $\mathcal F_u\cap\mathcal F_v\neq\emptyset$. Then, up to swapping $u$ and $v$ the following hold: \begin{enumerate} \item there are four distinct elements $f_1,f_2,f_3,f_4\in\mathcal F$ such that $u=f_1-f_2-f_3$ and $v=f_2-f_3-f_4$; \item let $C_u$ and $C_v$ be the connected components of $\mathcal Z$ containing $u$ and $v$, and suppose $C_u\neq C_v$. If $|C_u|=3$, then $|C_v|=3$ and one of the following holds: \begin{itemize} \item[(a)] there exist $f_1,\ldots, f_6\in\mathcal F$ such that \[ C_u=\{f_1-f_2-f_3, f_3-f_4-f_5, f_5-f_6-f_1\},\quad C_v=\{f_2-f_3-f_4, f_4-f_5-f_6, f_6-f_1-f_2\}; \] \item[(b)] there exist $f_1,\ldots, f_8\in\mathcal F$ such that \[ C_u=\{f_1-f_2-f_3, f_5-f_6-f_1, f_6-f_5-f_1\},\quad C_v=\{f_4-f_7-f_8, f_2-f_3-f_4, f_3-f_2-f_4\}. \] \end{itemize} \end{enumerate} \end{lemma} \begin{proof} (1) Clearly $\mathcal F_u\cap\mathcal F_v$ contains two elements, say $f_2$ and $f_3$, one of which, say $f_2$, must satisfy $(f_2\cdot u, f_2\cdot v)\in\{(-1,1), (1,-1)\}$. 
Up to swapping $u$ and $v$ we may assume that $f_2\cdot u = 1$ and $f_2\cdot v=-1$. Then, $u = f_1 - f_2 - f_3$ for some $f_1, f_3\in\mathcal F$, and necessarily $v = f_2 - f_3 - f_4$ for some $f_4\in\mathcal F$, with $f_1, f_2, f_3$ and $f_4$ pairwise distinct. (2) Let $u',u''\in C_u$ be the two elements adjacent to $u$, i.e.~such that $u'\cdot u=u''\cdot u=1$. Analogously, let $v',v''\in C_v$ be the two elements adjacent to $v$. By Lemma~\ref{l:intersectingelmsofZ} we have either $|\mathcal F_{u'}\cap\mathcal F_u|=1$ or $|\mathcal F_{u'}\cap\mathcal F_u|=3$. In the latter case we would have $u'\in\{f_3-f_1-f_2, f_2-f_1-f_3\}$, which would be incompatible with $u'\cdot v=0$, therefore the first case occurs. The only possibilities are $\mathcal F_{u'}\cap\mathcal F_u = \{f_1\}$ and $\mathcal F_{u'}\cap\mathcal F_u=\{f_3\}$, which correspond, respectively, to $u'=f_5-f_6-f_1$ and $u'=f_3-f_4-f_5$, for some $f_5, f_6\in\mathcal F$. If the first possibility is realized, then it is easy to check that either $u'' = f_3-f_4-f_5$ or $u'' = f_6-f_5-f_1$. We are going to analyze the various cases. Suppose first that $u'=f_5-f_6-f_1$ and $u'' = f_3-f_4-f_5$. By Lemma~\ref{l:pcsinj}(2) there must be some vector $w\not = u$ such that $w\cdot f_2 = 1$. Since $w\cdot u=0$ and $f_3$ already appears in $u$, $v$ and $u''$, we must have $w\cdot f_1=1$, and this forces $w=f_6-f_1-f_2$. Therefore $w$ is either $v'$ or $v''$, say $v''$. Then, it is easy to check that $|\mathcal F_v\cap\mathcal F_{v'}|=3$ is impossible, therefore $|\mathcal F_v\cap\mathcal F_{v'}|=1$, which forces $v' = f_4-f_5-f_6$. Thus, $v'$ exhausts both $C_v$ and the irreducible component, and (2)(a) holds. Now suppose that $u'=f_5-f_6-f_1$ and $u'' = f_6-f_5-f_1$. Then, by Lemma~\ref{l:pcsinj}(2) there is some $w\in\mathcal Z$ such that $w\cdot f_2=1$. We must have $w\cdot u=0$, which implies $w\cdot f_1=1$ or $w\cdot f_3=-1$. But $w\cdot f_1=1$ is incompatible with $w\cdot u'=w\cdot u''=0$, therefore we must have $w\cdot f_3=-1$.
Then, $w\cdot v>0$ and therefore $w = f_3-f_2-f_4\in\{v',v''\}$, so we may assume $w=v''$. If we now consider the only vector $w'\in\mathcal Z$ which, by Lemma~\ref{l:pcsinj}(2), satisfies $w'\cdot f_4=-1$, we can easily conclude that $w'=f_4-f_7-f_8=v'$ for some $f_7, f_8\in\mathcal F$, and $v'\cdot v''=1$. Therefore $|C_v|=3$ and $C_u$, $C_v$ are as in (2)(b). It remains to examine the possibility $u' = f_3-f_4-f_5$. In this case, it is easy to check that $|\mathcal F_{u''}\cap\mathcal F_u|=3$ is impossible, and $|\mathcal F_{u''}\cap\mathcal F_u|=1$ forces $u''=f_5-f_6-f_1$ for some $f_6\in\mathcal F$. As before, this implies $\{v',v''\}=\{f_4-f_5-f_6,f_6-f_1-f_2\}$. Therefore $|C_v|=3$ and (2)(a) holds. \end{proof} \begin{prop}\label{p:twoirrcompZ} Suppose that $u,v\in\mathcal Z$ belong to distinct connected components $C_u$, respectively $C_v$, and $\mathcal F_u\cap\mathcal F_v\neq\emptyset$. If $|C_u|, |C_v|>3$ then $|C_u|=|C_v|$ and, up to swapping $u$ and $v$, there are distinct elements $f_1,\ldots, f_{2|C_u|}\in\mathcal F$ such that \[ C_u = \{ f_{2i-1} - f_{2i} - f_{2i+1}\},\quad C_v = \{f_{2i} - f_{2i+1} - f_{2i+2}\} \quad\text{for $i=1,\ldots, |C_u|$,} \] where $f_{2|C_u|+1} = f_1$ and $f_{2|C_u|+2}=f_2$. \end{prop} \begin{proof} According to Lemma~\ref{l:ortogonalelmsofZ}, up to swapping $u$ and $v$ we can write $u=f_1-f_2-f_3$ and $v=f_2-f_3-f_4$ for some distinct $f_1,f_2,f_3,f_4\in\mathcal F$. Let $u',u''\in C_u$ be the two elements adjacent to $u$, i.e.~such that $u'\cdot u=u''\cdot u=1$. Analogously, let $v',v''\in C_v$ be the two elements adjacent to $v$. By Lemma~\ref{l:intersectingelmsofZ} we have \[ |\mathcal F_{u'}\cap\mathcal F_u|=|\mathcal F_{u''}\cap\mathcal F_u|=|\mathcal F_{v'}\cap\mathcal F_v|=|\mathcal F_{v''}\cap\mathcal F_v|=1. \] Moreover, we claim that either $u'\cdot f_1=0$ or $u''\cdot f_1=0$. To prove the claim, suppose by contradiction that $u'\cdot f_1\neq 0$ and $u''\cdot f_1\neq 0$.
Then, $\mathcal F_{u'}\cap\mathcal F_u = \mathcal F_{u''}\cap\mathcal F_u=\{f_1\}$. By Lemma~\ref{l:pcsinj}(1) and the surjectivity of the map of Lemma~\ref{l:coefficients}, there exists $w\in\mathcal Z$ with $w\cdot f_2=-1$. This implies, in particular, that $w\not\in\{u',u''\}$, and therefore $w\cdot u=0$. By Lemma~\ref{l:ortogonalelmsofZ} we have $w=f_2-f_3-f_5$ for some $f_5\in\mathcal F$. But then $w\cdot v<0$, which is impossible. Therefore, the claim is proved and without loss of generality we may assume $u'\cdot f_1=0$. Since $u'\cdot u=1$, $u'\cdot v=0$ and we already have $v\cdot f_2=-1$, we must have $u'=f_3-f_4-f_5$ for some $f_5\in\mathcal F$. If we apply the same argument with the pair $(v,u')$ in place of $(u,v)$ we see that we may assume without loss of generality that $v'=f_4-f_5-f_6$ for some $f_6\in\mathcal F$. Now let $u^1:=u$, $u^2:=u'$, $v^1:=v$ and $v^2:=v'$. The same argument applied to $(u^2,v^2)$ yields an element $u^3=f_5-f_6-f_7$, and so on. We end up with two sequences of the form \[ u^i = f_{2i-1} - f_{2i} - f_{2i+1}\in C_u,\quad v^i = f_{2i} - f_{2i+1} - f_{2i+2}\in C_v,\ i=1,2,\ldots \] such that $u^i\cdot u^{i+1}=v^i\cdot v^{i+1} = 1$ for every $i$. Suppose without loss of generality $|C_u|\leq |C_v|$. It is easy to check that, as long as $i\leq |C_u|-1$, the $f_i$'s are all distinct. On the other hand, $u^{|C_u|}\cdot u^1=1$ implies $u^{|C_u|} = f_{2|C_u|-1}-f_{2|C_u|} - f_1$ and $u^{|C_u|+1}=u^1$, while $v^{|C_u|}\cdot u^1=0$ implies $v^{|C_u|} = f_{2|C_u|} - f_1 - f_2$ and $v^{|C_u|+1}=v^1$. Therefore $|C_u|=|C_v|$ and the statement holds. \end{proof} \begin{prop}\label{p:oneirredcompZ} Let $C\subset\mathcal Z$ be a connected component of $\mathcal Z$ such that, for each $u\in C$ and $v\in\mathcal Z$, $\mathcal F_u\cap\mathcal F_v\not=\emptyset$ implies $v\in C$. Then, $|C|$ is odd, say $|C|=2m+1$, and there exist $f_i\in\mathcal F$, $i\in\bZ/(2m+1)\bZ$, such that \[ C = \{f_{2i-1}-f_{2i}-f_{2i+1}\ |\ i\in\bZ/(2m+1)\bZ\}.
\] \end{prop} \begin{proof} Without loss of generality we may assume $C=\mathcal Z$. We prove the statement by induction on $|\mathcal Z|\geq 3$, treating separately the two cases $|\mathcal Z|$ odd and $|\mathcal Z|$ even. For the basis of the induction in the odd case, we observe that if $|\mathcal Z|=3$ it is easy to check that there exist $f_1,f_2,f_3\in\mathcal F$ such that \[ \mathcal Z = \{f_1-f_2-f_3, f_3-f_1-f_2, f_2-f_3-f_1\}, \] therefore the statement holds. In the even case, the basis of the induction is the case $|\mathcal Z|=4$. Then, $|\mathcal F|=4$ as well, which is impossible by Lemma~\ref{l:intersectingelmsofZ} (Case~$(1)$ applies because $|\mathcal Z|>3$). Therefore, there is no set $\mathcal Z$ satisfying the assumptions of the proposition with $|\mathcal Z|=4$. The inductive step below will show that the same conclusion holds whenever $|\mathcal Z|$ is even. Now suppose that $|\mathcal Z|\geq 5$, and let $u,v\in \mathcal Z$ with $u\cdot v=1$. By Lemma~\ref{l:intersectingelmsofZ}, up to renaming $u$ and $v$ there exist five distinct elements $f_1,\ldots, f_5\in\mathcal F$ such that $u=f_1-f_2-f_3$ and $v=f_3-f_4-f_5$. Let $w$ be the only element of $\mathcal Z$ such that $w\neq u$ and $w\cdot f_3=1$. By Lemma~\ref{l:intersectingelmsofZ} we must have $w\cdot u=0$. We claim that $w\cdot v=0$ as well. Suppose by contradiction that $w\cdot v=1$. Then, let $w'$ and $w''$ be the unique elements of $\mathcal Z$ such that $w'\cdot f_4=-1$ and $w''\cdot f_5=-1$. Since necessarily $w'\cdot v=w''\cdot v=0$ and $f_3$ already hits $u, v$ and $w$, we must have $w'\cdot f_5=w''\cdot f_4=1$. But this would imply $w'\cdot w''>0$ and $|\mathcal F_{w'}\cap\mathcal F_{w''}|>1$, contradicting Lemma~\ref{l:intersectingelmsofZ}. Therefore the claim is proved and $w\cdot v=0$. Up to swapping $f_4$ and $f_5$ we have $w=f_2-f_3-f_4$. Now let $w' = f_2 - f_4$, and consider the new set \[ \mathcal Z' = \mathcal Z\setminus\{u,v,w\} \cup \{u+v,w'\} \subset\bZ^{|\mathcal Z|}.
\] By construction, $\mathcal Z'$ clearly has cardinality $|\mathcal Z|-1$ and is contained in the span of $\mathcal F'=\mathcal F\setminus\{f_3\}$. Thus, it is a positive circular subset in $\bZ^{|\mathcal Z'|}$ admitting a canonical adapted basis, and the map of Lemma~\ref{l:coefficients} is injective. Moreover, the Wu element $W'$ of $\mathcal Z'$ clearly satisfies $W'\cdot W'=-|\mathcal Z'|$. Hence, the assumptions of Lemma~\ref{l:pcsinj} are satisfied for $\mathcal Z'$. We can therefore apply a (-2)--contraction using the vector $w'$. Since $u+v=f_1-f_2-f_4-f_5$, the (-2)--contraction gives a new positive circular subset $\mathcal Z''$ of cardinality $|\mathcal Z''|=|\mathcal Z'|-1=|\mathcal Z|-2\geq 3$ contained in the span of $\mathcal F''=\mathcal F\setminus\{f_2,f_3\}$. By construction, all the elements of $\mathcal Z''$ have square $-3$, and $\mathcal Z''$ has a single connected component. Moreover, it is easy to check that the assumptions of Lemma~\ref{l:pcsinj} are still satisfied by $\mathcal Z''$. If $|\mathcal Z|$ and $|\mathcal Z''|$ were even, we could repeat the same construction -- several times if necessary -- to obtain a set $\widetilde{\mathcal Z}$ with $|\widetilde{\mathcal Z}|=4$, satisfying the assumptions of the proposition, which we have already shown to be impossible. Therefore $|\mathcal Z|$ and $|\mathcal Z''|$ must be odd. By the inductive assumption there exist $f_i\in\mathcal F''$, $i\in\bZ/(2m-1)\bZ$, such that \[ \mathcal Z'' = \{f_{2i-1}-f_{2i}-f_{2i+1}\ |\ i\in\bZ/(2m-1)\bZ\}. \] Observe that $\mathcal Z$ is obtained from $\mathcal Z''$ by replacing, for some $j\in\bZ/(2m-1)\bZ$, the subset \[ \{f_{2j-1}-f_{2j}-f_{2j+1}, f_{2j+1}-f_{2j+2}-f_{2j+3}\} \] with \[ \{f_{2j-1}-f_{2j}-f_{2m+3}, f_{2m+2}-f_{2m+3}-f_{2j+1}, f_{2j+1}-f_{2j+2}-f_{2j+3}\} \] and the subset \[ \{f_{2j}-f_{2j+1}-f_{2j+2}\}\quad\text{with}\quad \{f_{2j}-f_{2m+2}-f_{2m+3}, f_{2m+3}-f_{2j+1}-f_{2j+2}\}.
\] It is now a simple matter to deduce the statement for $\mathcal Z$. \end{proof} \subsection*{Conclusions} We now have enough information about positive circular subsets of $\bZ^N$, so we can draw the conclusions we are interested in. The following theorem will be used in the next section, together with Theorem~\ref{t:semipos}, to prove Theorem~\ref{t:main}. \begin{thm}\label{t:pcs} Let $\Gamma$ be a weighted graph as in Figure~\ref{f:graph} with $d=0$, $t\geq 2$, $x_i, y_i\geq 1$ and $\sum_i y_i = \sum_i x_i\geq 3$. Let $N=k\sum_i y_i$ for some $k\geq 1$, and suppose that there is an isometric embedding $\Lambda^k_\Gamma\subset\bZ^N$ of finite and odd index. Then, the string of negative integers \begin{equation}\label{e:string} S:=(-2-x_1,\overbrace{-2,\ldots,-2}^{y_1-1},\ldots,-2-x_t,\overbrace{-2,\ldots,-2}^{y_t-1}) \end{equation} satisfies one of the following: \begin{enumerate} \item $S$ is obtained, up to circular symmetry, from $(-2,-2,-5)$ by iterated (-2)--expansions (in the sense of Definition~\ref{d:-2-exp}); \item there exist a regular polygon $P$ with $t$ vertices and a symmetry $\varphi\thinspace\colon P\to P$ such that $(X,Y) = (Y,X')^\varphi$, where $(X,Y)$ is the labelling of $P$ encoded by $(x_1,y_1,\ldots, x_t,y_t)$ and $(Y,X')$ is the labelling encoded by $(y_1,x_2, y_2,x_3,\ldots, y_t,x_1)$. \end{enumerate} \end{thm} \begin{proof} We proceed as in Theorem~\ref{t:semipos}. The lattice $\Lambda^k_\Gamma$ has a natural basis, whose elements are in 1--1 correspondence with the vertices of the disjoint union of $k$ copies of $\Gamma$, and intersect as prescribed by edges and weights. Since $d=0$, the image of such a basis is a positive circular subset $\mathcal V\subset\bZ^N$. The Wu element $W=\sum_{v\in\mathcal V} v$ satisfies \[ W\cdot W = k\left(\sum_i (-2-x_i) -2\sum_i (y_i-1) + 2\sum_i y_i\right) = -k\sum_i y_i = -N \] and since $\Lambda_\mathcal V=\Lambda^k_\Gamma$, the embedding $\Lambda_\mathcal V\subset\bZ^N$ has finite and odd index. 
By Lemma~\ref{l:wuelement}, there is a canonical basis $\mathcal E\subset\bZ^N$ adapted to $\mathcal V$. Moreover, each connected component $D\subset\mathcal V$ satisfies \[ \sum_{v\in D} (v\cdot v+2) = -\sum_i x_i = -\sum_i y_i = -|D|. \] If the map of Lemma~\ref{l:coefficients} is not injective then the assumptions of Proposition~\ref{p:pcsnotinj} are satisfied. Applying the proposition, Case~$(1)$ of the statement follows immediately. If the map of Lemma~\ref{l:coefficients} is injective, the considerations following Remark~\ref{r:-2-contr} show that applying a finite number of (-2)--contractions (in the sense of Definition~\ref{d:-2-contr}) to the subset $\mathcal V$ one obtains a circular subset $\mathcal Z\subset\bZ^{|\mathcal Z|}$ satisfying all the assumptions of Lemma~\ref{l:pcsinj} and having the property that each $(-2)$--vector of $\mathcal Z$ belongs to a connected component with exactly three elements. In particular, there is a canonical basis $\mathcal F\subset\bZ^{|\mathcal Z|}$ adapted to $\mathcal Z$. Clearly, every irreducible component of $\mathcal Z$ is a union of connected components. Since the string $S$ given by~\eqref{e:string} is associated to any connected component of $\mathcal V$, we can assume without loss of generality that $\mathcal Z$ is irreducible. When $t=2$ the subset $\mathcal Z$ satisfies the conclusions of Proposition~\ref{p:Zcaset=2}, therefore it is the union of two connected components $C_1 = \{u_1,v_1,w_1\}$ and $C_2 = \{u_2,v_2,w_2\}$, where $u_1 = f_1-f_2$, $v_1 = f_2-f_3-f_4$, $w_1 = f_4-f_5-f_6-f_1$ and $u_2 = f_6-f_5$, $v_2 = f_3-f_4-f_6$ and $w_2 = f_5-f_1-f_2-f_3$ for some $f_1,\ldots, f_6\in\mathcal F$. Now we want to recover the self--intersections of the elements of $\mathcal V$ using the fact that $\mathcal Z$ is obtained from $\mathcal V$ by $(-2)$--contractions. Of course, $\mathcal V$ is obtained from $\mathcal Z$ by $(-2)$--expansions. 
Suppose that, for $i=1,2$, $\mathcal V$ is obtained by applying $a_i$ $(-2)$--expansions between $u_i$ and $v_i$, $b_i$ $(-2)$--expansions between $v_i$ and $w_i$, and $c_i$ $(-2)$--expansions between $w_i$ and $u_i$. Then, it is easy to check that $v_i\in\mathcal Z$ becomes a vector of $\mathcal V$ of self--intersection $-3-b_{3-i}$ and $w_i$ a vector of self--intersection $-4-a_{3-i}-c_{3-i}$, for $i=1,2$. Therefore, up to a symmetry, the weights of the weighted graphs associated to the two connected components of $\mathcal V$ are \[ (-4-a_2-c_2,\overbrace{-2,\ldots, -2}^{a_1+c_1+1}, -3-b_2,\overbrace{-2,\ldots,-2}^{b_1}) \] and \[ (-4-a_1-c_1,\overbrace{-2,\ldots, -2}^{a_2+c_2+1}, -3-b_1,\overbrace{-2,\ldots,-2}^{b_2}). \] Since the two components of $\mathcal V$ had the same self--intersections, in terms of the parameters $x_i, y_i$ this easily translates into the condition that either (i) $x_1=y_1$ and $x_2=y_2$ or (ii) $x_1=y_2$ and $x_2=y_1$. In both cases, there exist a bigon $P$ and a symmetry $\varphi\thinspace\colon P\to P$ such that $(X,Y) = (Y,X')^\varphi$, where $(X,Y)$ is the labelling of $P$ encoded by $(x_1,y_1, x_2, y_2)$ and $(Y,X')$ is the labelling encoded by $(y_1,x_2, y_2,x_1)$. In fact, in the first case we can take the reflection of $P$ which fixes the vertices and switches the edges, in the second case the reflection which switches the vertices and fixes the edges. Therefore Case~(2) holds. When $t\geq 3$ we first assume that $\mathcal Z$ has more than one connected component. Then, by Lemma~\ref{l:ortogonalelmsofZ} and Proposition~\ref{p:twoirrcompZ} all the connected components of $\mathcal Z$ have the same cardinality. As in the case $t=2$, we want to recover the self--intersections of the elements of $\mathcal V$ from the structure of $\mathcal Z$. Let $C\subset\mathcal Z$ be a connected component. If $|C|>3$ then Proposition~\ref{p:twoirrcompZ} applies, and we shall now refer to the notation and terminology of that proposition. 
Starting from $\mathcal Z$, if we do, say, $x_i$ $(-2)$--expansions between $f_{2i-1}-f_{2i}-f_{2i+1}$ and $f_{2i+1}-f_{2i+2}-f_{2i+3}$, the weight of the vertex corresponding to $f_{2i}-f_{2i+1}-f_{2i+2}$ becomes $-3-x_i$. Similarly, doing $y_i$ $(-2)$--expansions between $f_{2i}-f_{2i+1}-f_{2i+2}$ and $f_{2i+2}-f_{2i+3}-f_{2i+4}$ makes the vertex corresponding to $f_{2i-1}-f_{2i}-f_{2i+1}$ acquire weight $-3-y_i$. This shows that the two connected components $C_u$ and $C_v$ of Proposition~\ref{p:twoirrcompZ} come from two connected components $\widetilde C_u, \widetilde C_v\subset\mathcal V$ having self--intersections given, up to a symmetry of the corresponding weighted graphs, by \[ (\overbrace{-2,\ldots,-2}^{x_1},-3-y_1,\overbrace{-2,\ldots,-2}^{x_2}, \ldots,\overbrace{-2,\ldots,-2}^{x_t},-3-y_t)\quad\text{and}\quad (-3-x_1,\overbrace{-2,\ldots,-2}^{y_1},-3-x_2, \ldots,-3-x_t,\overbrace{-2,\ldots,-2}^{y_t}). \] Since the weighted graphs associated to $\widetilde C_u$ and $\widetilde C_v$ are isomorphic, it follows easily that there is a regular $t$--gon $P$ and a symmetry $\varphi\thinspace\colon P\to P$ such that $(X,Y) = (Y,X')^\varphi$, where $(X,Y)$ is the labelling of $P$ encoded by $(x_1,y_1,\ldots, x_t,y_t)$ and $(Y,X')$ is the labelling encoded by $(y_1,x_2, y_2,x_3,\ldots, y_t,x_1)$. Therefore Case~(2) holds. If $|C|=3$ then Lemma~\ref{l:ortogonalelmsofZ}(2) applies. Conclusion (a) in the statement of the lemma is perfectly analogous to the statement of Proposition~\ref{p:twoirrcompZ}. Therefore, if Case~(a) in the lemma holds, Case~(2) of the statement holds.
If the conclusion of Lemma~\ref{l:ortogonalelmsofZ}(2)(b) holds, an analysis similar to the case $|C|>3$ shows that the connected component $C_u$ corresponds to a connected component $\widetilde C_u\subset\mathcal V$ with string of self--intersections given, up to a symmetry of the corresponding weighted graph, by \[ (-3-x_1,\overbrace{-2,\ldots,-2}^{x_3},-3-x_2,\overbrace{-2,\ldots,-2}^{x_1}, -3-x_3,\overbrace{-2,\ldots,-2}^{x_2}). \] In other words, up to symmetry, the string is of the type given by Equation~\eqref{e:string} with $t=3$ and $(y_1,y_2,y_3)=(x_3,x_1,x_2)$. Equivalently, there is an equilateral triangle $P$ and a $2\pi/3$--rotation $\varphi\thinspace\colon P\to P$ such that $(x_1,y_1,x_2,y_2,x_3,y_3) = (y_1,x_2,y_2,x_3,y_3,x_1)^\varphi$. Therefore Case~(2) holds again. Now we assume that $\mathcal Z$ is connected. Then, Proposition~\ref{p:oneirredcompZ} applies. This case is quite similar to the case of Proposition~\ref{p:twoirrcompZ}. For each index $i$, we can only do $x_i$ $(-2)$--expansions between $f_{2i-1}-f_{2i}-f_{2i+1}$ and $f_{2i+1}-f_{2i+2}-f_{2i+3}$. Then, the vertex corresponding to $f_{2i}-f_{2i+1}-f_{2i+2}$ acquires weight $-3-x_i$. This implies that the weights of each component of $\mathcal V$ are given by Equation~\eqref{e:string}, with $t=2m+1$ and $y_i = x_{i+m+1}$ for each $i\in\bZ/(2m+1)\bZ$. Arguing as when $\mathcal Z$ is disconnected, we see again that Case~(2) holds. This concludes the proof. \end{proof} \section{The proof of Theorem~\ref{t:main}}\label{s:proof} In this section we use the results of Sections~\ref{s:prelim} and~\ref{s:latan} to prove Theorem~\ref{t:main}. By Proposition~\ref{p:prelim}, $K$ is the closure of a 3--braid $\beta\in B_3$ of the form \begin{equation*} \beta = (\sigma_1\sigma_2)^{3d} \sigma_1^{x_1}\sigma_2^{-y_1}\cdots\sigma_1^{x_t}\sigma_2^{-y_t},\quad t, x_i, y_i\geq 1, \end{equation*} for some $d\in\{-1,0,1\}$ with $\sum_{i=1}^t (x_i-y_i) = -4d$.
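Note that the exponent sum of a braid of this form is determined by $d$ alone: writing $e\thinspace\colon B_3\to\bZ$ for the homomorphism sending each generator to $1$, we have $e(\beta)=6d+\sum_i(x_i-y_i)=2d$. A quick illustrative check (the word encoding below is ours, chosen for convenience):

```python
def exponent_sum(word):
    """The exponent sum homomorphism e: B_3 -> Z (sum of all exponents)."""
    return sum(exp for _gen, exp in word)

def beta_word(d, xs, ys):
    """Encode beta = (s1 s2)^{3d} s1^{x_1} s2^{-y_1} ... s1^{x_t} s2^{-y_t}
    as (generator, exponent) pairs.  For the exponent sum only the totals
    matter, and the twist (s1 s2)^{3d} contributes 6d in all."""
    word = [(1, 3 * d), (2, 3 * d)]
    for x, y in zip(xs, ys):
        word += [(1, x), (2, -y)]
    return word

# If sum(x_i - y_i) = -4d, then e(beta) = 6d - 4d = 2d.
print(exponent_sum(beta_word(1, [1, 2], [3, 4])))   # 2   (d = 1)
print(exponent_sum(beta_word(-1, [3, 4], [1, 2])))  # -2  (d = -1)
```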
We consider separately the two cases $d=\pm 1$ and $d=0$. {\em First case: $d=\pm 1$}. If $d=-1$ then $K^m$ is the closure of an analogous 3--braid with $d=1$. Therefore, we will assume without loss of generality that $d=1$. By Proposition~\ref{p:embed}, for some $k\geq 1$ the orthogonal sum $\Lambda^k_\Gamma$ of $k$ copies of $\Lambda_\Gamma$ embeds isometrically, with finite and odd index, in the standard negative definite lattice $\bZ^N$, where $N=k\sum_i y_i$. If $\sum_i x_i = 1$ then $t=1$, $x_1=1$ and $y_1=x_1+4=5$. Therefore $\beta = (\sigma_1\sigma_2)^3\sigma_1\sigma_2^{-5}$, which belongs to Family~$(1)$ of Theorem~\ref{t:main}. Indeed, $(x_1+2, \overbrace{2,\ldots,2}^{y_1-1}) = (3,2,2,2,2)$ and it is easy to check that $(3,1,2,2,1)$ is an iterated blowup of $(0,0)$ and $\beta$ is quasi--positive because it is conjugate to $\sigma_2^3\sigma_1\sigma_2^{-3}\sigma_1$, or by~\cite[Theorem~2.3]{Li14}. We have $e(\beta)=2$, and we show below that a knot with vanishing signature which is the closure of a quasi--positive 3--braid having exponent sum equal to $2$ is ribbon. Thus, we may assume $\sum_i y_i = \sum_i x_i + 4 \geq 6$. Theorem~\ref{t:semipos} implies that $K$ belongs to Family~$(1)$. To conclude the proof in this case we need to argue that $\beta$ is quasi--positive and $K$ is ribbon. As observed in Section~\ref{s:intro}, the fact that $\beta$ is quasi--positive follows from~\cite[Theorem~2.3]{Li14}. Next, we observe that since $K$ has finite concordance order its signature vanishes, therefore by Equation~\eqref{e:signature} we have $e(\beta) = 2d = 2$, where $e\thinspace\colon B_3\to\bZ$ is the exponent sum homomorphism. Finally, the constructions described in~\cite[\S~2]{Ru83} show that the closure of a quasi--positive 3--braid with exponent sum equal to $2$ bounds an immersed surface $S\subset S^3$ with only ribbon singularities, constructed as the disjoint union of three embedded disks and two embedded 1--handles which intersect the disks in ribbon singularities.
Since $K=\hat\beta = \partial S$ is a knot, the fact that $\chi(S) = 3 - 2 = 1$ implies that in our case we can find such an $S$ which is an immersed ribbon disk. It is well--known that by suitably pushing the interior of $S$ into the 4--ball we obtain a ribbon disk for $K$. {\em Second case: $d=0$}. Let $n:=\sum_i x_i=\sum_i y_i$. If $n=1$ we have $t=x_1=y_1=1$, therefore $\beta=\sigma_1\sigma_2^{-1}$, $K=\hat\beta$ belongs to Family~(3) and it is the unknot, which is amphichiral. If $n=2$ there are two possible cases: either $t=2$ and $x_1=y_1=x_2=y_2=1$, or $t=1$ and $x_1=y_1=2$. In the first case $\beta=(\sigma_1\sigma_2^{-1})^2$, $K$ belongs to Family~(3) and it is the figure--eight knot, which is well--known to be amphichiral. In the second case $\beta =\sigma_1^2\sigma_2^{-2}$, and $K=\hat\beta$ is a 3--component link, so this case does not occur because $K$ is a knot. Therefore, from now on we assume $n=\sum_i x_i=\sum_i y_i\geq 3$. By Proposition~\ref{p:embed}, for some $k\geq 1$ the orthogonal sum $\Lambda^k_\Gamma$ of $k$ copies of $\Lambda_\Gamma$ embeds isometrically, with finite and odd index, in the standard negative definite lattice $\bZ^N$, where $N=k\sum_i y_i$. The embedding $\Lambda^k_\Gamma\subset\bZ^N$ gives rise to a positive circular subset $\mathcal V\subset\bZ^N$ such that $v\cdot v\leq -2$ for each $v\in\mathcal V$. By Theorem~\ref{t:pcs}, the string $S$ of negative integers of Equation~\eqref{e:string} falls under Case~$(1)$ or Case~$(2)$ of the statement of that theorem. We will now verify that in Case~$(1)$ the knot $K$ belongs to Family~$(2)$ of Theorem~\ref{t:main}. We shall argue by induction on the length of the string $S$. Observe first that $L_a=\widehat\beta_a$, where \[ \beta_a := \sigma_2\sigma_1\sigma_2^{-1} a \sigma_2 \sigma_1^{-1}\sigma_2^{-1} a^{-1} \in B_3. \] The 3--braid $\beta_a$ is conjugate to $\beta'_a = \sigma_2^{-2} a \sigma_2^2 \Delta^{-1} a^{-1} \Delta$, where $\Delta = \sigma_1\sigma_2\sigma_1\in B_3$ is the Garside element.
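The conjugacy can be made explicit: since $\Delta\sigma_2^{-2}=\sigma_1\sigma_2\sigma_1\sigma_2^{-2}=\sigma_2\sigma_1\sigma_2^{-1}$, one checks that $\beta'_a=\Delta^{-1}\beta_a\Delta$. As an independent sanity check (ours), this identity can be verified for a sample $a$ in the reduced Burau representation of $B_3$, which is faithful; the word encoding below is illustrative:

```python
import sympy as sp

t = sp.symbols('t')
S1 = sp.Matrix([[-t, 1], [0, 1]])   # reduced Burau image of sigma_1
S2 = sp.Matrix([[1, 0], [t, -t]])   # reduced Burau image of sigma_2

def burau(word):
    """Burau matrix of a braid word: a list of +/-1, +/-2 for sigma_i^{+/-1}."""
    gens = {1: S1, 2: S2, -1: S1.inv(), -2: S2.inv()}
    M = sp.eye(2)
    for g in word:
        M = M * gens[g]
    return M.applyfunc(sp.cancel)

def inv(word):
    return [-g for g in reversed(word)]

delta = [1, 2, 1]                                  # Garside element
a = [1, 2, 1, -2]                                  # a sample braid a in B_3
beta_a = [2, 1, -2] + a + [2, -1, -2] + inv(a)     # beta_a as in the text
beta_p = [-2, -2] + a + [2, 2] + inv(delta) + inv(a) + delta  # beta'_a

# beta'_a = Delta^{-1} beta_a Delta, so the two Burau matrices coincide.
diff = burau(beta_p) - burau(inv(delta) + beta_a + delta)
print(diff.applyfunc(sp.cancel))  # zero matrix
```

Since the derivation uses only the braid relation, the same check succeeds for any choice of $a$.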
To prove the basis of the induction, note that the knot corresponding to $(-2,-2,-5)$ is the closure of $\sigma_2^{-3}\sigma_1^3\in B_3$, and we may also write this braid as \[ \beta'_{\sigma_2^{-1}\sigma_1^2} = \sigma_2^{-2} (\sigma_2^{-1}\sigma_1^2) \sigma_2^2 \Delta^{-1} (\sigma_2^{-1}\sigma_1^2)^{-1}\Delta. \] This shows that $K$ belongs to Family~$(2)$. To prove the inductive step, we first consider the two $(-2)$--expansions of $(-2,-2,-5)$, which are $(-2,-2,-2,-6)$ and $(-2,-3,-5,-2)$. As one can easily check, the knot corresponding to the first expansion is the closure of the braid obtained from $\beta'_{\sigma_2^{-1}\sigma_1^2}$ by inserting $\sigma_1\sigma_2^{-1}$ immediately before the factor $\sigma_2^{-2}$, while the knot corresponding to the second expansion is the closure of the braid obtained by inserting $\sigma_2^{-1}\sigma_1$ immediately after the factor $\sigma_2^{-2}$. Observe that, in general, since $\Delta\sigma_1 = \sigma_2\Delta$, \[ \sigma_1\sigma_2^{-1} \beta'_a = \sigma_1\sigma_2^{-2} (\sigma_2^{-1} a) \sigma_2^2 \Delta^{-1} a^{-1} \Delta \sim \sigma_2^{-2} (\sigma_2^{-1} a) \sigma_2^2 \Delta^{-1} (\sigma_2^{-1} a)^{-1} \Delta = \beta'_{\sigma_2^{-1} a}, \] where $\sim$ denotes conjugation. Similarly, \[ \sigma_2^{-2} (\sigma_2^{-1}\sigma_1) a \sigma_2^2 \Delta^{-1} a^{-1} \Delta \sim \sigma_2^{-2} (\sigma_1 a) \sigma_2^2 \Delta^{-1} (\sigma_1 a)^{-1}\Delta = \beta'_{\sigma_1 a}. \] This shows that the knots corresponding to the two $(-2)$--expansions of $(-2,-2,-5)$ are of the form $\widehat\beta_{\sigma_2^{-2}\sigma_1^2}$ and $\widehat\beta_{\sigma_1\sigma_2^{-1}\sigma_1^2}$. In general, suppose that the string $(-m_1,\ldots, -m_k)$ was obtained by a sequence of $(-2)$--expansions from $(-2,-2,-5)$, and that the corresponding knot was the closure of a braid of the form $\beta'_a$.
Arguing as before we see that the knots corresponding to the two $(-2)$--expansions of $(-m_1,\ldots, -m_k)$ are closures of braids obtained from $\beta'_a$ by inserting either $\sigma_1\sigma_2^{-1}$ immediately before the factor $\sigma_2^{-2}$ or $\sigma_2^{-1}\sigma_1$ immediately after the factor $\sigma_2^{-2}$. Exactly as before, the results are $\beta'_{\sigma_2^{-1} a}$ and $\beta'_{\sigma_1 a}$. This proves the inductive step and shows that if Case~$(1)$ of Theorem~\ref{t:pcs} holds, then $K$ belongs to Family~$(2)$ of Theorem~\ref{t:main}. The fact that the link $L_a$ is a symmetric union, hence ribbon, is evident by looking at Figure~\ref{f:symmunion}. Before considering the next case, observe that if $t=1$ then $x=y\geq 3$, and $K$ is the closure of $\sigma_1^x\sigma_2^{-x}$. In this case, the string $S$ given by Equation~\eqref{e:string} is of the form given in Case~$(1)$ of the statement of Theorem~\ref{t:pcs}. Therefore, by the above argument $K$ belongs to Family~$(2)$ of Theorem~\ref{t:main}. From now on we will assume that $t\geq 3$. Now suppose that the string $S$ given by Equation~\eqref{e:string} falls under Case~$(2)$ of Theorem~\ref{t:pcs}. This immediately implies that $K$ belongs to Family~(3) of Theorem~\ref{t:main}, and we are only left to show that $K$ is amphichiral. We start by observing that the condition $(X,Y)=(Y,X')^\varphi$ translates, using the notation of Section~\ref{s:intro}, into \begin{equation}\label{e:phicondition} x_i = y_{\varphi_E(i)},\quad y_i = x_{\varphi_V(i+1)},\quad i=1,\ldots, t, \end{equation} where from now on `$i+1$' will mean `$1$' when $i=t$. Moreover, it is easily checked that if $\varphi$ is a rotation then \begin{equation}\label{e:phi=rot} \varphi_V(i+1) = \varphi_V(i)+1\quad \text{and}\quad \varphi_E(i) = \varphi_V(i), \end{equation} while if $\varphi$ is a reflection then \begin{equation}\label{e:phi=refl} \varphi_V(i+1) = \varphi_V(i)-1\quad \text{and}\quad \varphi_E(i) = \varphi_V(i)-1.
\end{equation} By Equation~\eqref{e:phicondition}, the knot $K$ is the closure of \[ \prod_{i=1}^t \sigma_1^{x_i} \sigma_2^{-y_i} = \prod_{i=1}^t \sigma_1^{x_i} \sigma_2^{-x_{\varphi_V(i+1)}}, \] therefore $K^m$ is the closure of \[ \prod_{i=1}^t \sigma_1^{-x_i} \sigma_2^{x_{\varphi_V(i+1)}}, \] and therefore (applying an isotopy) also the closure of \[ \prod_{i=1}^t \sigma_2^{-x_i} \sigma_1^{x_{\varphi_V(i+1)}} \sim \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i+1)}} \sigma_2^{-x_{i+1}} \sim \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i)}} \sigma_2^{-x_i} = \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i)}} \sigma_2^{-y_{\varphi_E(i)}}, \] where `$\sim$' denotes conjugation and to obtain the equality we applied Equation~\eqref{e:phicondition}. If $\varphi$ is a rotation then $(\varphi_V(1),\ldots, \varphi_V(t)) = (1,2,\ldots, t)$, hence by Equation~\eqref{e:phi=rot} we have \[ \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i)}} \sigma_2^{-y_{\varphi_E(i)}} = \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i)}} \sigma_2^{-y_{\varphi_V(i)}} \sim \prod_{i=1}^t \sigma_1^{x_i} \sigma_2^{-y_i}, \] therefore $K^m$ is isotopic to $K$. If $\varphi$ is a reflection then $(\varphi_V(1),\ldots, \varphi_V(t)) = (t,t-1,\ldots,1)$, hence by Equation~\eqref{e:phi=refl} we have \[ \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i)}} \sigma_2^{-y_{\varphi_E(i)}} = \prod_{i=1}^t \sigma_1^{x_{\varphi_V(i)}} \sigma_2^{-y_{\varphi_V(i)-1}} \sim \prod_{i=1}^{t-1} \sigma_1^{x_{t+1-i}} \sigma_2^{-y_{t-i}}\cdot \sigma_1^{x_1}\sigma_2^{-y_t} \sim \prod_{i=0}^{t-1} \sigma_2^{-y_{t-i}} \sigma_1^{x_{t-i}}, \] therefore $K^m$ is isotopic to $-K$. This concludes the proof of Theorem~\ref{t:main}. \bibliographystyle{amsplain}
\section{Introduction} \label{sec:intro} Dark matter (DM) is estimated to account for about 23\% of the total mass of the universe, and to be five times more abundant than the known baryonic matter. While the existence of DM is inferred from astrophysical observations, there is very little information about its nature or how it interacts with ordinary matter. In this paper, we consider a simplified scenario~\cite{Maverick,JHEP10081,PhysRevD.82.116010} in which DM has a particle explanation and, in particular, there is only one new Dirac fermion related to DM within the energy reach of the LHC. The fermion interacts with quarks via a four-fermion contact interaction, which can be described by an effective field theory (EFT) Lagrangian: \begin{equation} L_{\text{int}} = \sum_{q} \sum_{i} C_{q\,i}~ \bigl(\overline{q}\Gamma_{i}^{q}{\mathrm{q}}\bigr)\bigl(\overline{\chi}\Gamma_{i}^{\chi}\chi\bigr), \label{eft} \end{equation} where $C$ represents the coupling constant, which usually depends on the scale of the interaction ($M_{*}$). The operator $\Gamma$ describes the type of the interaction, including scalar ($\Gamma=1$), pseudoscalar ($\Gamma=\gamma^{5}$), vector ($\Gamma=\gamma^{\mu}$), axial vector ($\Gamma=\gamma^{\mu}\gamma^{5}$), and tensor interactions ($\Gamma=\sigma^{\mu\nu}$). The exact value of the constant $C$ depends on the particular type of the interaction. This scenario can lead to the production of DM particles in association with a hard parton, a photon, or a W or Z boson. The first two production modes are usually referred to as monojets~\cite{Maverick,PhysRevD.82.116010,PhysRevD.85.056011,PhysRevD.86.015010,PhysRevD.84.095013} and monophotons~\cite{PhysRevD.85.056011}, respectively. Recent monojet results from the ATLAS~\cite{atlas7tev} and CMS~\cite{CMS:rwa} Collaborations have placed lower limits on $M_{*}$ for some typical couplings in Eq.~(\ref{eft}). 
The ATLAS Collaboration~\cite{PhysRevLett.112.041802} has also searched for DM particles in events with a hadronically decaying W or Z boson. Assuming a DM particle with a mass of 100\GeV, the excluded interaction scales are below about 60\GeV~\cite{PhysRevLett.112.041802}, 1040\GeV~\cite{CMS:rwa}, 1010\GeV~\cite{CMS:rwa}, and 2400\GeV~\cite{PhysRevLett.112.041802} for scalar, vector, axial-vector, and tensor interactions, respectively, and the excluded scale is below 410\GeV~\cite{CMS:rwa} for a scalar interaction between DM particles and gluons. The exclusion limit for a scalar interaction between DM particles and quarks is the least stringent among all the interaction types that have been probed. In this interaction the coupling strength is proportional to the mass of the quark: \begin{equation} L_{\text{int}} = \frac{m_{\PQq}}{M^{3}_{*}} \overline{q}q\,\overline{\chi}\chi. \label{d1} \end{equation} As a consequence, couplings to light quarks are suppressed. A recent paper~\cite{PhysRevD.88.063510} suggested that the sensitivity to the scalar interaction can be improved by searching in final states with third-generation quarks. It has also been noted that the inclusion of heavy quark loops in the calculation of monojet production~\cite{Haisch:2012kf} increases the expected sensitivity. In this paper, we report on a search for the production of DM particles in association with a pair of top quarks, and consider only the scalar interaction. The ATLAS Collaboration has recently searched for DM particles in association with heavy quarks~\cite{ATLAS_heavyQuarks}, placing more stringent limits on the scalar interaction between DM particles and quarks than the mono-W/Z search~\cite{PhysRevLett.112.041802}. Assuming a DM particle with a mass of 100\GeV, the excluded interaction scale is 120\GeV for a scalar interaction between top quarks and DM particles. Figure~\ref{fig:ggtottxx} shows the dominant diagram for this production at the LHC.
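The size of the suppression in Eq.~(\ref{d1}) is easy to quantify: since the coupling is proportional to $m_{\PQq}$, ratios of couplings reduce to quark-mass ratios, independently of $M_{*}$. With approximate quark masses (illustrative values only, not inputs to this analysis):

```python
# Approximate quark masses in GeV (illustrative, PDG-like values).
m_u, m_b, m_t = 0.0022, 4.18, 172.5

# For the scalar operator the coupling is m_q / M_*^3, so coupling ratios
# equal mass ratios, independent of the interaction scale M_*.
print(f"top/up coupling ratio:     {m_t / m_u:.2e}")
print(f"top/bottom coupling ratio: {m_t / m_b:.1f}")
```

The top-quark coupling exceeds the up-quark coupling by nearly five orders of magnitude, which motivates searching in final states with top quarks.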
In this paper we focus our search on events with one lepton (electron or muon) in the final state. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.35\textwidth]{figs/ggchichiv1_largefont_noarrow.pdf} \caption{Dominant diagram contributing to the production of DM particles in association with top quarks at the LHC.} \label{fig:ggtottxx} \end{center} \end{figure} \section{The CMS detector} \label{sec:CMS} The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the field volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. Extensive forward calorimetry complements the coverage provided by the barrel and endcap detectors. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. \section{Data and simulated samples} The data used in this search were recorded with the CMS detector at the LHC at $\sqrt{s}= 8$\TeV, and correspond to an integrated luminosity of $19.7\fbinv$. The data were collected using single-electron and single-muon triggers, with transverse momentum (\pt) thresholds of 27 and 24\GeV, respectively. The efficiencies of these triggers are measured in data and simulation using a tag-and-probe method~\cite{tagAndProbe}, and correction factors are applied to the simulation. DM signals are generated with the \MADGRAPH v5.1.5.11~\cite{madgraph} leading-order (LO) matrix-element generator using the CTEQ6L1 parton distribution functions (PDF)~\cite{Pumplin:2002vw}.
The dominant standard model (SM) background processes for this search are \ttbar{}+jets, $\ttbar+\gamma/\PW/\Z$, \PW+jets, single top quark, diboson (WW, WZ, and ZZ) and Drell--Yan events. All of these backgrounds, except single top quark and WW events, are generated with \MADGRAPH using the CTEQ6L1 PDF. The top-quark \pt distributions in the \ttbar{}+jets sample generated from \MADGRAPH are reweighted to match the CMS measurements, following the method described in Ref.~\cite{topPt}. Single top quark processes are generated with the next-to-LO (NLO) generator \POWHEG v1.0 using the CTEQ6M PDF~\cite{Pumplin:2002vw}. The WW background is generated with \PYTHIA v6.424~\cite{pythia}. All events generated with \MADGRAPH are matched to the \PYTHIA~\cite{pythia} parton shower description. All events are passed through the detailed simulation of the CMS detector based on {\sc GEANT4} v9.4~\cite{Agostinelli:2002hh}. The cross sections of \ttbar{}+jets~\cite{Czakon:2013goa} and $\PW/\Z$+jets~\cite{Quackenbush2013209} backgrounds are calculated at next-to-NLO. Other backgrounds are calculated at NLO. The single top quark cross section is taken from Ref.~\cite{Kidonakis:2012db}, the $\ttbar+\Z$ cross section from Ref.~\cite{ttWttZ_NLO}, the $\ttbar+\PW$ cross section from Ref.~\cite{ttW_NLO}, the $\ttbar+\gamma$ cross section from Ref.~\cite{PhysRevD.83.074013}, and the diboson cross sections from Ref.~\cite{VV_LHC}. Additional minimum bias events in the same LHC bunch crossing (pileup) are added to all simulated events, with a distribution in number matching that observed in data. \section{Object reconstruction} A particle-flow (PF) based event reconstruction~\cite{PFT-09-001,PFT-10-001} is used by CMS, which takes into account information from all subdetectors, including charged-particle tracks from the tracking system and deposited energy from the ECAL and HCAL.
Given this information, all particles in the event are classified into mutually exclusive categories: electrons, muons, photons, charged hadrons, and neutral hadrons. Primary vertices are reconstructed using a deterministic annealing filter algorithm~\cite{Chabanat:865587}, with the event primary vertex defined as the vertex with the largest sum of the squares of the \pt of the tracks associated with that vertex. Electron candidates are reconstructed from energy clusters in the ECAL matched with tracks~\cite{epjcs10052-006-0175-5}. The electron trajectory in the tracker volume is reconstructed with a Gaussian sum filter~\cite{gsf} algorithm that takes into account the possible emission of bremsstrahlung photons in the silicon tracker. The electron momentum is then determined from the combination of ECAL and tracker measurements. Electrons are identified by placing requirements on the ECAL shower shape, the matching between the tracker and the ECAL, the relative energy fraction deposited in HCAL and ECAL, the transverse and longitudinal impact parameters of the tracker track with respect to the event primary vertex, photon conversion rejection, and the isolation variable $R^{\Pe}_{\text{Iso}}$. The isolation variable is defined as the sum of the \pt of all other PF candidates reconstructed in a cone of radius $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^{2} + (\Delta\phi)^{2}}} = 0.3$ around the electron candidate, divided by the electron transverse momentum, where $\eta$ is the pseudorapidity and $\phi$ is the azimuthal angle. The \pt sum in the isolation cone is corrected for the contributions of pileup interactions on an event-by-event basis. Isolated electrons satisfy $R^{\Pe}_{\text{Iso}}<0.1$. The electron is required not to be in the transition region between the barrel and the endcap ECAL ($1.44<\abs{\eta}<1.57$) because the reconstruction of an electron object in this region is not optimal~\cite{epjcs10052-006-0175-5}.
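Schematically, the relative isolation just described can be sketched as follows (hypothetical event content and field names; the event-by-event pileup correction is omitted for brevity):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def rel_isolation(lep, candidates, cone=0.3):
    """Scalar pT sum of the other PF candidates inside the cone, divided by
    the lepton pT (schematic: no pileup correction)."""
    sum_pt = sum(c["pt"] for c in candidates
                 if delta_r(lep["eta"], lep["phi"], c["eta"], c["phi"]) < cone)
    return sum_pt / lep["pt"]

# A hypothetical electron with one candidate inside and one outside the cone.
ele = {"pt": 40.0, "eta": 0.5, "phi": 1.0}
cands = [{"pt": 2.0, "eta": 0.6, "phi": 1.1},   # dR ~ 0.14, inside
         {"pt": 9.0, "eta": 1.2, "phi": 2.5}]   # dR ~ 1.65, outside
print(rel_isolation(ele, cands) < 0.1)  # True: passes the electron cut
```

For muons the same construction applies with a cone of radius 0.4 and a threshold of 0.12.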
After all these requirements, electrons are selected if they satisfy $\pt > 30$\GeV and $\abs{\eta} <2.5$. Muon candidates are reconstructed by combining tracks from the tracker and muon system~\cite{1748-0221-7-10-P10002}, resulting in ``global-muon tracks''. The PF muons are selected among reconstructed muon track candidates by imposing minimal requirements on the track components in the muon system and taking into account matching with small energy deposits in the calorimeters~\cite{PFT-09-001,PFT-10-001}. Muons from cosmic rays, muons from decays in flight of light hadrons or b hadrons, and hadrons misidentified as muons are suppressed by applying requirements on the quality of the global-muon fit, the number of hits in the muon detector and in the tracker, the transverse and longitudinal impact parameters of the tracker track with respect to the event primary vertex, and the isolation variable. The muon isolation variable ($R^{\mu}_{\text{Iso}}$) is defined in a similar manner to that for electrons, but with a cone of radius $\Delta R=0.4$. Isolated muons must satisfy $R^{\mu}_{\text{Iso}}<0.12$. After all these requirements, muons are selected if they satisfy $\pt > 30$\GeV and $\abs{\eta} <2.1$. Both electron and muon identification efficiencies are measured via the tag-and-probe technique using inclusive samples of $\Z\to\ell^{+}\ell^{-}$ events from data and simulation. Correction factors are used to account for the difference in performance of the lepton identification between data and simulation. Jets are reconstructed from PF candidates that are clustered with the anti-\kt algorithm~\cite{Cacciari:2008gp} with a distance parameter of 0.5, using the \FASTJET package~\cite{Cacciari:2011ma}. Jet energy scale corrections obtained from data and simulation are applied to account for the response function of the combined calorimetry to hadronic showers and pileup effects~\cite{Soyez:2012hv,Perloff:2012wpa}.
The jet \pt resolution in simulation is adjusted to match that measured in data~\cite{JECpaper}. Jet candidates are required to have $\pt>30$\GeV and $\abs{\eta} < 4.0$, and to satisfy a very loose set of quality criteria~\cite{JECpaper}. The combined secondary vertex (CSV) b-tagging algorithm~\cite{BTAGpaper} is used to identify jets from the hadronization of b quarks. The CSV algorithm exploits the large impact parameters and the probable presence of a displaced vertex, which are common in b-quark-initiated jets. This information is combined in a likelihood discriminant providing a continuous output between 0 and 1. In this search, a selected jet is considered to be b-tagged if it has a CSV discriminant value greater than 0.679 and $\abs{\eta} <2.4$. The b-tagging efficiency is approximately 70\% (20\%) for jets originating from a b (c) quark and the mistagging probability for jets originating from light quarks or gluons is approximately 2\%. An event-by-event correction factor is applied to simulated events to account for the difference in performance of the b-tagging between data and simulation~\cite{bTagSF}. Missing transverse energy (\MET) is measured as the magnitude of the vectorial \pt sum of all PF candidates, taking into account the jet energy corrections. \section{Event selection} \label{selection} In semileptonic $\ttbar$ decays, two b quarks and two light quarks are produced. Therefore most of the selected signal events contain at least four jets. However, we require three or more rather than four or more identified jets in an event, since this is found to improve the search sensitivity by 10\%. In addition, we require at least one b-tagged jet (``b jet'') in the event, and only one identified isolated lepton. Signal events usually have larger $\MET$ than the backgrounds because of the two DM particles, neither of which leaves any energy in the detector. Events are therefore required to have $\MET > 160$\GeV.
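The selection just described can be summarized in a schematic filter (a simplified sketch over hypothetical event records, not the CMS software):

```python
def preselect(event):
    """Schematic event filter following the text: >= 3 selected jets,
    >= 1 b-tagged jet, exactly one isolated lepton, MET > 160 GeV.
    The event record structure is hypothetical."""
    jets = [j for j in event["jets"] if j["pt"] > 30.0 and abs(j["eta"]) < 4.0]
    bjets = [j for j in jets if j["csv"] > 0.679 and abs(j["eta"]) < 2.4]
    return (len(jets) >= 3 and len(bjets) >= 1
            and len(event["leptons"]) == 1 and event["met"] > 160.0)

# A hypothetical event passing all four requirements.
event = {"met": 200.0,
         "leptons": [{"pt": 35.0}],
         "jets": [{"pt": 80.0, "eta": 0.3, "csv": 0.9},
                  {"pt": 50.0, "eta": -1.1, "csv": 0.2},
                  {"pt": 35.0, "eta": 2.8, "csv": 0.1}]}
print(preselect(event))  # True
```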
These selection criteria are referred to as the ``preselection''. After preselection, the dominant backgrounds are from \ttbar and W+jets production. Other backgrounds include single top quark, Drell--Yan, and diboson production. The QCD multijet contribution to the background is negligible because of the requirements of a high-\pt isolated lepton, large $\MET$, and a b-tagged jet. To improve the search sensitivity, we further select events with $\MET> 320$\GeV. The remaining \PW+jets and most \ttbar backgrounds contain a single leptonically decaying W boson. The transverse mass, defined as $\mt\equiv \sqrt{\smash[b]{2\MET \pt^\ell(1-\cos(\Delta\phi))}}$, where $\pt^\ell$ is the transverse momentum of the lepton and $\Delta\phi$ is the opening angle in azimuth between the lepton and $\ptvecmiss$ vector, is constrained kinematically to $\mt<M_{\PW}$ for on-shell W boson decays in \ttbar and \PW+jets events. For signal events, off-shell W boson decays, and the \ttbar dilepton decay channel, \mt can exceed $M_{\PW}$. Therefore a requirement of $\mt> 160$\GeV is applied to increase the discrimination between signal and background. The dominant background with large \mt arises from dileptonic \ttbar events where one of the leptons is unobserved, illustrated in Fig.~\ref{fig:mt2w}. The $M_{\mathrm{T2}}^{\PW}$ variable~\cite{MT2W} is exploited to further reduce this type of background.
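The \mt definition above is a one-line computation; a minimal numerical sketch (illustrative values only):

```python
import math

def transverse_mass(met, lep_pt, dphi):
    """mT = sqrt(2 * MET * pT(lepton) * (1 - cos(dphi))), in GeV."""
    return math.sqrt(2.0 * met * lep_pt * (1.0 - math.cos(dphi)))

# A back-to-back lepton and MET saturate the formula: mT = 2*sqrt(MET * pT).
print(transverse_mass(50.0, 50.0, math.pi))  # 100.0
```

For an on-shell leptonic W decay, \mt is bounded by $M_{\PW}$, which is what the $\mt>160$\GeV requirement exploits; the $M_{\mathrm{T2}}^{\PW}$ variable extends this transverse-mass idea to the two W bosons of the \ttbar system.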
This variable is defined as the minimal ``parent'' particle mass compatible with all the transverse momenta and mass-shell constraints, assuming two identical parent particles, each of mass $m_\mathrm{y}$, decaying to bW: \begin{equation} M_{\mathrm{T2}}^{\PW} = \min \left( m_{\mathrm{y}} \text{ consistent with: } \left\{ \begin{aligned} &\vec{p}_{1}^{\mathrm{T}} + \vec{p}_{2}^{\mathrm{T}} = \ptvecmiss, p_{1}^{2}=0, (p_{1} + p_{\ell})^{2} = p_{2}^{2} = M^{2}_{\PW}, \\ &(p_{1}+p_{\ell}+p_{\mathrm{b1}})^{2} = (p_{2} + p_{\mathrm{b2}})^{2} = m^{2}_{\mathrm{y}} \end{aligned} \right\} \right), \label{eq:mt2w} \end{equation} where the momentum of the W boson that decays to an unreconstructed lepton is indicated by $p_2$, and the momentum of the neutrino from the decay of the other W boson is indicated by $p_1$. In particular, the intermediate W bosons are assumed to be on-shell, thus adding more kinematic information to suppress dileptonic \ttbar events where one lepton is lost. In \ttbar events, the $M_{\mathrm{T2}}^{\PW}$ distribution has a kinematic end-point at the top-quark mass, assuming perfect measurements with the detector. By contrast, this is not the case for signal events where two additional DM particles are present. The calculation of $M_{\mathrm{T2}}^{\PW}$ requires that at least two b jets be identified and be paired correctly to the lepton. When only one b jet is selected, each of the first three remaining highest \pt jets is considered as the second b jet. When two or more b jets are selected, all the b jets in the event are used. The $M_{\mathrm{T2}}^{\PW}$ value is then calculated for all possible jet-lepton combinations and the minimum value is taken as the event discriminant. We select events with $M_{\mathrm{T2}}^{\PW} > 200$\GeV. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.28\textwidth]{figs/MT2W.png} \caption{Schematic of a dileptonic \ttbar event where only one lepton is reconstructed~\cite{MT2W}. 
This represents the dominant type of \ttbar background to this search. The momentum of the W boson that decays to an unreconstructed lepton is indicated by $p_2$, and the momentum of the neutrino from the decay of the other W boson is indicated by $p_1$. The same notation is used in Eq.~(\ref{eq:mt2w}).} \label{fig:mt2w} \end{center} \end{figure} In addition, the jets and the $\ptvecmiss$ tend to be more separated in $\phi$ in signal events than in \ttbar background. We therefore require the minimum opening angle in $\phi$ between each of the first two leading jets and $\ptvecmiss$ to be larger than 1.2. In summary, the signal region (SR) for our search is $\MET>320$\GeV, $\mt>160$\GeV, $M_{\mathrm{T2}}^{\PW} > 200$\GeV and min $\Delta\phi(j_{1,2},\ptvecmiss) > 1.2$. These selection criteria are optimized based on the expected significance for DM masses between 1 and 1000\GeV. Figure~\ref{fig:cut26} shows the distributions of \MET, $\mt$, $M_{\mathrm{T2}}^{\PW}$, and min $\Delta\phi(j_{1,2},\ptvecmiss)$ after applying all other selections except the one plotted, indicating their power of discrimination between signal and background. In these distributions, the \ttbar{}+jets and \PW+jets backgrounds have been adjusted by the scale factors (SF), as described in section~\ref{sec:background}. \begin{figure}[phbt] \begin{center} \includegraphics[width=0.45\textwidth]{figs/n-1/met.pdf} \includegraphics[width=0.45\textwidth]{figs/n-1/mt.pdf} \includegraphics[width=0.45\textwidth]{figs/n-1/mt2w.pdf} \includegraphics[width=0.45\textwidth]{figs/n-1/mindphij1j2met.pdf} \caption{Distributions of $\MET$, $\mt$, $M_{\mathrm{T2}}^{\PW}$, and min $\Delta\phi(j_{1,2},\ptvecmiss)$ after applying SFs for \ttbar{}+jets and \PW+jets backgrounds, as described in section~\ref{sec:background}. Each distribution is plotted after applying all other selections, which are indicated by the arrows on the relevant distributions. 
Two simulated DM signals with mass $M_{\chi}$ of 1 and 600\GeV and an interaction scale $M_{*}$ of 100\GeV are included for comparison. The hatched region represents the total uncertainty in the background prediction. The last bin of the \MET, \mt and $M_{\mathrm{T2}}^{\PW}$ distributions includes the overflow. The horizontal bar on each data point indicates the width of the bin.} \label{fig:cut26} \end{center} \end{figure} \section{Background estimation} \label{sec:background} Standard model backgrounds are estimated from simulation, with data-to-simulation SFs applied to the dominant backgrounds from \ttbar{}+jets and \PW+jets. Two control regions (CR) are defined to extract these SFs. One is the preselection with the additional requirement of $\mt>160\GeV$ (CR1). The sample in CR1 is dominated by \ttbar{}+jets background. The other (CR2) is defined the same way as CR1 except that no jet is allowed to satisfy the b-tag requirement, resulting in a sample enriched in \PW+jets events. The subdominant backgrounds are subtracted from the distributions observed in data in order to obtain a data sample that has only \ttbar{}+jets and \PW+jets background contributions. The \ttbar{}+jets and \PW+jets SFs are then obtained by simultaneously matching the $\mt$ distribution in CR1 and the $\MET$ distribution in CR2 to the data. The obtained SFs for \ttbar{}+jets and \PW+jets are $1.11\pm0.02$\stat and $1.26\pm0.06$\stat, respectively. These SFs are propagated to the SR to estimate the background. The level of DM signal contamination in the two CRs is estimated to be small and therefore has negligible impact on the background estimation in the SR. Figures~\ref{fig:afterFitCR1} and~\ref{fig:CR2} show the distributions of $\MET$, $\mt$, $M_{\mathrm{T2}}^{\PW}$, and min $\Delta\phi(j_{1,2},\ptvecmiss)$ with the SFs applied in CR1 and CR2, respectively. The data are in good agreement with expectations from SM background.
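In a rate-only caricature of this procedure (all yields below are hypothetical; the actual extraction matches the full $\mt$ and $\MET$ distributions, not just total rates), the two SFs solve a linear system built from the background-subtracted yields in the two CRs:

```python
import numpy as np

# Hypothetical, illustrative yields: after subtracting the subdominant
# backgrounds, the observed yield in each CR is modeled as
#   N_CR = SF_tt * T_CR + SF_W * W_CR,
# which for two control regions is a 2x2 linear system in the two SFs.
T = np.array([900.0, 150.0])    # simulated ttbar+jets yields in (CR1, CR2)
W = np.array([100.0, 800.0])    # simulated W+jets yields in (CR1, CR2)
N = np.array([1125.0, 1174.5])  # background-subtracted data yields

A = np.column_stack([T, W])
sf_tt, sf_w = np.linalg.solve(A, N)
print(round(sf_tt, 2), round(sf_w, 2))  # 1.11 1.26
```

The toy numbers were chosen so that the solution lands at scale factors of the same size as those quoted above; the statistical uncertainties come from the binned fit, not from a rate-only solve like this one.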
\begin{figure}[p!hbt] \begin{center} \includegraphics[width=0.45\textwidth]{figs/met160mt160/met.pdf} \includegraphics[width=0.45\textwidth]{figs/met160mt160/mt.pdf} \includegraphics[width=0.45\textwidth]{figs/met160mt160/mt2w.pdf} \includegraphics[width=0.45\textwidth]{figs/met160mt160/mindphij1j2met.pdf} \caption{Distributions of $\MET$, $\mt$, $M_{\mathrm{T2}}^{\PW}$, and min $\Delta\phi(j_{1,2},\ptvecmiss)$ in CR1 after applying the SFs for \ttbar{}+jets and \PW+jets backgrounds, as described in section~\ref{sec:background}. Two simulated DM signals with mass $M_{\chi}$ of 1 and 600\GeV and an interaction scale $M_{*}$ of 100\GeV are included for comparison. The hatched region represents the total uncertainty in the background prediction. The error bars on the data-to-background ratio take into account both the statistical uncertainty in data and the total uncertainty in the background prediction. The last bin of the \MET, \mt, and $M_{\mathrm{T2}}^{\PW}$ distributions includes the overflow. The horizontal bar on each data point indicates the width of the bin.} \label{fig:afterFitCR1} \end{center} \end{figure} \begin{figure}[phbt] \begin{center} \includegraphics[width=0.45\textwidth]{figs/met160mt160_0btag/met.pdf} \includegraphics[width=0.45\textwidth]{figs/met160mt160_0btag/mt.pdf} \includegraphics[width=0.45\textwidth]{figs/met160mt160_0btag/mt2w.pdf} \includegraphics[width=0.45\textwidth]{figs/met160mt160_0btag/mindphij1j2met.pdf} \caption{Distributions of $\MET$, $\mt$, $M_{\mathrm{T2}}^{\PW}$, and min $\Delta\phi(j_{1,2},\ptvecmiss)$ in CR2 after applying the SFs for \ttbar{}+jets and \PW+jets backgrounds, as described in section~\ref{sec:background}. Two simulated DM signals with mass $M_{\chi}$ of 1 and 600\GeV and an interaction scale $M_{*}$ of 100\GeV are included for comparison. The hatched region represents the total uncertainty in the background prediction. 
The error bars on the data-to-background ratio take into account both the statistical uncertainty in data and the total uncertainty in the background prediction. The last bin of the \MET, \mt, and $M_{\mathrm{T2}}^{\PW}$ distributions includes the overflow. The horizontal bar on each data point indicates the width of the bin.} \label{fig:CR2} \end{center} \end{figure} \section{Systematic uncertainties} \label{sys} The normalization and shape of the distributions used to establish a possible DM signal are subject to both experimental and theoretical uncertainties. The data-to-simulation SFs for \ttbar{}+jets and \PW+jets are extracted from the CRs, as described in the previous section. For the background estimation, the use of SFs largely removes the uncertainties from the integrated luminosity, lepton identification and trigger efficiencies, and from cross sections of the two backgrounds. Other systematic uncertainties can be constrained by refitting the data in the CRs, as described in the following. The \ttbar{}+jets and \PW+jets SFs are obtained from CRs in which other backgrounds are present as well. We conservatively assign a 50\% uncertainty for other backgrounds to account for possible missing higher order terms as well as mismodelling of kinematic properties from the simulation. This uncertainty results in a change of 5\% and 9\% for the \ttbar{}+jets and \PW+jets SFs, respectively. Propagating these changes to the SR, the impact on the total background prediction is found to be 10\%. The stability of the SFs is checked through changes in the definitions of the CRs. These include tightening the $\MET$ requirement or applying selections on $M_{\mathrm{T2}}^{\PW}$ and min $\Delta\phi(j_{1,2},\ptvecmiss)$. An uncertainty of 40\% for the \PW+jets SF is assigned from these CR tests. No significant change is observed in the SF for \ttbar{}+jets. The \pt distribution of top quarks in the \ttbar{}+jets simulation is reweighted to match the data.
The reweighting uncertainty is estimated by changing the nominal reweighting factor to unity or to the square of the reweighting factor, resulting in a change of $\pm$14\% for the \ttbar{}+jets SF and only negligible impact on the \PW+jet SF. Propagating these SFs to the SR, a systematic uncertainty of 10\% is estimated for the \ttbar{}+jets background prediction from the reweighting. The stability of the \ttbar{}+jets background prediction is also checked by varying the \MADGRAPH factorization and renormalization scale parameters, or the scale parameter for the matrix element and parton shower matching, by a factor of two. The resulting predictions are consistent with the nominal \ttbar{}+jets background prediction. The remaining dominant experimental systematic uncertainties are from corrections in jet energy scale and resolution. Correction factors are separately varied by $\pm$1 standard deviation and $\MET$ is recalculated accordingly. These changes in the jet energy scale and resolution correction factors contribute uncertainties of 4\% and 3\% in the estimate of the background, respectively. The uncertainties in the background yield due to b-tagging correction factors are estimated to be 1.0\% and 1.8\% for heavy-flavour and light-flavour jets, respectively. The uncertainty in the pileup model contributes an uncertainty of 2.0\% in the background estimate. The theoretical uncertainty related to the choice of the PDF set is evaluated by reweighting the background samples using three PDF sets: CT10~\cite{CT10}, MSTW2008~\cite{MSTW2008}, and NNPDF2.3~\cite{NNPDF2.3}, following the PDF4LHC recommendation~\cite{PDF4LHC,Botje:2011sn}. For each PDF set, an uncertainty band is derived from the different error PDF sets, including the uncertainties due to the strong coupling constant $\alpha_{\mathrm{S}}$. The envelope of these three error bands is taken as the PDF uncertainty, which leads to a 2.6\% uncertainty in the background estimate.
Table~\ref{tab:bkgRelErr} summarizes the systematic uncertainties and their impact on the background prediction in the SR. The following sources of systematic uncertainty associated with the signal expectation are taken into account. The integrated luminosity is measured with a precision of 2.6\%~\cite{LUM-13-001}. Lepton trigger and identification efficiencies are measured with a precision of 2\% and 1\%, respectively. Uncertainties in the jet energy scale and resolution correction factors yield uncertainties of 2--3\% and less than 1\%, respectively, depending on the mass hypotheses for the DM particle. Uncertainties in the b-tagging correction factors for heavy-flavour and light-flavour jets yield uncertainties of 3--4\% and less than 1\%, respectively. \begin{table}[htb] \centering \topcaption{Systematic uncertainties from various sources and their impact on the total background prediction. } \begin{tabular}{ c | c } \hline \multirow{2}{*}{Source of systematic uncertainty} & Relative uncertainty on \\ & total background (\%) \\ \hline 50\% normalization uncert. of other bkg in deriving SFs & 10 \\ $\mathrm{SF}_{\PW\text{+jets}}$ (CR tests) & 13 \\ \ttbar{}+jets top-quark \pt reweighting & 3.9 \\ Jet energy scale & 4.0 \\ Jet energy resolution & 3.0 \\ b-tagging correction factor (heavy flavour) & 1.0 \\ b-tagging correction factor (light flavour) & 1.8\\ Pileup model & 2.0 \\ PDF & 2.6 \\ \hline \end{tabular} \label{tab:bkgRelErr} \end{table} \section{Results} \label{res} Table~\ref{tab:bkg_finalcut} lists the number of events observed in the SR, along with the background prediction and expected number of signal events for a DM particle with mass of $M_{\chi}=1\GeV$ and an interaction scale $M_{*}= 100\GeV$. We observe no excess of events in the SR and set 90\% confidence level (CL) upper limits on the production cross section of DM particles in association with a pair of top quarks.
The choice of 90\% CL is made in order to allow direct comparisons with related limits from astrophysical observations. A modified-frequentist $\mathrm{CL_{\mathrm{s}}}$ method~\cite{Read1,junkcls} is used to evaluate the upper limits, with both statistical and systematic uncertainties taken into account in the limit setting. \begin{table}[htb] \centering \topcaption{Expected number of background events in the SR, expected number of signal events for a DM particle with the mass $M_{\chi}= 1\GeV$, assuming an interaction scale $M_{*}= 100\GeV$, and observed data. The statistical and systematic uncertainties are given on the expected yields.} \begin{tabular}{c|c} \hline Source & Yield ($\pm$stat $\pm$syst) \\ \hline $\ttbar$ & $8.2\pm0.6\pm1.9$ \\ W & $5.2\pm1.8\pm2.1$ \\ Single top & $2.3\pm1.1\pm1.1$ \\ Diboson & $0.5\pm0.2\pm0.2$ \\ Drell--Yan & $0.3\pm0.3\pm0.1$ \\ \hline Total Bkg & $16.4\pm2.2\pm2.9$ \\ Data & 18 \\ \hline \end{tabular} \label{tab:bkg_finalcut} \end{table} Table~\ref{tab:signal_eff} shows the signal efficiencies and the observed and expected upper limits on the $\Pp\Pp\to \ttbar+\chi\overline{\chi}$ production cross section for seven mass hypotheses of the DM particle. The relatively low values of signal efficiencies of 1--3\% are mostly due to the requirement of $\MET>320$\GeV. Cross sections larger than 20 to 55\unit{fb} are excluded at 90\% CL for DM particles with mass ranging from 1 to 1000\GeV. Interpreting the results in the context of a scalar interaction between DM particles and top quarks, we set lower limits on the interaction scale $M_{*}$, shown in Fig.~\ref{fig:limit}. Assuming a DM particle with a mass of 100\GeV, values of the interaction scale below 118\GeV are excluded at 90\% CL. 
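As an illustration only, the single-bin counting version of the $\mathrm{CL_{s}}$ construction can be sketched with the yields of Table~\ref{tab:bkg_finalcut} (16.4 expected background events, 18 observed). This toy ignores the systematic uncertainties that the actual limit setting of Refs.~\cite{Read1,junkcls} incorporates, so its numerical result is not the published limit.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu), summed term by term."""
    term, total = math.exp(-mu), 0.0
    for k in range(n + 1):
        total += term
        term *= mu / (k + 1)
    return total

def cls(s, b=16.4, n_obs=18):
    """Toy CLs = CL_{s+b} / CL_b for a single counting experiment."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

# Scan for the signal yield excluded at 90% CL, i.e. where CLs drops below 0.10
s = 0.0
while cls(s) > 0.10:
    s += 0.1
print(f"toy 90% CL upper limit on the signal yield: about {s:.1f} events")
```

Dividing by $\mathrm{CL_b}$ protects against excluding a signal to which the experiment has no sensitivity when a downward background fluctuation occurs, which is the motivation for the modified-frequentist construction.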
\begin{table}[htb] \centering \renewcommand{\arraystretch}{1.3} \topcaption{Expected number of signal events in the SR assuming an interaction scale $M_{*}= 100\GeV$, signal efficiencies, and observed and expected limits at 90\% CL on production cross sections for $\Pp\Pp\to \ttbar+\chi\bar{\chi}$, for various DM particle masses.} \begin{tabular}{c|c|c|c|c} \hline $M_{\chi}$ (\GeVns) &Yield ($\pm$stat $\pm$syst) & Signal efficiency (\%) ($\pm$stat $\pm$syst) & $\sigma^{\lim}_{\text{exp}}$\,(fb) & $\sigma^{\lim}_{\text{obs}}$\,(fb) \\ \hline 1 &$38.3\pm0.7\pm2.1$ & $1.01\pm0.02\pm0.05$ & $47^{+21}_{-13}$ & 55 \\ 10 &$37.8\pm0.7\pm2.1$ &$1.01\pm0.02\pm0.05$ & $46^{+21}_{-13}$ & 54 \\ 50 &$35.1\pm0.6\pm1.9$ & $1.20\pm0.02\pm0.06$ & $39^{+18}_{-11}$ & 45 \\ 100 &$30.1\pm0.4\pm1.7$ & $1.46\pm0.02\pm0.07$ & $32^{+14}_{-9}$ & 37 \\ 200 &$18.0\pm0.2\pm1.0$ & $1.73\pm0.02\pm0.08$ & $27^{+12}_{-8}$ & 32 \\ 600 &$1.26\pm0.02\pm0.07$ & $2.40\pm0.03\pm0.11$ & $19^{+9}_{-6}$ & 23 \\ 1000 &$0.062\pm0.001\pm0.003$ & $2.76\pm0.04\pm0.13$ & $17^{+8}_{-5}$ & 20 \\ \hline \end{tabular} \label{tab:signal_eff} \end{table} \begin{figure}[hbt] \begin{center} \includegraphics[width=0.56\textwidth]{figs/cls_1l.pdf} \caption{Observed exclusion limits in the plane of DM particle mass and interaction scale, with the region below the solid curve excluded at a 90\% CL. The background-only expectations are represented by their median (dashed line) and by the 68\% and 95\% CL bands. A lower bound of the validity of the EFT is indicated by the upper edge of the hatched area. The four curves, corresponding to different $g$ and $R$ values, represent the lower bound on $M_{*}$ for which 50\% or 80\% of signal events have a pair of DM particles with an invariant mass less than $g\sqrt{M^3_{*}/m_{\PQt}}$, for $g=4\pi$ and $g=2\pi$.
These curves indicate further restrictions on the applicability of EFT, as explained in the text.} \label{fig:limit} \end{center} \end{figure} As shown in Eq.~(\ref{eft}), DM production is modeled by an EFT, an approximation that has some important limitations. Firstly, the EFT approximation is only valid when the momentum transfer $Q_{\text{tr}}$ is small compared to the mediator mass. Secondly, the couplings should not exceed the perturbative limit. Unfortunately, both of these conditions depend on the details of the unknown new physics being approximated by the EFT. For example, if we consider a model with $s$-channel exchange between the top quarks and the DM particles and a coupling equal to the perturbative limit $g\equiv\sqrt{g_{\chi}g_{\PQt}}=4\pi$, where $g_{\chi}$ and $g_{\PQt}$ are the coupling constants of the mediator to DM particles and top quarks, respectively, then we can derive a lower bound on $M_*$, $\sqrt{M^{3}_{*} / m_{\PQt}}>M_{\chi} / 2\pi$, where $m_{\PQt}$ is the mass of the top quark~\cite{PhysRevD.82.116010,ning}. The region of parameter space in the exclusion plane that does not meet the perturbative condition for the validity of the EFT is indicated by the hatched area in Fig.~\ref{fig:limit}. In addition to this minimal requirement, we also test the validity of the EFT approximation with respect to the momentum transfer condition. For the same $s$-channel mediator scenario, $Q_{\text{tr}}$ is estimated as the invariant mass of two DM particles ($M_{\chi\overline{\chi}}$) as shown in Fig.~\ref{fig:Mchichi}. The EFT approximation is then valid if $M_{\chi\overline{\chi}} < g \sqrt{M^{3}_{*} / m_{\PQt}}$. The fraction of simulated signal events that satisfy this requirement (R) is reported for given values of $g$ and $M_{*}$. 
For $g=4\pi$ and $g=2\pi$, contours are overlaid in Fig.~\ref{fig:limit} that indicate where in the exclusion plane 50\% or 80\% of simulated signal events passing the analysis selection criteria satisfy the momentum transfer condition. If instead of drawing such a contour we fix $M_{*}$ at the 90\% CL lower limit obtained in this analysis, then 89\% (46\%) of simulated signal events passing the analysis selection criteria satisfy the momentum requirement for $g=4\pi (2\pi)$ and $M_\chi=1$\GeV. These fractions drop to 63\% (5\%) for $M_\chi=200$\GeV. No simulated signal events passing the analysis selection criteria are found to satisfy this requirement for $M_\chi > 600 $\GeV. For these reasons, the 90\% CL constraints on $M_{*}$ obtained in this analysis cannot be considered generally applicable, but should only be interpreted in models with large DM coupling. \begin{figure}[hbt] \begin{center} \includegraphics[width=0.56\textwidth]{figs/Mchichi.pdf} \caption{Invariant mass of two DM particles M$_{\chi\bar{\chi}}$ in selected signal events, for several DM mass hypotheses.} \label{fig:Mchichi} \end{center} \end{figure} The limits on the interaction scale $M_{*}$ can be translated to limits on the DM-nucleon scattering cross section~\cite{PhysRevD.82.116010}. Figure~\ref{fig:limitdmxs} shows the observed 90\% CL upper limits on the DM-nucleon cross section as a function of the DM mass for the scalar operator considered in this paper. More stringent limits are obtained relative to current direct DM searches in the mass region of less than $\approx$6\GeV. In this region, DM-nucleon cross sections larger than 1--$2\times10^{-42}\unit{cm}^{2}$ are excluded. 
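The perturbativity condition quoted above, $\sqrt{M^{3}_{*}/m_{\PQt}}>M_{\chi}/2\pi$, can be inverted to give the edge of the hatched EFT-validity region in Fig.~\ref{fig:limit}. A minimal numerical sketch follows; the top-quark mass value of 173\GeV is an assumption made here for illustration, not a number taken from the text.

```python
import math

M_TOP = 173.0  # GeV; approximate top-quark mass assumed for this sketch

def m_star_perturbative_bound(m_chi):
    """Smallest M* (GeV) satisfying sqrt(M*^3 / m_t) > M_chi / (2*pi),
    obtained by solving the inequality for M*."""
    return (M_TOP * (m_chi / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)

for m_chi in (1.0, 100.0, 1000.0):
    print(f"M_chi = {m_chi:6.0f} GeV -> M* must exceed "
          f"{m_star_perturbative_bound(m_chi):.1f} GeV")
```

For $M_{\chi}=100\GeV$ this bound is roughly 35\GeV, well below the observed limit of 118\GeV, so the quoted limit lies on the perturbatively valid side of the hatched boundary.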
\begin{figure}[hbt] \begin{center} \includegraphics[width=0.56\textwidth]{figs/d1_xs.pdf} \caption{The 90\% CL upper limits on the DM-nucleon spin-independent scattering cross section ($\sigma^{\mathrm{SI}}_{\chi\mbox{-}\mathrm{N}}$) as a function of the DM particle mass for the scalar operator considered in this paper. Also shown are 90\% CL limits from various direct DM search experiments~\cite{PhysRevLett.112.041302,LUX,PhysRevLett.109.181301,PhysRevLett.112.241302,CRESSTII}.} \label{fig:limitdmxs} \end{center} \end{figure} \section{Summary}\label{summary} A search has been presented for the production of dark matter particles in association with top quarks in single-lepton events with the CMS detector at the LHC, using proton-proton collision data recorded at $\sqrt{s}= 8$\TeV and corresponding to an integrated luminosity of 19.7\fbinv. No excess of events above the SM expectation is found and cross section upper limits on this process are set. Cross sections larger than 20 to 55\unit{fb} are excluded at 90\% CL for dark matter particles with the masses ranging from 1 to 1000\GeV. Interpreting the findings in the context of a scalar interaction between dark matter particles and top quarks in the framework of an effective field theory, lower limits on the interaction scale are set. Assuming a dark matter particle with a mass of 100\GeV, values of the interaction scale below 118\GeV are excluded at 90\% CL. These limits on the interaction scale are comparable to those obtained from a similar search by the ATLAS Collaboration~\cite{ATLAS_heavyQuarks}. In the case of an $s$-channel mediator, they are only valid for large values of the coupling constant, where the effective field theory approximation holds for most signal events. These limits are interpreted as limits on the dark matter-nucleon scattering cross sections for the spin-independent scalar operator. 
For dark matter particles with masses below 6\GeV, more stringent limits are obtained from this search than from direct dark matter detection searches. Dark matter-nucleon cross sections larger than 1--$2\times 10^{-42}\unit{cm}^{2}$ are excluded at 90\% CL. \begin{acknowledgments} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); MoER, ERC IUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. 
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the Compagnia di San Paolo (Torino); the Consorzio per la Fisica (Trieste); MIUR project 20108T4XTM (Italy); the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; and the National Priorities Research Program by Qatar National Research Fund. \end{acknowledgments}
\section{Introduction} The betavoltaic effect refers to the production of electric power by a p-n junction bombarded by beta-particles that ionize the semiconductor material. Among the advantages of beta-batteries are their long service life, amounting to years or even decades, and the possibility of deployment in hard-to-reach areas. Betavoltaics and photovoltaics are related disciplines. In both cases, electric power results from the separation, by a p-n junction in the presence of a load in the circuit, of electron-hole pairs produced by beta-electrons or photons. In comparison to photovoltaics, publications on the basic principles and applications of betavoltaic elements were initially less numerous (see, e.g., Refs.~\cite{Rap54, Pfa54, Rap56, Fli64, Ols73, Ols74, Olstech}), but the field has started to attract the attention of researchers in recent years \cite{And00, Bow02, Ada12, Ols12}. The main task in betavoltaic design is the choice of a beta-source/semiconductor combination, which should meet certain requirements. In particular, the beta-particles produced by the source must be absorbed efficiently by the semiconductor. Within the semiconductor, the diffusion length of the electron-hole pairs generated by the beta-flux should be large enough to allow them to reach the p-n junction with as little loss as possible. Because only relatively low-energy beta-electrons (with energies between 5 and 70 keV) are utilized effectively for realistic semiconductor thicknesses, three main beta-sources are presently employed in betavoltaic applications: tritium \Tr, nickel $^{63}$Ni, and promethium \Pm. The respective mean energies of the electrons produced by these sources are 5.7, 18, and 62 keV. The efficiency, $\eta$, of a betavoltaic converter is proportional to the collection coefficient, $Q$, of the electron-hole pairs generated by the beta-flux.
In Refs.~\cite{Pfa54, Olstech}, $Q$ was calculated under the assumption that the generation function of electron-hole pairs by a beta-flux is $g(x) \propto \exp(-\alpha x)$. In reality, the generation function is close to zero within the so-called ``dead layer'' under the front surface, and exhibits a maximum at some distance $x_m$ from the surface \cite{Dmi78}. This implies that the exponential approximation is correct only starting from some $x$-value greater than $x_m$. The emergence of the maximum in the $g(x)$ curve is due to the fact that, initially, the primary electrons pass through the semiconductor with only weak scattering. The dead layer thickness $x_m$ increases with the energy of the incident beta-electrons. For GaAs, $x_m$ is in the range 0.1 -- 1 $\mu$m \cite{Dmi78}. Although the works \cite{Pfa54, Olstech} do report analytical expressions for $Q$ (obtained under the assumption of the absence of the dead layer), the values of $Q = 1$ and 0.7 were used in the calculations of beta-conversion efficiency \cite{Olstech, Ols12}. While the value $Q = 1$ corresponds to the limiting conversion efficiency that is maximal in principle, the choice $Q = 0.7$ was not explained in \cite{Olstech, Ols12}. In this work, we derive an expression for $Q$ taking the dead layer into account, and also using realistic values of the nonradiative Shockley-Read-Hall (SRH) recombination lifetime, $\tau_{SR}$, for direct-bandgap semiconductors. In such materials, the values of $\tau_{SR}$ are usually short, in the range of $10^{-9}-10^{-7}$ s. We use the so obtained collection coefficient to derive the expression for the realistically attainable beta-conversion efficiency $\eta$ of various combinations of beta-sources and direct-bandgap semiconductors. When calculating the efficiency, we focus on GaAs as a typical example.
We show that decreasing $\tau_{SR}$ and increasing the dead layer thickness leads to a strong reduction of $Q$ below 1, and to the corresponding reduction of the beta-conversion efficiency. \section{Analysis of the collection coefficient} We assume that the electron-hole pairs are generated only weakly within the dead layer, $x < x_m$, while for $x > x_m$, the generation function has the form $g(x) = I_0\,\exp(-\alpha (x-x_m))$, where $I_0$ is the electron-hole pair generation rate in the $x_m$-plane, and $\alpha^{-1}$ is the characteristic decay length. Furthermore, we assume that $d_p < x_m$ and $S_d \ll D/L$, $d_p$ being the junction depth, $S_d$ the recombination rate on the back surface of the base, and $L$ and $D$ the diffusion length and coefficient of the excess electron-hole pairs generated in the base region. Our structure is sketched in Fig.~\ref{fig1}. \begin{figure}[t!] \includegraphics[scale=0.4]{fig1.eps} \caption{Schematic illustration of a p-n junction of thickness $d = d_p + d_b$, where $d_p$ is the emitter depth and $d_b$ is the base thickness. The dead layer of thickness $x_m$ extends into the base region.} \label{fig1} \end{figure} Apart from the SRH mechanism with the lifetime $\tau_{SR}$, the electron-hole pairs in GaAs also recombine radiatively; the characteristic time of this process is $\tau_r = (AN_d)^{-1}$, where $A$ is the radiative recombination coefficient, and $N_d$ is the base doping concentration. Therefore, the diffusion length can be written as \begin{equation} L = (D\tau_b)^{1/2}\ , \label{1} \end{equation} with $\tau_b = \left(\tau_{SR}^{-1} + \tau_r^{-1}\right)^{-1}$ being the effective lifetime in the neutral base region.
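A minimal numerical sketch of Eq.~(\ref{1}) follows, combining the SRH and radiative channels into $\tau_b$. The parameter values ($A = 2\cdot 10^{-10}$\,cm$^3$/s, $N_d = 10^{17}$\,cm$^{-3}$, $D = 50$\,cm$^2$/s for a p-type GaAs base) are those used later in the text, so the printed diffusion lengths can be checked against the values quoted there (2.2, 6.45, and 12.9\,$\mu$m).

```python
import math

A = 2e-10    # cm^3/s, effective radiative recombination coefficient of GaAs
N_D = 1e17   # cm^-3, base doping concentration
D = 50.0     # cm^2/s, diffusion coefficient for a p-type base

def diffusion_length_um(tau_sr):
    """L = sqrt(D * tau_b), Eq. (1), with tau_b = (1/tau_SR + 1/tau_r)^-1
    and tau_r = 1/(A * N_d); result converted from cm to micrometres."""
    tau_r = 1.0 / (A * N_D)                      # radiative lifetime
    tau_b = 1.0 / (1.0 / tau_sr + 1.0 / tau_r)   # effective lifetime
    return math.sqrt(D * tau_b) * 1e4            # cm -> um

for tau in (1e-9, 1e-8, 1e-7):
    print(f"tau_SR = {tau:.0e} s -> L = {diffusion_length_um(tau):.2f} um")
```

Note that for $\tau_{SR} = 10^{-7}$\,s the radiative channel ($\tau_r = 5\cdot 10^{-8}$\,s at this doping) already dominates $\tau_b$, so further increasing $\tau_{SR}$ yields diminishing gains in $L$.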
The continuity equation for the excess concentration of the electron-hole pairs, $\Delta p_1$, within the dead layer (i.e., for $x < x_m$, region 1), where generation is negligible, has the form \begin{equation} \frac{d^2\Delta p_1}{dx^2} - \frac{\Delta p_1}{L^2} = 0\ . \label{2} \end{equation} In the rest of the semiconductor ($x > x_m$, region 2), the continuity equation for the excess electron-hole pair density, $\Delta p_2$, is \begin{equation} \frac{d^2\Delta p_2}{dx^2} - \frac{\Delta p_2}{L^2} = -\frac{\alpha I_0\,e^{-\alpha (x-x_m)}}{D}\ . \label{3} \end{equation} Equations (\ref{2}) and (\ref{3}) are supplemented by the boundary conditions \begin{eqnarray} &&\Delta p_1(x = d_p) = 0\ ,\ \ \frac{d\Delta p_2}{dx}(x = d) = 0\ , \nonumber \\ &&\Delta p_1(x = x_m) = \Delta p_2(x = x_m)\ ,\nonumber \\ &&\frac{d\Delta p_1}{dx}(x = x_m) = \frac{d\Delta p_2}{dx}(x = x_m)\ . \label{4} \end{eqnarray} The first condition reflects the fact that the electron-hole pairs are separated at the junction depth. The second one indicates the absence of surface recombination at the back of the base. The remaining two expressions are the usual continuity conditions for $\Delta p(x)$ and $d\Delta p(x)/dx$ at $x = x_m$. The collection coefficient is then defined as the ratio of the current at the junction depth, $d_p$, to the pair generation rate in the plane of highest generation at $x = x_m$: \begin{equation} Q = \frac{D}{I_0}\frac{d\Delta p_1}{dx}(x = d_p)\ .
\label{5} \end{equation} The solution of (\ref{2}) and (\ref{3}) that satisfies the first two conditions (\ref{4}) can be written as \begin{eqnarray} &&\Delta p_1(x) = C\sinh\frac{x - d_p}{L}\ ,\nonumber \\ &&\Delta p_2(x) = C'\cosh\frac{x - d}{L} \nonumber \\ &&\ \ \ \ \ \ \ \ \ \ + B\left(e^{-\alpha(x - x_m)} - \beta\,e^{-x/L}\right)\ ,\nonumber \\ &&B = \frac{\alpha\,I_0\,L^2}{D\left(1 - \alpha^2L^2\right)}\ ,\ \ \beta = \alpha L\exp{\left[\left(\frac{1}{L} - \alpha\right)d + \alpha x_m\right]} \end{eqnarray} with constants $C$, $C'$ to be determined from the remaining two conditions (\ref{4}). This procedure yields: \begin{eqnarray} &&Q = \alpha L\,\times \nonumber \\ &&\frac{\alpha L\left(\cosh\frac{d - x_m}{L} - e^{-\alpha(d - x_m)}\right) -\sinh\frac{d - x_m}{L}}{\left[(\alpha L)^2-1\right]\cosh\frac{d - d_p}{L}}\ . \label{8} \end{eqnarray} If $d - x_m \gg L$ and $\alpha(d - x_m) \gg 1$, this expression simplifies to \begin{equation} Q = \frac{\alpha L}{1 + \alpha L}e^{(d_p - x_m)/L}\ . \label{8a} \end{equation} \begin{figure}[t!] \includegraphics[scale=0.25]{fig2.eps} \caption{(a) Collection coefficient, $Q$, as a function of the diffusion length, $L$, for different absorption coefficients, $\alpha$, in the limit $\alpha (d - x_m) \gg 1$, $d - x_m \gg L$, see Eq.~(\ref{8a}). The values used, $\alpha = 10^5, 6\cdot 10^3$, and $6\cdot 10^2$\,cm$^{-1}$, approximately correspond to the respective mean beta-energies of $5.7, 20$, and $60$\,keV for GaAs-based p-n junction \cite{Tri67}. The dashed curves are calculated for different dead layer thicknesses, $x_m$, and $d_p = 10^{-5}$ cm. The solid curves are from the standard relation $Q = \alpha L/(1 + \alpha L)$, valid in the absence of the dead layer.
(b) Collection coefficient (\ref{8a}) for different junction depth values for $x_m = 10^{-5}$\,cm and $\alpha = 10^5$\,cm$^{-1}$, corresponding to the beta-particle energy of about 5.7\,keV in the \Tr/GaAs combination.} \label{fig2} \end{figure} Fig.~\ref{fig2} shows the dependence of the collection coefficient $Q$ on the diffusion length from Eq.~(\ref{8a}). As seen in this figure, the strongest reduction of $Q$ due to the presence of the dead layer occurs for the case of the \Tr\ beta-source. The smallest discrepancy between the $Q$-values obtained with and without taking into account the dead layer is found for the curves corresponding to $\alpha = 6\cdot 10^2$\,cm$^{-1}$, realized in the case of the \Pm-source. In this case, to obtain $Q > 1/2$, one would need a diffusion length $L > 35\ \mu$m. Values $Q \approx 1$ can be achieved only in Si p-n junctions with long minority carrier lifetimes \cite{Gor00}. In Fig.~\ref{fig2}(b), the junction depth was varied at a fixed electron energy (and thus constant $\alpha$) and dead layer thickness. As seen in this figure, the collection coefficient increases not only upon increasing $L$, but also as the junction depth approaches the $x_m$-value. This effect is especially important for small diffusion lengths $L$. A further conclusion from Fig.~\ref{fig2} is that the collection of the electron-hole pairs generated by the electron flux will be quite efficient when the diffusion length exceeds the dead layer thickness, $L > x_m$. An alternative way to increase $Q$ is to use deeper junctions with $d_p \approx x_m$. Let us find the relation between the diffusion length and the SRH lifetime $\tau_{SR}$ for the case of GaAs. The radiative recombination coefficient $A$ in GaAs is an effective parameter defined by the relation $A = A_0(1 - \gamma_r)$ \cite{Din11}, where $A_0 \approx 6\cdot 10^{-10}$ cm$^3$/s \cite{Sach14}, and $\gamma_r$ is the photon re-absorption coefficient.
In our calculations, we assumed the value $A = 2\cdot 10^{-10}$\,cm$^3$/s, as can be derived for poorly reflecting GaAs-based plane-parallel p-n structures without multiple reflection using the approach from \cite{Din11}. In the work \cite{Sach14}, it was shown that for realistic lifetimes $\tau_{SR}$, the open-circuit voltage $V_{OC}$ of GaAs-based p-n junctions increases with the base doping level, $N_d$, and, taking into account the interband Auger recombination, it has a maximum at $N_d \approx 10^{17}$\,cm$^{-3}$. Let us first assume that the GaAs p-n junction base is of p-type, and the diffusion coefficient of electron-hole pairs is 50 cm$^2$/s. Then, for $A \approx 2\cdot 10^{-10}$\,cm$^3$/s, $N_d = 10^{17}$ cm$^{-3}$, and lifetimes $\tau_{SR} = 10^{-9}, 10^{-8}$, and $10^{-7}$ s, the diffusion length $L$ takes the respective values of 2.2, 6.45, and 12.9 $\mu$m. \begin{figure}[t!] \includegraphics[scale=0.25]{fig3.eps} \caption{Collection coefficient $Q$ of a \Tr/GaAs betavoltaic pair as a function of the junction depth for (a) p-type base and (b) n-type base.} \label{fig3} \end{figure} Fig.~\ref{fig3}(a) shows the dependence of the collection coefficient, $Q$, of the \Tr/GaAs pair on the junction depth, $d_p$, for these three values of $\tau_{SR}$ at $x_m = 0.15\,\mu$m \cite{Dmi78} and junction thickness $d = 10\,\mu$m. As seen in this figure, $Q$ is close to 1 for $\Delta x = x_m - d_p < 0.1\,\mu$m. For $\Delta x > 0.1\,\mu$m, the $Q$-value decreases with $\Delta x$, but remains rather large. Presented in Fig.~\ref{fig3}(b) is the collection coefficient vs. $d_p$ for the case when the base region of the p-n junction is of the n-type. In this case, for $\tau_{SR} = 10^{-9}, 10^{-8}$, and $10^{-7}$\,s, with $A \approx 2\cdot 10^{-10}$\,cm$^3$/s, $N_d = 10^{17}$\,cm$^{-3}$, and $D = 7$\,cm$^2$/s, the diffusion length is $L = 0.83, 2.41$, and $4.83\,\mu$m, respectively. As seen in the figure, in this case $Q$ is also quite large.
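The simplified limit, Eq.~(\ref{8a}), is easy to evaluate numerically. The sketch below uses the \Tr/GaAs parameters from the text ($\alpha = 10^5$\,cm$^{-1}$, $x_m = 0.15\,\mu$m) together with a junction depth of $0.1\,\mu$m, a representative value chosen here for illustration, and the p-type-base diffusion lengths quoted above.

```python
import math

def q_simplified(alpha, L, d_p, x_m):
    """Collection coefficient in the limit d - x_m >> L and
    alpha*(d - x_m) >> 1, Eq. (8a):
    Q = aL/(1 + aL) * exp((d_p - x_m)/L), all lengths in cm."""
    aL = alpha * L
    return aL / (1.0 + aL) * math.exp((d_p - x_m) / L)

# Tr/GaAs: alpha ~ 1e5 cm^-1; dead layer 0.15 um; junction depth 0.10 um
alpha, x_m, d_p = 1e5, 0.15e-4, 0.10e-4
for L_um in (2.2, 6.45, 12.9):       # diffusion lengths quoted in the text
    L = L_um * 1e-4                  # um -> cm
    print(f"L = {L_um:5.2f} um -> Q = {q_simplified(alpha, L, d_p, x_m):.3f}")
```

With $d_p$ this close to $x_m$ the exponential factor is near unity and $Q$ stays above 0.9 for all three lifetimes, consistent with the behaviour described for Fig.~\ref{fig3}(a); setting $x_m = d_p$ recovers the standard dead-layer-free relation $Q = \alpha L/(1 + \alpha L)$.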
For $\tau_{SR} = 10^{-7}$ and $10^{-8}$\,s, $Q$ is still close to 1, while for $\tau_{SR} = 10^{-9}$\,s, $Q$ exceeds 0.75 even for small $\Delta x$. It should be noted that, because the electrons emitted by the \Tr-source are rather strongly absorbed by the auxiliary layers of a betavoltaic element (such as protective coatings or contact layers), an additional reduction of the beta-generated current can take place, leading to a reduction of the efficiency. Let us now analyze the collection coefficient for the \Pm/GaAs pair. In this case, according to \cite{Tri67}, $\alpha \approx 600$\,cm$^{-1}$, i.e., the excess electron-hole density decays much more slowly with depth than in the \Tr/GaAs case. For the latter, the inequality $\alpha L \gg 1$ is always satisfied even for the shortest lifetime of $10^{-9}$\,s. In contrast, for the \Pm\ source, $\alpha L = 1.5$ for $L = 25\,\mu$m, while $\alpha L = 0.06$ for $L = 1\,\mu$m, so that $Q$ is always notably smaller than 1. But this is not the only reason for the reduction of $Q$ in realistic \Pm/GaAs structures. When manufacturing solar cells based on direct-bandgap semiconductors, such as GaAs, the full thickness of the p-n junction is chosen rather small (of the order of a few $\mu$m). Such structures were used in \cite{And00}. In contrast, for the \Pm/GaAs pair used in betavoltaics, the situation might be very different, especially for large values of $L$. In this case, the product $\alpha d$ will be small, so that full absorption of the beta-flux requires much thicker p-n junctions than those typically used in photovoltaics. \begin{figure}[t!] \includegraphics[scale=0.25]{fig4.EPS} \caption{Collection coefficient of a \Pm/GaAs betavoltaic element as a function of junction depth for different Shockley-Read lifetimes and element thicknesses for the case of (a) p-type base and (b) n-type base.
} \label{fig4} \end{figure} Shown in Fig.~\ref{fig4} is the collection coefficient as a function of $d_p$ for a \Pm/GaAs pair calculated for different lifetimes $\tau_{SR}$ and junction thicknesses $d$ of 10 and 100\,$\mu$m. In this case, according to \cite{Dmi78}, $x_m = 3\cdot 10^{-4}$\,cm. Panels (a) and (b) correspond to the cases of p- and n-type base conduction, respectively. As seen in the figure, rather high values of $Q \ge 0.4$ for the \Pm/GaAs pair can be achieved only for the junction thickness $d \approx 100\,\mu$m. Also, the collection coefficient decreases dramatically as $\tau_{SR}$ decreases. It should be noted that similar results for the attainable $Q$ are expected for other direct-bandgap A$_3$B$_5$ semiconductors, in particular, those based on ternary compounds. \section{Open-circuit voltage analysis} When estimating the limiting efficiency value \cite{Olstech, Ada12}, we used the Shockley-Queisser approach \cite{Sho61}, in which not only the current density, but also the open-circuit voltage, $V_{OC}$, is assumed to be maximal. Therefore, our next task is to calculate the open-circuit voltage, $V_{OC}$, with realistic values of $\tau_{SR}$. It is given by the standard expression \begin{equation} V_{OC} = \frac{k_BT}{q}\ln\frac{N_d\Delta p^*}{n_i^2}\ , \label{11} \end{equation} where $\Delta p^* = \Delta p(x = d_p + w)$ is the excess minority carrier density in the base at the boundary between the space-charge region of thickness $w$ and the quasineutral region, $N_d$ is the equilibrium density of the majority carriers in the quasineutral base region, and $n_i$ is the intrinsic charge carrier density. It is related to the effective densities of states in the conduction and valence bands, $N_c$ and $N_v$, as \begin{equation} n_i = \sqrt{N_cN_v}\exp\left(-\frac{E_g}{2k_BT}\right)\ . \label{17} \end{equation} We assume that both $d_p$ and $w$ are much smaller than the diffusion length $L$.
This allows us to approximate \begin{equation} \Delta p(x = 0) \approx \Delta p^*\ . \end{equation} Such an approximation introduces a negligible error into $V_{OC}$ from Eq.~(\ref{11}) in view of its logarithmic dependence on $\Delta p^*$. We will assume that recombination dominates in the quasineutral base region and in the space-charge region. Then, $V_{OC}$ can be obtained using the approach from \cite{Sach14}. Taking into account the generation-recombination processes, we first write the continuity equation for the excess carrier density supplemented by the boundary conditions: \begin{eqnarray} &&\frac{d^2\Delta p}{dx^2} - \frac{\Delta p}{L^2} - r(x)\,\Delta p(x) + g(x) = 0\ , \nonumber \\ &&\frac{d\Delta p}{dx}(x = d) = 0\ ,\nonumber\\ && D\frac{d\Delta p}{dx}(x = 0) = S_0\,\Delta p^*\ , \label{9} \end{eqnarray} where the third term describes recombination processes in the space-charge region of the abrupt junction, and the last one corresponds to the beta-induced generation. The first boundary condition is consistent with our assumption $S_d \ll D/L$ from the beginning of the previous section, and the second one accounts for recombination effects in the $x = d_p + w$ plane. Integration of the continuity equation results in the balance equation for the generation-recombination currents, according to which the current density for electronic excitation is proportional to the integral of the generation term, \begin{equation} J_\beta = q\,\int_0^d dx\,\frac{\Delta p(x)}{\tau_b} + q\left(S_0 + R_{SC}\right)\,\Delta p^*\ , \label{10} \end{equation} where $q$ is the elementary charge. The right-hand side of (\ref{10}) accounts for recombination in the bulk, at the front surface of the emitter, and within the space-charge region.
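The balance expressed by Eq.~(\ref{10}) can be checked numerically by discretizing the continuity equation (\ref{9}). The sketch below drops the space-charge term $r(x)$ and uses an exponential generation profile; both the profile and all numerical inputs are illustrative assumptions, not the paper's exact ones:

```python
from math import exp

def solve_continuity(d, L, D, S0, alpha, g0, N=2000):
    """Finite-difference solution of Eq. (9) with r(x) = 0:
    p'' - p/L^2 + g(x)/D = 0, p'(d) = 0, D p'(0) = S0 p(0).
    g(x) = g0*exp(-alpha*x) is an assumed generation profile."""
    h = d / N
    n = N + 1
    # tridiagonal system a[i]*p[i-1] + b[i]*p[i] + c[i]*p[i+1] = r[i]
    a, b, c, r = [0.0]*n, [0.0]*n, [0.0]*n, [0.0]*n
    b[0], c[0] = -1.0/h - S0/D, 1.0/h        # D p'(0) = S0 p(0)
    for i in range(1, N):
        a[i], b[i], c[i] = 1.0/h**2, -2.0/h**2 - 1.0/L**2, 1.0/h**2
        r[i] = -g0 * exp(-alpha * i * h) / D
    a[N], b[N] = -1.0, 1.0                   # p'(d) = 0
    for i in range(1, n):                    # Thomas algorithm
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        r[i] -= m * r[i-1]
    p = [0.0]*n
    p[N] = r[N] / b[N]
    for i in range(N-1, -1, -1):
        p[i] = (r[i] - c[i]*p[i+1]) / b[i]
    return h, p

# illustrative numbers: d = 10 um, L = 6.45 um, D = 50 cm^2/s, S0 = 1e4 cm/s
d, L, D, S0, alpha, g0 = 1e-3, 6.45e-4, 50.0, 1e4, 3e4, 1e18
h, p = solve_continuity(d, L, D, S0, alpha, g0)
N = len(p) - 1
generation = h * sum(g0 * exp(-alpha*i*h) for i in range(1, N))
bulk_rec   = (D / L**2) * h * sum(p[1:N])    # integral of p/tau_b
surface    = S0 * p[0]                       # front-surface term of Eq. (10)
# generation balances bulk + surface recombination, i.e. Eq. (10) with R_SC = 0
```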
The space-charge region recombination rate is given by \cite{Sze} \begin{eqnarray} &&R_{SC}(\Delta p^*) = \frac{L_D}{\sqrt{2}\tau_{SR}}\int_{y_{pn}}^{-1} dy\,N_d\,\left(1 - y + e^y\right)^{-1/2}\times \nonumber\\ &&\Big[N_d e^y + n_i e^{E_r/k_BT} + b\left(\frac{n_i^2}{N_d} + \Delta p^*\right)e^{-y} \nonumber\\ &&\ \ \ \ \ \ \ + b n_i e^{-E_r/k_BT}\Big]^{-1}\ , \nonumber \end{eqnarray} where $b = \sigma_p/\sigma_n$ is the ratio of the capture cross-sections of holes and electrons by a recombination level, $E_r$ is the recombination level energy measured from the middle of the bandgap, $y_{pn}$ is the dimensionless potential at the p-n boundary, and $L_D$ is the Debye length. To evaluate the first integral in (\ref{10}), we have employed the following approximate procedure. First, we write the solution of the continuity equation (\ref{9}) as a sum of homogeneous and inhomogeneous parts, \begin{equation} \Delta p(x) = \frac{e^{-x/L} + e^{(x-2d)/L}}{1 + e^{-2d/L}}\Delta p^* + \Delta p_i(x)\ , \end{equation} where the homogeneous term satisfies the first boundary condition in (\ref{9}) and gives the value $\Delta p(x = 0) = \Delta p^*$. The inhomogeneous contribution $\Delta p_i(x)$, with $\Delta p_i(x = 0) = 0$, is notably different from zero only within a relatively thin layer below the front surface of the emitter, where the generation-recombination processes take place. Therefore, its contribution to the integral can be neglected in comparison to that of the homogeneous term, allowing us to write \begin{equation} \int_0^d dx\Delta p(x) \approx \Delta p^*L\tanh(d/L)\ . \end{equation} This approximation should produce a negligible error in $V_{OC}$ in view of its logarithmic dependence on $\Delta p^*$. Substitution of this result into Eq.~(\ref{10}), taking into account that $L^2 = D\tau_b$, yields \begin{equation} J_\beta = q\Delta p^*\left[\frac{D}{L}\tanh\left(\frac{d}{L}\right) +S_0 + R_{SC}(\Delta p^*)\right]\ .
\label{14} \end{equation} The current density $J_\beta$ is inversely proportional to the energy required to create one electron-hole pair, $\varepsilon$, which is approximately related to the bandgap $E_g$ as \cite{Klein68} \begin{equation} \varepsilon = 2.8\,E_g + 0.5\,\text{eV}\ . \label{10a} \end{equation} Denoting the current density in the case of Si ($E_g = 1.12$\,eV) by $J_0$, the current density for an arbitrary bandgap can be approximated as \begin{equation} J_\beta =J_0\,Q\cdot 3.64\,\text{eV}/\varepsilon\ . \end{equation} We note that, usually, $J_0$ is in the $1$ -- $10^2\,\mu$A/cm$^2$ range \cite{Olstech}. The value of $\Delta p^*$ found from Eq.~(\ref{14}) should be substituted into Eq.~(\ref{11}) to obtain the open-circuit voltage $V_{OC}$. \begin{figure}[t!] \includegraphics[scale=0.25]{fig5.EPS} \caption{Open-circuit voltage as a function of the base doping level for different Shockley-Read lifetimes in the case of (a) p-type base and (b) n-type base for $E_g$ = 1.43\,eV, $T = 300$\,K, and $J_0 = 10$\,$\mu$A/cm$^2$.} \label{fig5} \end{figure} Fig.~\ref{fig5} shows the dependence of $V_{OC}$ of a GaAs-based p-n junction on the base doping level, $N_d$, neglecting the surface recombination, that is, $S_0 \approx 0$. As seen in Fig.~\ref{fig5}, $V_{OC}$ increases with $N_d$. On the one hand, the values of $V_{OC}$ for the \Tr/GaAs pair are notably smaller than in solar cells \cite{Sach14}, because the beta-produced current densities are at least two orders of magnitude smaller than the short-circuit current densities in photovoltaic cells. On the other hand, the open-circuit voltages in Fig.~\ref{fig5} exceed the values obtained experimentally in \cite{And00}. The reason is that, in \cite{And00}, the current density $J_0$ was of the order of $1\,\mu$A/cm$^2$, whereas in our calculations, we have taken $J_0 = 10\,\mu$A/cm$^2$.
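The modest effect of $J_0$ on $V_{OC}$ follows from the logarithm in Eq.~(\ref{11}): since $\Delta p^*$ in Eq.~(\ref{14}) scales linearly with $J_\beta$ at fixed recombination parameters, one decade of current shifts $V_{OC}$ by only $k_BT\ln 10 \approx 60$ meV. A minimal sketch (the intrinsic-density value is an assumed room-temperature GaAs figure, not taken from the text):

```python
from math import log

kT = 0.02585  # thermal energy k_B*T at 300 K, eV

def v_oc(Nd, dp_star, n_i):
    # Open-circuit voltage, Eq. (11): V_OC = (k_B T / q) ln(N_d dp* / n_i^2)
    return kT * log(Nd * dp_star / n_i**2)

n_i = 2.0e6   # assumed intrinsic carrier density of GaAs at 300 K, cm^-3
Nd = 1e17     # base doping level, cm^-3

# dp* is proportional to J_beta (Eq. 14), so a 10x larger J_0 adds
# only kT*ln(10), roughly 0.06 V, to V_OC:
shift = v_oc(Nd, 1e8, n_i) - v_oc(Nd, 1e7, n_i)
```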
If the values $J_0 = 1\,\mu$A/cm$^2$, $N_d = 5\cdot 10^{16}$\,cm$^{-3}$, and $\tau_{SR} = 10^{-9}$\,s are used, we obtain $V_{OC} = 0.44$\,V, which practically coincides with the value given in \cite{And00}. \section{Refined calculation of the limiting betaconversion efficiency} According to Olsen \cite{Olstech}, the efficiency of a betavoltaic element, $\eta$, is \begin{equation} \eta = \eta_\beta\,\eta_C\,\eta_S\ , \end{equation} where \begin{equation} \eta_\beta = N_\beta/N_0 \end{equation} is the fraction of the beta-flux that reaches the semiconductor, \begin{equation} \eta_C = (1 - r)\,Q \end{equation} is the coupling efficiency, given by the product of the absorption probability of a beta-particle ($r$ is the electron reflection coefficient from the semiconductor surface) and the collection efficiency $Q$ of electron-hole pairs, and, finally, the semiconductor efficiency is \begin{equation} \eta_S = q\,V_{OC}\,FF/\varepsilon\ , \end{equation} where $q$ is the elementary charge, $V_{OC}$ is the open-circuit voltage, $FF$ is the fill factor, and $\varepsilon$ is the energy necessary to generate one electron-hole pair from Eq.~(\ref{10a}). Let us obtain $V_{OC}$ within the Shockley-Queisser approximation, where $\tau_{SR} \to \infty$, $S_0, R_{SC} \to 0$, and the only recombination mechanism present is radiative recombination, characterised by the coefficient $A$. In this case, $V_{OC}^{lim}$ can be found analytically from (\ref{10}) and (\ref{14}): \begin{equation} V_{OC}^{lim} = \frac{k_BT}{q}\ln\frac{J_\beta}{qAdn_i^2}\ . \label{16} \end{equation} The fill factor can be found using the expression from \cite{Olstech}, \begin{equation} FF = \left[v_{OC} - \ln(v_{OC} + 0.72)\right]/(v_{OC} + 1)\ , \label{18} \end{equation} where $v_{OC} = qV_{OC}/k_BT$. To calculate the limiting beta-conversion efficiency, we take $Q = 1$, $r = 0$, and $\eta_\beta = 1$, corresponding to the bidirectional source in the terminology of \cite{Olstech}.
In this case \begin{equation} \eta_{lim} = \frac{q\,V_{OC}^{lim}\,FF_{lim}}{2.8\,E_g + 0.5\,\text{eV}}\ , \label{19} \end{equation} where $V_{OC}^{lim}$ is given by (\ref{16}). When calculating $\eta_{lim}$, several issues may arise. First, the parameters $N_c$, $N_v$, and $A$ are material-specific. Second, when evaluating $V_{OC}^{lim}$ and $FF_{lim}$, Olsen used, for each source, a specific current density $J_0$: of the order of $10^2\,\mu$A/cm$^2$ for \Pm\ and $1\,\mu$A/cm$^2$ for \Tr. Finally, $V_{OC}^{lim}$ depends on the p-n junction thickness $d$. Therefore, all parameters in (\ref{19}) must be specified. Since such key parameters as $A$, $N_c$, and $N_v$ are known only for concrete semiconductors and concrete bandgap values $E_g$, in the best-case scenario, the dependence $\eta_{lim}(E_g)$ can be found as a set of support points for the known semiconductors with different $E_g$. Fitting this with a smooth curve might not be accurate enough. In this work, we calculated $\eta_{lim}$ only for the case of GaAs using Eq.~(\ref{19}). For $A = 2\cdot 10^{-10}$\,cm$^3$/s and $d = 10\,\mu$m, Eq.~(\ref{19}) gives $\eta_{lim} \approx 17$\,\% for $J_0 = 10^2\,\mu$A/cm$^2$, and $\eta_{lim} \approx 14$\,\% for $J_0 = 1\,\mu$A/cm$^2$. Note that the values of $\eta_{lim}$ obtained here notably exceed the ones obtained by Olsen in \cite{Olstech, Ols12}. In the rest of this work, we will use these values for the \Pm/GaAs and \Tr/GaAs combinations, respectively. \begin{figure}[t!] \includegraphics[scale=0.25]{fig6.EPS} \caption{Beta-conversion efficiency of a \Tr/GaAs pair as a function of junction depth for different Shockley-Read lifetimes for the case of (a) p-type base and (b) n-type base.} \label{fig6} \end{figure} \begin{figure}[t!] \includegraphics[scale=0.25]{fig7.EPS} \caption{Beta-conversion efficiency of a \Pm/GaAs element vs.
junction depth for different Shockley-Read lifetimes and element thicknesses for (a) p-type base and (b) n-type base.} \label{fig7} \end{figure} \section{Calculation of the attainable betaconversion efficiency} Fig.~\ref{fig6} shows the attainable efficiency as a function of $d_p$ for the \Tr/GaAs combination, obtained from \begin{equation} \eta = \eta_{lim}Q\frac{V_{OC}}{V^{lim}_{OC}}\ , \label{20} \end{equation} where $\eta_{lim} \approx 14$\,\%, $Q$ is given by Eq.~(\ref{8}), $V_{OC}$ is found from Eq.~(\ref{11}), and $V_{OC}^{lim}$ from Eq.~(\ref{16}). When plotting Fig.~\ref{fig6}, we varied the lifetime at a constant $d = 10\,\mu$m. Panels (a) and (b) correspond to the base of the p- and n-type, respectively. As seen in Fig.~\ref{fig6}, the attainable efficiency values are rather high, lying in the range of (6.4 - 12.5)\%. It should be noted that our results for the \Tr/GaAs pair agree well with those given in the review \cite{Ols12} citing Refs.~\cite{And00, Bow02, Ada12}, namely, $\eta =$ (4 - 7) \%. In these works, a \Tr-source was used with A$_3$B$_5$-based semiconductors. But, as evident from the figures shown, the possibilities of increasing the efficiency of \Tr/A$_3$B$_5$ betaconversion are far from being exhausted. Shown in Fig.~\ref{fig7} is the attainable beta-efficiency (\ref{20}) as a function of $d_p$ for the \Pm/GaAs pair with $\eta_{lim}$ = 17\,\%. The $\tau_{SR}$ values used were $10^{-9}, 10^{-8}$, and $10^{-7}$\,s, and the GaAs thicknesses were 10 and 100 $\mu$m. Figs.~\ref{fig7}(a) and (b) correspond to the p- and n-type base conductivity, respectively. As seen in this figure, $\eta$ decreases rather strongly as $\tau_{SR}$ is decreased. For the highest $\tau_{SR} = 10^{-7}$\,s, $\eta$ decreases with decreasing $d$. The highest attainable efficiency, $\eta = 7.25$\,\%, is achieved for $\tau_{SR} = 10^{-7}$ s and $d = 100\,\mu$m, and the lowest value of $0.51$ \% is realized for $\tau_{SR} = 10^{-9}$ s and $d = 10\,\mu$m.
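These trends follow directly from Eqs.~(\ref{18}) and (\ref{20}); a short numerical sketch (the $V_{OC}$ input is the value quoted for \Tr/GaAs earlier, the remaining inputs are illustrative assumptions):

```python
from math import log

kT = 0.02585  # k_B*T at 300 K, eV

def fill_factor(V_oc):
    # Empirical fill factor, Eq. (18), with v_OC = q*V_OC / (k_B*T)
    v = V_oc / kT
    return (v - log(v + 0.72)) / (v + 1.0)

def attainable_efficiency(eta_lim, Q, V_oc, V_oc_lim):
    # Eq. (20): eta = eta_lim * Q * V_OC / V_OC^lim
    return eta_lim * Q * V_oc / V_oc_lim

ff = fill_factor(0.44)  # fill factor at the V_OC quoted for Tr/GaAs
# illustrative inputs: Q = 0.9 and V_OC^lim = 0.9 V are assumptions
eta = attainable_efficiency(0.14, 0.9, 0.44, 0.9)
```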
Thus, we conclude that a \Pm/GaAs-based betaconverter is not as efficient as a \Tr/GaAs-based one. Perhaps the very small efficiency of the \Pm/GaAs battery obtained in \cite{Fli64} is due to the small thickness of GaAs and the small lifetime $\tau_{SR}$. The same applies also to the cases when, instead of GaAs, other direct-bandgap semiconductors are used. \section{Conclusions} Our analysis, focusing on the attainable collection coefficient $Q$ and open-circuit voltage $V_{OC}$, has revealed the following features of current collection in GaAs-based beta-elements. Efficient collection of the electron-hole pairs generated by a beta-flux can be achieved when the diffusion length exceeds the dead layer thickness, $L > x_m$. An alternative way to increase the collection coefficient is to use deep junctions, for which $d_p \simeq x_m$. Additional mechanisms responsible for the reduction of the current generated by beta-electrons are possible, leading to smaller betaconversion efficiency. They may be due, for instance, to the strong absorption of the beta-electrons by auxiliary layers of a betavoltaic element. Using the Shockley-Queisser approximation, we have derived the limiting betaconversion efficiency, $\eta_{lim}(E_g)$. Our analysis has shown that, because the main parameters affecting the efficiency are very different for different semiconductors, the $\eta_{lim}(E_g)$ curve can be built as a set of support points for semiconductors with different bandgaps, and not as a smooth curve. The \Pm\ beta-source performs more poorly than the \Tr-source, because the electron-hole pair generation depth in the case of the \Pm-source is large, whereas the diffusion length in GaAs is small. Therefore, the majority of electron-hole pairs generated in the base recombine before reaching the p-n junction. In the case of the \Tr-source, the picture is different. The collection coefficient is rather high, because of the small generation depth of electron-hole pairs.
Therefore, the realistic betaconversion efficiency for the \Tr/GaAs pair will be rather high for relevant parameters (lifetimes and diffusion coefficients) of the semiconductor. Similar results are also expected when other direct-bandgap semiconductors are used instead of GaAs. \acknowledgments M.E. would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for financial support.
\section{Introduction and background} \label{1} Let $G= G(V, E)$ be a simple graph. For any vertex $v\in V(G)$, $d(v)$ denotes the degree of $v$, $N(v)$ denotes the set of neighbors of $v$, and $N[v]$ denotes the closed neighborhood, i.e. $N[v]:=N(v)\cup \{v\}$. A subset $I\subseteq V(G)$ is called \textit{independent} if it does not induce any edge. A \textit{maximal independent set} is an independent set which is not a proper subset of another independent set (it cannot be extended). A maximum independent set is an independent set of maximum size; its size is denoted by $\alpha(G)$. A subset $D\subseteq V(G)$ is a \textit{dominating} set in $G$ if each vertex in $V(G)\setminus D$ is adjacent to at least one vertex of $D$, that is, $\forall v\in V(G)\setminus D$, $|N(v)\cap D|\geq 1$. We call a set $D$ \textit{$k$-dominating} if each vertex in $V(G)\setminus D$ is adjacent to at least $k$ vertices of $D$, that is, $\forall v\in V(G)\setminus D$, $|N(v)\cap D|\geq k$. The theory of independent sets and dominating sets has been studied extensively over the last 60 years. Following the concept of W\l{}och \cite{wloch}, we study \textit{$k$-dominating independent sets}, or $k$-DISes for brevity, in the case $k>1$. Note that the case $k=1$, when a set $W$ is dominating and independent at the same time, is also extensively studied. These sets are called \textit{kernels} of the graphs (due to von Neumann and Morgenstern) and they clearly coincide with the maximal independent sets. The possible number of kernels has been determined in many graph families, including connected graphs, bipartite graphs, trees, and triangle-free graphs; see the results of Moon, Moser, F\"uredi, Hujter and Tuza, Jou and Chang \cite{chang1, chang2, furedi, HT, moon}.
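The notions above can be made concrete with a brute-force enumerator (exponential in the number of vertices, for illustration only); the edge-list encodings of the small test graphs are our own helper inputs:

```python
from itertools import combinations

def k_dises(n, edges, k):
    """All k-dominating independent sets of the graph on vertices 0..n-1."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = []
    for size in range(n + 1):
        for S in combinations(range(n), size):
            Sset = set(S)
            # independent: no vertex of S has a neighbor in S
            independent = all(adj[u].isdisjoint(Sset) for u in S)
            # k-dominating: every outside vertex has >= k neighbors in S
            dominated = all(len(adj[v] & Sset) >= k
                            for v in range(n) if v not in Sset)
            if independent and dominated:
                out.append(Sset)
    return out

# P_3 (path on 3 vertices) has exactly one 2-DIS: its two endpoints.
path3 = k_dises(3, [(0, 1), (1, 2)], 2)
# K_{2,2} = C_4 has exactly two 2-DISes: the two sides of the bipartition.
c4 = k_dises(4, [(0, 2), (0, 3), (1, 2), (1, 3)], 2)
```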
\medskip Our principal function is formulated in the following \begin{nota} Let $\hbox{\rm mi}_k(n)$ denote the maximum number of $k$-DISes in graphs of order $n$, and let $\hbox{\rm mi}_k(n, \mathcal{F})$ denote the maximum number of $k$-DISes in the $n$-vertex members of the graph family $\mathcal{F}$. If $\mathcal{F}$ consists of a single graph $G$, we denote by $\hbox{\rm mi}_k(G)$ the number of $k$-DISes in $G$. \end{nota} Concerning graph constructions, we will use \begin{nota} For arbitrary graphs $G$ and $H$, $G+H$ denotes the disjoint union of $G$ and $H$. Similarly, if a parameter $k\in \mathbb{Z}^+$ is given, $kG$ denotes the disjoint union of $k$ copies of $G$. $K_m\square K_m$ denotes the Cartesian product of two $K_m$ graphs; in other words, it is the strongly regular lattice graph $L(m)$, or rook's graph. Finally, $(K_m)^t$ denotes the Cartesian product of $t$ copies of $K_m$: $K_m\square K_m\square \ldots \square K_m$. \end{nota} \begin{obs}\label{obsit} $\hbox{\rm mi}_k(G+H)=\hbox{\rm mi}_k(G)\cdot\hbox{\rm mi}_k(H)$ for any two graphs $G$ and $H$. \end{obs} \begin{nota} Let $\zeta_k(G):=\sqrt[n]{\hbox{\rm mi}_k(G)}$ for a fixed graph $G$ on $n$ vertices and let $$\zeta_k(n):=\sqrt[n]{\hbox{\rm mi}_k(n)}, \ \ \zeta_k(n, \mathcal{F}):=\sqrt[n]{\hbox{\rm mi}_k(n, \mathcal{F})}.$$ \end{nota} \begin{theorem}\label{alap0} \begin{itemize} \item[(i)] $\zeta_k(n) \in [1,2] \ \ \forall k,n \in \mathbb{Z}^+, k\leq n.$ \item[(ii)] $\zeta_k(G) \leq \lim \inf \zeta_k(n) \ \ \forall k \in \mathbb{Z}^+$ and for every fixed graph $G$. \item[(iii)] $\forall k \ \ \exists \lim \zeta_k(n)$. \end{itemize} \end{theorem} \begin{proof} Part $(i)$ is straightforward since $1\leq\hbox{\rm mi}_k(n)\leq 2^n$ in view of the empty graph and the number of all possible subsets of the vertex set.\\ Suppose $\zeta_k(G)\geq 1$.
If we apply Observation \ref{obsit} to $\left\lfloor \frac{n}{|V(G)|}\right\rfloor$ disjoint copies of $G$ and a suitable number of additional isolated vertices, we get $\hbox{\rm mi}_k(n)\geq \hbox{\rm mi}_k(G)^{\left\lfloor\frac{n}{|V(G)|}\right\rfloor}$, hence part $(ii)$ follows.\\ Finally, part $(i)$ and part $(ii)$ together imply part $(iii)$. \end{proof} Our main theorems are \begin{theorem} \label{fo1} The order of magnitude of the maximum number of $2$-DISes is bounded as follows. $$1.22<\sqrt[9]{6}\leq \lim \zeta_2(n) \leq \sqrt[5]{3}<1.2458.$$ \end{theorem} \begin{theorem} \label{fo2} For every $k>2$, $$\sqrt[2k]{2}\leq \lim\zeta_k(n) \leq \sqrt[k+1]{2}.$$ \end{theorem} The paper is built up as follows. Section 2 summarizes the main known results on the number of $k$-DISes for $k=1$. In Section 3, we give a simple characterization of graphs which contain a $k$-dominating independent set and point out the existence of large graph families not containing $2$-DISes. Next we prove that if a $k$-DIS exists in a tree, then it is unique. Furthermore, we present an efficient algorithm which provides a $k$-DIS in a given tree or proves the non-existence of such a set. Finally, we present graph constructions containing many $k$-DISes. Proposition \ref{expect} essentially states that a random graph contains a huge number of $k$-DISes for any fixed $k$. We prove the lower bounds of Theorem \ref{fo1} and Theorem \ref{fo2} in Section 4. These bounds are based on constructions. The graphs providing the lower bound on $\zeta_k(n)$ have different structure in the cases $k=1, 2, 3$ and $k\geq 4$. One of them leads to the determination of the number of ternary $(n, M, 2)_3$ MDS codes as well. Extremal constructions are often obtained from finite geometry. (For detailed descriptions we refer to \cite{FS}.)
In our case, specific examples of different types of graphs with many $2$-dominating ($k$-dominating) independent sets are given based on hyperovals and generalized $\{k;n\}$-arcs. Section 5 is devoted to the upper bound part of Theorem \ref{fo1} and Theorem \ref{fo2}. Finally, some open questions and concluding remarks are collected in Section 6. \bigskip \section{Results on the number of maximal independent sets: $1$-DISes} \label{2} Erd\H os and Moser raised the question of determining the maximum number of maximal cliques in $n$-vertex graphs. Note that this is the same as the maximum number of maximal independent sets (that is, $1$-DISes) an $n$-vertex graph can have. Moon and Moser answered this question by proving the following well-known \begin{theorem}[Moon-Moser, \cite{moon}]\label{alap} The following equality holds: $$\hbox{\rm mi}_1(n)= \left\{ \begin{array}{lll} 3^{n/3} & \textrm{if } n\equiv 0 \pmod 3 \\ \frac{4}{3}\cdot3^{ \lfloor n/3 \rfloor} & \textrm{if } n\equiv 1 \pmod 3\\ 2\cdot3^{ \lfloor n/3 \rfloor} & \textrm{if } n\equiv 2 \pmod 3 \end{array} \right.$$ \end{theorem} \noindent Moreover, they proved that equality is attained if and only if the graph $G$ is isomorphic to the graph $\frac{n}{3} K_3$ (if $n\equiv 0 \pmod 3$); to one of the graphs $(\lfloor \frac{n}{3} \rfloor-1)K_3 + K_4$ or $(\lfloor \frac{n}{3} \rfloor-1)K_3 + 2K_2$ (if $n\equiv 1 \pmod 3$); or to $\lfloor n/3 \rfloor K_3 + K_2$ (if $n\equiv 2 \pmod 3$). \begin{corollary} $\lim \zeta_1(n)= \sqrt[3]{3}$, and $\lim \zeta_k(n) \in [1, \sqrt[3]{3}]$ for all $k>1$. \end{corollary} \noindent For connected graphs the question was raised by Wilf \cite{wilf}, and the answer is fairly similar. \begin{theorem}[F\"uredi \cite{furedi}, Griggs, Grinstead, Guichard \cite{griggs}] Let $\mathcal{F}_{con}$ be the family of connected graphs.
Then $$\hbox{\rm mi}_1(n, \mathcal{F}_{con})= \left\{ \begin{array}{lll} \frac{2}{3}\cdot3^{n/3}+\frac{1}{2}\cdot2^{n/3} & \textrm{if } n\equiv 0 \pmod 3 \\ 3^{ \lfloor n/3 \rfloor}+ \frac{1}{2}\cdot2^{ \lfloor n/3 \rfloor} & \textrm{if } n\equiv 1 \pmod 3\\ \frac{4}{3}\cdot3^{ \lfloor n/3 \rfloor}+\frac{3}{4}\cdot2^{ \lfloor n/3 \rfloor} & \textrm{if } n\equiv 2 \pmod 3 \end{array} \right.$$ \end{theorem} The extremal graphs are determined as well. In these graphs, there is a vertex of maximum degree, and its removal yields a member of the extremal graphs list of Theorem \ref{alap}. \medskip \noindent Wilf, and later Sagan, studied the family of trees. \begin{theorem}[\cite{wilf, sagan}]\label{tree} Let $\mathcal{T}$ be the family of trees. Then the following equality holds: $$\hbox{\rm mi}_1(n, \mathcal{T})= \left\{ \begin{array}{ll} \frac{1}{2}2^{n/2}+1 & \textrm{if } n\equiv 0 \pmod 2 \\ 2^{ \lfloor n/2 \rfloor} & \textrm{if } n\equiv 1 \pmod 2 \end{array} \right.$$ The extremal trees can be classified. \end{theorem} \begin{corollary} $\lim \zeta_1(n, \mathcal{T})= \sqrt{2}$. \end{corollary} \begin{theorem}[Hujter, Tuza \cite{HT}] Every triangle-free graph on $n \geq 4$ vertices has at most $2^{n/2}$ or $5 \cdot 2^{( n - 5 )/2}$ maximal independent sets, according as $n$ is even or odd. In each case, the extremal graph is unique. \end{theorem} \medskip \section{$k$-DISes --- existence and characterizations } While kernels ($1$-DISes) obviously exist in every graph, this is far from being true for $k$-DISes for a fixed $k>1$. To illustrate this phenomenon, consider \begin{proposition} Let $G$ be (i) a complete graph, (ii) an odd cycle, or (iii) the complement (within $K_n$) of a connected triangle-free graph with at least $2$ edges. Then $G$ does not contain a $k$-dominating independent set for $k>1$. \end{proposition} \begin{proof} It is straightforward to check the statement for (i) and (ii).
If $G$ is the complement of a connected triangle-free graph, then an independent set in $G$ consists of at most $2$ vertices; however, no pair of vertices exists such that every other vertex of the graph is adjacent to both of them. \end{proof} Hence the question naturally arises whether containing (many) $k$-dominating independent sets is a rare property for $k>1$. Consider the Erd\H os-R\'enyi random graph $G_{n,p}$. Let $X_{t,1}$ denote the random variable which counts the number of maximal independent sets of size $t$ in $G_{n,p}$, and let $X_{t,k}$ denote the random variable which counts the number of $k$-DISes of size $t$ in $G_{n,p}$. \\ Following the idea of Bollob\'as and Erd\H os on maximal cliques \cite{EB}, one can easily calculate the expected value of $X_{t,1}$, $X_{t,2}$, or generally of $X_{t,k}$ as well. Note that the expected value of $X_{t,1}$ is well known; we include it here only for comparison. \begin{proposition}\label{expect} $\mathbb{E}(X_{t,1})=\binom{n}{t}(1-p)^{\binom{t}{2}}\left(1-(1-p)^t\right)^{n-t},$ \hspace{0.6cm} $\mathbb{E}(X_{t,2})=\binom{n}{t}(1-p)^{\binom{t}{2}}\left(1-(1-p)^t-tp(1-p)^{t-1}\right)^{n-t}$, \hspace{0.6cm} $\mathbb{E}(X_{t,k})=\binom{n}{t}(1-p)^{\binom{t}{2}}\left(1-(1-p)^t-tp(1-p)^{t-1}- \cdots - \binom{t}{k-1}p^{k-1}(1-p)^{t-k+1} \right)^{n-t}$. \end{proposition} \begin{corollary} \label{kov} Let $p=1/2$. If $t<\log_2{n}-2\log\log n$ or $t >2\log_2{n}$, then $\mathbb{E}(X_{t,1})<1/n$ and so $\mathbb{E}(X_{t,k})<1/n$ for all $k$. If $t=c\log_{2}n$ with constant $1<c<2$, then $\mathbb{E}(X_{t,k})= n^{\Omega( c(1-\frac{c}{2})\log_{2}n)}$ for all $k$. \end{corollary} Next, we present an easy constructive method to obtain graphs with $k$-DISes. \begin{construction}\label{konst} Let $\Sigma=\{S_{t_i}\}$ be a set of disjoint stars with centers $v_i$ such that $t_i\geq k$.
Then the following two operations are allowed: \begin{itemize} \item[i] Identification of $x\in N(v_i)$ and $y\in N(v_j)$ in the $i$th and $j$th star for a pair $(i,j)$, \item[ii] Addition of an edge between $v_i$ and $v_j$ for a pair $(i,j)$. \end{itemize} \end{construction} \begin{claim} Every graph $G$ which contains a $k$-dominating independent set can be obtained by Construction \ref{konst}. \end{claim} \begin{proof} Consider a $k$-dominating independent set $D$ in $G$, and let $D':=V(G)\setminus D$. Delete the edges of $G\mid_{D'}$. In the resulting graph, the degree $d(v)$ of every $v\in D'$ is at least $k$, while $N(D')=D$ is an independent set. The claim thus follows. \end{proof} For the family of trees, we saw in Section 2 that the number of $1$-DISes can be exponential in the number of vertices via Theorem \ref{tree}. If $k>1$, the situation is completely different from the case $k=1$. Confirming an extended version of a conjecture of Pawe{\l} Bednarz \cite{pawel} on $k$-DISes of trees, we can formulate the following \begin{theorem}\label{fa} Let $k>1$. If $G$ is a tree (or forest) and there exists a $k$-dominating independent set in $G$, then it is unique. That is, $\hbox{\rm mi}_k(n, \mathcal{T})=1$. \end{theorem} \begin{proof} Assume to the contrary that there exists a forest $T$ with (at least) two different $k$-dominating independent sets $D_1$ and $D_2$; moreover, let $T$ be a minimal counterexample regarding the number of vertices and edges. We introduce the notations $L_T$ for the set of leaves in $T$, $Q_T:=N(L_T)$ for the neighbors of the leaves, and $R_T:=V\setminus (L_T \cup Q_T)$ for the rest of the vertices. The minimality condition immediately implies that $T$ is a tree. Furthermore, $L_T \subseteq D_i$ by the $k$-domination and $Q_T \cap D_i = \emptyset$ by the independence of the sets $D_i$. Consequently, the graph spanned by $R_T$ has at least two different $k$-dominating independent sets, and they can be extended in the same way to $Q_T$ and $L_T$, a contradiction.
Finally, observe that the leaves of a star $S_{k+1}$ form a $k$-dominating independent set in $S_{k+1}$. \end{proof} An alternative way to see this is a consequence of the following simple Algorithm \ref{alga}, which either finds the unique $k$-dominating independent set in the tree, or proves that none exists. \begin{algo}\label{alga} Let $D$ and $D'$ be empty sets in the beginning. $\bullet$ Put all the isolated vertices into $D$. Cluster the vertices of the forest $T$ into $L_T, Q_T, R_T$ as in the proof of Theorem \ref{fa}.\\ $\bullet$ If $|Q_T|=0$ but $|L_T|>1$, stop with the answer 'no $k$-dominating independent set'.\\ $\bullet$ If $|Q_T|=0$ and $|L_T|=1$, put $w\in L_T$ into $D$ and stop with the answer '$D$ is the $k$-dominating independent set'.\\ $\bullet$ If $|Q_T|=|L_T|=0$, stop with the answer '$D$ is the $k$-dominating independent set'.\\ $\bullet$ Else choose a vertex $q$ from $Q_T$ which has at most $1$ neighbor from $R_T \cup Q_T$. Note that such a vertex clearly exists since the graph $T \setminus L_T$ is a tree, whose leaf set is a subset of $Q_T$.\\ $\bullet$ $\bullet$ If $|N(q)\cap L_T|\geq k$, then put $q$ into $D'$ and the vertices of $N(q)\cap L_T$ into $D$, and finally delete \mbox{$\{q\}\cup (N(q)\cap L_T)$} from $T$.\\ $\bullet$ $\bullet$ If $|N(q)\cap L_T|=k-1$ and $|N(q)\cap(R_T \cup Q_T)|=1$, then put $q$ into $D'$ and the vertices of $N(q)\cap L_T$ into $D$, delete \mbox{$\{q\}\cup (N(q)\cap L_T)$} from $T$, and split the edges incident to the vertex in $N(q)\cap(R_T \cup Q_T)$ in the remaining graph, using copies of this vertex as endvertices.\\ $\bullet$ $\bullet$ Else, stop with the answer 'no $k$-dominating independent set'.\\ Iterate. \end{algo} It is easy to see that if a vertex is duplicated, then its copies will be leaves in the remaining graph; hence the vertex will be part of $D$ if there exists a suitable $k$-dominating independent set. It is also clear that $D$ will be an independent set throughout the algorithm.
At the same time every vertex in $D'$ will have at least $k$ neighbors from $D$. Indeed, if a vertex $q$ is put into $D'$ when $|N(q)\cap L_T|=k-1$, then it is guaranteed that its last neighbor will be in $D$ as well. Thus $D$ will be a $k$-dominating independent set. Finally, the algorithm stops with a 'no' answer exactly when there is evidence for the non-existence. \medskip \section{$k$-DISes --- constructions and lower bounds} In this section we prove the lower bounds of Theorem \ref{fo1} and Theorem \ref{fo2} by exhibiting suitable graphs. Let $G$ be a complete bipartite graph with equal part sizes, or a Tur\'an graph $T_{p^2,p}$ on $p^2$ vertices and $p$ equal partition classes. \begin{proposition}\label{constr} $\zeta_k(K_{t,t})=\sqrt[2t]{2}$ if $k\leq t$. $\zeta_k(T_{p^2,p})=\sqrt[p^2]{p}$ if $k\leq p$. \end{proposition} Putting together the first statement of Proposition \ref{constr} with $k=t$ and Theorem \ref{alap0}, we get the lower bound of Theorem \ref{fo2}: $\zeta_k(K_{k,k})=\sqrt[2k]{2} \leq \lim \zeta_k(n)$. \\ Note that for $k=3$, the Tur\'an graph provides a better estimate via Proposition \ref{constr}: $\zeta_k(T_{3\cdot 3,3})=\sqrt[9]{3} \leq \lim \zeta_3(n)$. Here $\sqrt[9]{3}\approx 1.13$, while the bipartite graph $K_{3,3}$ would yield only $\sqrt[6]{2}\approx 1.122$. For $k=4$, $\zeta_4(T_{4\cdot 4,4})= \zeta_4(K_{4,4})$. \medskip \noindent Kneser graphs also provide many $k$-DISes: \begin{proposition}\label{Kneser} Let $G=KN(n,t)$ denote the Kneser graph whose vertices correspond to the $t$-element subsets of a set of $n$ elements, and where two vertices are adjacent if and only if the two corresponding sets are disjoint. Suppose $t< n/2$. Then $G$ contains $n$ $k$-DISes for $k=\binom{n-t-1}{t-1}$.
\end{proposition} \begin{proof} Clearly the largest independent set in $G$ is of size $\binom{n-1}{t-1}$ according to the theorem of Erd\H os, Ko and Rado \cite{EKR}, and the corresponding $t$-element subsets are those which contain a fixed element $i\in \{1,2,\ldots, n\}$. Thus the proposition indeed follows, since a $t$-element subset which does not contain $i$ is disjoint from exactly $k=\binom{n-t-1}{t-1}$ $t$-subsets which contain $i$, while fewer vertices in a maximal independent set do not provide enough edges to $k$-dominate the rest of the vertices. \end{proof} Now we turn our attention to the case $k=2$. \begin{claim} $\hbox{\rm mi}_2(3)=1$, $\hbox{\rm mi}_2(4)=2$, $\hbox{\rm mi}_2(5)=2$, $\hbox{\rm mi}_2(6)=3$, $\hbox{\rm mi}_2(7)=3$, $\hbox{\rm mi}_2(8)=4$, $\hbox{\rm mi}_2(9)=6$, $\hbox{\rm mi}_2(16)\geq 24$. \end{claim} \begin{proof} It is easy to check that the number of $2$-DISes in $P_3$, $K_{2,2}$, $K_{2,2,2}$, $K_{2,2,2,2}$, $K_3\square K_3$ and $K_4\square K_4$ is $1, 2, 3, 4, 6$, and $24$, respectively. It is also easy to check that joining a new vertex to every vertex of a graph does not change the number of $2$-DISes. Finally, it can be shown by case analysis that these graphs are indeed extremal. \end{proof} Concerning $2$-DISes, product graphs seem to provide the best lower bound, a much better one than those provided by Proposition \ref{constr}. \begin{construction} \label{pelda1} Let $n$ be large enough, and let $$G_n= \alpha K_3\square K_3 +\beta K_4\square K_4, \mbox{ \ with \ } \alpha, \beta \in \mathbb{N}, \beta\leq 8.$$ (Observe that $\alpha$ and $\beta$ are uniquely determined.) \end{construction} In view of Observation \ref{obsit} this implies \begin{proposition}\label{order} $${\hbox{\rm mi}_2(n)}= \Omega({6}^{n/9}) \ \mbox{ and hence } \sqrt[9]{6}\leq \lim\zeta_2(n).
$$ \end{proposition} We conjecture that in fact $\sqrt[9]{6}= \lim\zeta_2(n)$ holds; moreover, the graphs listed in Construction \ref{pelda1} are extremal graphs, that is, if $n$ is large enough then $\hbox{\rm mi}_2(n)=\hbox{\rm mi}_2(G)$ holds for an $n$-vertex graph only if $G$ is a graph from Construction \ref{pelda1}. Concerning $k=3$, we conjecture that the Tur\'an graph $T_{3\cdot 3,3}$ provides the order of magnitude, as $\sqrt[9]{3}\leq \lim\zeta_3(n)$. Finally, in general we conjecture that if $k$ is large enough, then $\lim\zeta_k(n)=\zeta_k(K_{k,k})=\sqrt[2k]{2}$. \begin{nota} Let $G^*$ denote the graph constructed from $G$ by adding a new vertex to its vertex set and joining it to all of the vertices of $G$. \end{nota} \noindent Applying the observation $\hbox{\rm mi}_k(v(G), G) = \hbox{\rm mi}_k(v(G)+1, G^*)$, we have \begin{corollary} $${\hbox{\rm mi}_2(n, \mathcal{F}_{con})}= \Omega({6}^{n/9}).$$ \end{corollary} \subsection{ Connections to MDS codes} We begin with some preliminaries about coding theory and MDS codes; for more details we refer to \cite{code}. \begin{definition} Let $C\subseteq \mathbb{F}_q^n$ be a set of codewords in the vector space $\mathbb{F}_q^n$. Defining the Hamming metric $d(*,*)$ on $\mathbb{F}_q^n$, the minimal distance of $C$ is $d(C)=\min\{d(c, c') : c, c'\in C, c\neq c'\}$. A code $C\subseteq \mathbb{F}_q^n$ is a $q$-ary $(n,M,d)_q$ code if the length (the dimension of the ambient vector space) is $n$, $|C|=M$, and the minimal distance is $d$. A code $C$ is linear if it is a subspace of the vector space $\mathbb{F}_q^n$. \end{definition} The Singleton bound for a $q$-ary $(n,M,d)_q$ code states that $|C|\leq q^{n-d+1}$. If equality holds, then $C$ is said to be a maximum distance separable code, or simply, an MDS code. (Linear) MDS codes are extensively studied, and have strong connections to finite geometries, namely, to the existence of certain arcs in multidimensional projective spaces, see \cite{code}.
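As a quick sanity check of these definitions, the $d=2$ MDS codes of very short length can be enumerated by brute force. The sketch below (our illustration, not part of the paper, and feasible only for tiny parameters) also counts the linear codes among them; for $q=3$, $n=2$ it reproduces the $(q-1)^{n-1}=2$ linear codes of Proposition \ref{enum}.

```python
from itertools import combinations, product

def hamming(u, v):
    """Hamming distance between two tuples."""
    return sum(a != b for a, b in zip(u, v))

def count_mds_codes(q, n):
    """Count all (n, q^(n-1), 2)_q MDS codes, and the linear ones among
    them, by brute force over subsets of F_q^n.  For prime q, closure
    under coordinatewise addition mod q suffices to test linearity."""
    space = list(product(range(q), repeat=n))
    size = q ** (n - 1)  # Singleton bound with equality for d = 2
    total = linear = 0
    for cand in combinations(space, size):
        # MDS with d = 2: all pairwise Hamming distances are >= 2
        if all(hamming(u, v) >= 2 for u, v in combinations(cand, 2)):
            total += 1
            cs = set(cand)
            if all(tuple((a + b) % q for a, b in zip(u, v)) in cs
                   for u in cs for v in cs):
                linear += 1
    return total, linear
```

For $q=3$, $n=2$ one finds $6$ such codes in total, $2$ of them linear; for $q=2$, $n=2$ the counts are $2$ and $1$.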
The problem of determining the number of linear MDS codes in $\mathbb{F}_q^n$ of minimal distance $d$ was essentially posed by Segre, and it has been determined so far only in some special cases \cite{mdses1, mdses2}. We highlight here only \begin{proposition}\label{enum} The number of linear $q$-ary $(n,M,2)_q$ MDS codes is $(q-1)^{n-1}$. \end{proposition} \noindent Much less is known about the number of all MDS codes in $\mathbb{F}_q^n$ of minimal distance $d$. \bigskip \noindent Now we return to Construction \ref{pelda1}. One may suggest that similar graph products with multiple terms yield bounds on $\zeta_k(n)$.\\ Consider $t$ disjoint copies of $(K_3)^k$. The set $V( (K_3)^k)$ can be represented by vectors over $\mathbb{F}_3$ of length $k$, where two of them are adjacent if and only if they differ in exactly $1$ coordinate. Hence a subset $D$ of $V( (K_3)^k)$ is a $k$-DIS if and only if any prescribed values of $k-1$ coordinates determine exactly one element of $D$. In other words, $D$ is a set of $3^{k-1}$ vectors from $\mathbb{F}_3^k$, with minimal Hamming distance $2$. Consequently, $D$ is an MDS code, and the number of $k$-DISes in $ (K_3)^k$ is the number of (not necessarily linear) $(k, 3^{k-1}, 2)_3$ MDS codes. \begin{theorem}\label{mds} The number of $(k, 3^{k-1}, 2)_3$ MDS codes is $3\cdot2^{k-1}$. \end{theorem} \begin{proof} We proceed by induction on $k$. For $k=1$, the statement clearly holds.\\ First observe that Proposition \ref{enum} provides $3\cdot2^{k-1}$ general $(k, 3^{k-1}, 2)_3$ MDS codes. Indeed, any linear MDS code $C$ contains the all-zero vector, and the translations $C+(0,\cdots, 0,1)$ and $C-(0,\cdots, 0,1)$ yield suitable new codes. \\ Hence it is enough to prove that the number of $(k+1, 3^{k}, 2)_3$ MDS codes is at most twice the number of $(k, 3^{k-1}, 2)_3$ MDS codes if $k\geq 1$. Observe that if one prescribes the values of arbitrary $k$ coordinates in a $(k+1, q^{k}, 2)_q$ MDS code, then exactly one codeword will fulfill the condition.
Consider a $(k+1, 3^{k}, 2)_3$ MDS code. Observe that the set of codewords having zero as first coordinate corresponds to a $(k, 3^{k-1}, 2)_3$ MDS code. Indeed, deleting the first coordinate yields a set of $3^{k-1}$ codewords of length $k$, while the minimal distance does not change. Finally we prove that such a $(k, 3^{k-1}, 2)_3$ MDS code can be obtained from at most two $(k+1, 3^{k}, 2)_3$ MDS codes. To this end, delete the first coordinate of the codewords, and omit those codewords which had $0$ on the first coordinate. Thus we get $2\cdot 3^{k-1}$ vectors in $\mathbb{F}_3^k$. Assign a graph $G$ to this vector set by connecting every pair of vectors which are at Hamming distance $1$. The number of proper two-colorings of this graph by colors '1' and '2' equals the number of extensions of this vector set by an appropriate first coordinate to get a $(k+1, 3^{k}, 2)_3$ MDS code together with the omitted codewords. Notice that the number of proper two-colorings is at most two for any connected graph. Thus Lemma \ref{finish} finishes the proof. \end{proof} \begin{lemma}\label{finish} $G$, the graph assigned to the codewords of nonzero first coordinate, is connected. \end{lemma} \begin{proof} We argue by contradiction. Assume that $ (v_1, v_2, \ldots v_{k+1})$ and $(w_1, w_2, \ldots, w_{k+1})$ are codewords, $v_1\neq 0 \neq w_1$, furthermore $\textbf{v}=(v_2, \ldots v_{k+1})$ and $\textbf{w}=(w_2, \ldots, w_{k+1})$ are in different components of $G$ and their Hamming distance is minimal w.r.t. pairs of codewords taken from different components of $G$. Note that $\textbf{v}$ and $\textbf{w}$ must differ in at least two coordinates according to our assumption, hence $k\geq 2$. W.l.o.g. we may assume that $v_2\neq w_2$. Let us define $z_2$ by $\{v_2, w_2, z_2\}=\{1,2, 0\}$.
Since $\textbf{v}$ and $\textbf{w}$ were at the smallest Hamming distance, neither $(w_2, v_3, v_4, \ldots v_{k+1})$ nor $(v_2, w_3, w_4, \ldots, w_{k+1})$ can be a vertex of $G$, since either would yield a smaller Hamming distance. But any $k$ prescribed coordinates can be extended to get a codeword in a $(k+1, 3^{k}, 2)_3$ MDS code, thus $(0, w_2, v_3, v_4, \ldots v_{k+1}), (0, v_2, w_3, w_4, \ldots, w_{k+1}) \in C$. Hence $(0, z_2, v_3, v_4, \ldots v_{k+1})$ and $(0, z_2, w_3, w_4, \ldots, w_{k+1})$ do not belong to $C$, which implies that $(z_2, v_3, v_4, \ldots v_{k+1})$ and $(z_2, w_3, w_4, \ldots, w_{k+1})$ are in the vertex set of $G$. Observe that they have more common coordinates than $\textbf{v}$ and $\textbf{w}$ had while they still belong to different components, which is a contradiction. \end{proof} \begin{remark} The proof implies that the graph assigned to the codewords of nonzero first coordinate is bipartite as well, and all $(k, 3^{k-1}, 2)_3$ MDS codes are the translates of linear $(k, 3^{k-1}, 2)_3$ MDS codes. \end{remark} \begin{corollary} $\zeta_k{((K_3)^k)}= \sqrt[3^k]{3\cdot2^{k-1}}$. If $k>2$, this is less than $\zeta_k{(K_{k,k})}= \sqrt[2k]{2}$. \end{corollary} \bigskip \subsection{ Connections to finite geometries} In this subsection we study constructions coming from finite geometries. The first reason to do this is the fact that many extremal structures are provided by geometric constructions in general (see \cite{FS}). In our case they provide a graph family with a large number of $k$-DISes. The second reason is that these families have remarkable connections to many interesting subfields of projective geometry, including $m$-fold blocking sets, arcs and tangent-free sets. \begin{definition} Let $PG(2,q)$ denote a finite projective plane over $\mathbb{F}_q$, with point set $\mathcal{P}$ and line set $\mathcal{L}$. Let $G(\mathcal{P, L})$ be the (bipartite) point-line incidence graph of the geometry.
Note that $G(\mathcal{P, L})$ is a $(q+1)$-regular graph on $N=2(q^2+q+1)$ vertices. \end{definition} \begin{definition} An $m$-fold blocking set $B$ in a projective plane is a set of points such that each line contains at least $m$ points of $B$ and some line contains exactly $m$ points of $B$. \end{definition} \begin{definition} In a finite projective plane of order $q$, a $\{K;t\}$-arc is a nonempty proper subset $\mathcal{K}$ of $K$ points of the plane such that every line intersects $\mathcal{K}$ in at most $t$ points and there exists a set of $t$ collinear points in $\mathcal{K}$. A $\{K;2\}$-arc is simply called a $K$-arc. Note that $\{K;t\}$-arcs and multiple blocking sets are complements of each other in a projective plane, that is, the complement of a $\{K;t\}$-arc is a $(q+1-t)$-fold blocking set. A $\{K;t\}$-arc is called maximal if $K=(q+1)t-q$, that is, in the case when the size attains the possible maximum \cite{cossu}. \end{definition} It is well known that every line intersects a maximal $\{K;t\}$-arc $\mathcal{K}$ in either $0$ or $t$ points \cite{cossu}. Denniston showed \cite{Denniston} that maximal $\{K;t\}$-arcs exist in projective planes $PG(2,q)$ of even order for all divisors $t$ of $q$. On the other hand, Ball, Blokhuis and Mazzocca proved that no maximal $\{K;t\}$-arcs exist in projective planes of odd order \cite{BBM}. \begin{construction} \label{hyper} Consider a hyperoval $\mathcal{H}$ in $PG(2,q)$, $q>2$ even, that is, a maximal arc of $q+2$ points. Let the set $D\subseteq V(G)$ consist of the lines skew to $\mathcal{H}$ and the points of $\mathcal{H}$. \end{construction} \begin{claim} Construction \ref{hyper} provides a $2$-DIS for any hyperoval of the projective geometry. \end{claim} \begin{proof} Any line intersects a hyperoval in $0$ or $2$ points, thus the secants of the hyperoval are dominated by exactly $2$ vertices of $D\cap\mathcal{P}$.
The points of $\mathcal{P}\setminus \mathcal{H}$ are also dominated by at least $2$ vertices of $D\cap\mathcal{L}$, since exactly $q+1-\frac{q+2}{2}=\frac{q}{2}$ skew lines pass through any external point of $\mathcal{H}$. Finally, it is clear that the set of skew lines and the vertices of $\mathcal{H}$ form an independent set in $G(\mathcal{P, L})$. \end{proof} There exist other suitable $2$-dominating (or $k$-dominating) independent sets in $G(\mathcal{P, L})$.\\ Let us take a point set $\mathcal{Q}\subseteq \mathcal{P}$ and the lines skew to $\mathcal{Q}$ from $\mathcal{L}$. This provides a $k$-dominating independent set of $G(\mathcal{P, L})$ if and only if the following conditions hold: \begin{itemize} \item[(1)] Any line intersects $\mathcal{Q}$ in $0$ or at least $k$ points, \item[(2)] There exist at least $k$ skew lines to $\mathcal{Q}$ through any point in $\mathcal{P}\setminus\mathcal{Q}$. \end{itemize} \begin{corollary}\label{kicsi} If $\mathcal{Q}$ is a set without tangents on at most $2q-2$ points, the conditions above hold for $k=2$. \end{corollary} \noindent Indeed, $(1)$ holds by definition, while $(2)$ is easy to check: if $l$ lines through a given point in $\mathcal{P}\setminus\mathcal{Q}$ intersect $\mathcal{Q}$, then $|\mathcal{Q}|\geq 2l$ must hold. \medskip Besides hyperovals in planes of even order, various families of sets are known which fulfill the conditions (1) and (2). First, consider the generalization of Construction \ref{hyper}. \begin{construction} \label{k-arc} Consider a maximal $\{K;t\}$-arc $\mathcal{K}$ in $PG(2,q)$, $q$ even. Let $G(\mathcal{P, L})$ be the point-line incidence graph of the geometry, and let the set $D\subseteq V(G)$ consist of the lines skew to $\mathcal{K}$ and the points of $\mathcal{K}$. \end{construction} \begin{claim} Construction \ref{k-arc} provides a $t$-dominating independent set for any maximal $\{K;t\}$-arc of the projective geometry if $t\leq \sqrt{q}$.
\end{claim} \noindent Indeed, $(1)$ holds by definition. Concerning $(2)$, at most $q+1-t$ lines can intersect $\mathcal{K}$ through a given point in $\mathcal{P}\setminus\mathcal{K}$, thus $(q+1-t)t\geq |\mathcal{K}|=t(q+1)-q \Leftrightarrow q\geq t^2$. \smallskip The so-called $(q+t, t)$-arcs of type $(0,2,t)$ were investigated by Korchm\'aros, Mazzocca, G\'acs and Weiner \cite{korchmaros, gacs}. These are point sets of $q+t$ points in $PG(2,q)$ such that every line meets them in either $0, 2$ or $t$ points, $2<t<q$. It is easy to see that a necessary condition for their existence is that $t$ divides $q$ and $q$ is even. In \cite{korchmaros} the authors construct an infinite series of examples whenever the field $GF(q/t)$ is a subfield of $GF(q)$. G\'acs and Weiner \cite{gacs} added further geometric and algebraic constructions; moreover, applying a projecting method to maximal $\{2^s(q+1)-q, 2^s\}$-arcs, they presented $(q^{h-1}(2^s(q+1)-q), q^{h-1}2^s)$-arcs of type $(0,2^s,2^sq^{h-1})$ with $h\in \mathbb{Z}^+$. Observe that these sets are examples for $k$-DISes with $k>2$ as well. So far, we have seen tangent-free sets only if $q$ is even. For any odd prime power $q>5$, Blokhuis, Seres and Wilbrink presented a suitable set of $2q-2$ points arising from the symmetric difference of two conics \cite{BSW}, which provides a $2$-DIS via Corollary \ref{kicsi}. For $q$ prime, no example is known having fewer vertices. If $q=p^h$, $h>1$, Lavrauw, Storme and Van de Voorde constructed a set without tangents of size $q+(q-p)/(p-1)<2q-2$ \cite{LSV}. Up to now, this is the smallest known tangent-free point set for odd $q$. The main idea was to apply the following result. Consider a set $\mathcal{S}$ of $q$ affine points in $PG(2, q)$, $p > 2$, and let $D$ be the set of determined directions of $\mathcal{S}$, lying on the ideal line. If $|D| < (q + 3)/2$, then $\mathcal{S}$ together with the complement of $D$ w.r.t. the ideal line is a set without tangents.
This was observed and applied by Blokhuis, Brouwer and Sz\H onyi \cite{BBS}, showing a set without tangents of size $2q-q/p$. \section{Proof of the upper bounds of Theorem \ref{fo1} and \ref{fo2}} In Section 4 we proved a lower bound on $\hbox{\rm mi}_2(n)$ in Proposition \ref{order}, which provides $\hbox{\rm mi}_2(n)=\Omega(1.22^n)$. This section is devoted to the results on upper bounds. Following the idea of F\"uredi \cite{furedi}, the approach is inductive. We begin with a general upper bound which highlights the key concept. \begin{proposition}\label{upper1} Let $\alpha_k:=\max_{d\in \mathbb{Z}^+} \{ \sqrt[d+1]{\frac{k+d}{k}}\}$. Then $\hbox{\rm mi}_k(n)= O(\alpha_k^n)$. \end{proposition} \begin{proof} Let $\delta$ denote the minimal degree in a graph $G$, and let $v$ be a vertex of minimal degree in $G$. Any $k$-dominating independent set of $G$ contains either $v$ and none of $N(v)$, or at least $k$ vertices of $N(v)$. The number of $k$-DISes containing $v$ is evidently at most $\hbox{\rm mi}_k(n-\delta-1)$, while the number of $k$-DISes not containing $v$ is at most $\frac{\delta}{k}\hbox{\rm mi}_k(n-\delta-1)$. Indeed, any $w\in N(v)$ appears in at most $\hbox{\rm mi}_k(n-\delta-1)$ $k$-DISes, and the $k$-dominating property concerning the vertex $v$ implies that we counted any such $k$-dominating independent set at least $k$ times. Hence $\hbox{\rm mi}_k(n)\leq (1+\frac{\delta}{k})\hbox{\rm mi}_k(n-\delta-1)$, and the statement follows. \end{proof} \begin{remark} Comparing this result with Theorem \ref{alap}, Proposition \ref{upper1} determines the right order of magnitude in the case $k=1$. \end{remark} \begin{corollary} $\hbox{\rm mi}_2(n)< \sqrt[3]{2}^n \ \ \mbox{where} \ \ \sqrt[3]{2}\approx 1.26$. \end{corollary} In order to prove the upper bound of Theorem \ref{fo1}, we refine the above result. The main idea is to improve the bounds if the minimal degree is less than $4$.
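The quantity $\alpha_k$ of Proposition \ref{upper1} is easy to evaluate numerically; a minimal sketch (our illustration, not part of the paper):

```python
def alpha(k, d_max=200):
    """Return (d*, alpha_k): the maximizing integer degree d and the
    value alpha_k = max over positive integers d of
    ((k+d)/k)**(1/(d+1)), the base of the bound mi_k(n) = O(alpha_k^n).
    The expression is eventually decreasing in d, so a moderate search
    range suffices."""
    best_d, best = 1, 0.0
    for d in range(1, d_max + 1):
        val = ((k + d) / k) ** (1.0 / (d + 1))
        if val > best:
            best_d, best = d, val
    return best_d, best
```

For $k=1$ this gives $d=2$ and $\alpha_1=3^{1/3}\approx 1.4422$, consistent with the remark above, and for $k=2$ it gives $d=2$ and $\alpha_2=2^{1/3}\approx 1.26$, matching the corollary.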
\begin{theorem}\label{upperb2} $\hbox{\rm mi}_2(n)<\sqrt[5]{3}^n \ \ \mbox{where} \ \ \sqrt[5]{3}\approx 1.2457.$ \end{theorem} \begin{proof} Define $\tau:=\sqrt[5]{3}$. We prove by induction. Note that $\hbox{\rm mi}_2(0)\leq \tau^0$ and $\hbox{\rm mi}_2(1)\leq \tau^1$ trivially hold, and assume that $\hbox{\rm mi}_2(i)\leq \tau^i$ holds for $i=0,\ldots , n-1$. Notice that the deletion of possible isolated vertices does not affect the number of $2$-DISes. Assume first that $\delta=1$ in $G$. Consequently, $\hbox{\rm mi}_2(n, G)\leq \hbox{\rm mi}_2(n-2)\leq \tau^{n-2}\leq \tau^{n}$, since vertices of degree $1$ must belong to the $2$-DIS, while their neighbors cannot. Next, suppose that $d(v)=\delta=2$. This implies \begin{equation}\label{ketto} \hbox{\rm mi}_2(n, G)\leq \hbox{\rm mi}_2(n-3)+\hbox{\rm mi}_2(n-4)\end{equation} since the $2$-DISes are either formed by $v$ and a $2$-DIS in $G\setminus N[v]$ or formed by $w_1, w_2 \in N(v)$ and a $2$-DIS in $G\setminus ~( N[w_1]\cup~N[w_2])$. Let $\tau_1$ be the unique positive root of $P(x)=x^4-x-1$. ($\tau_1\approx 1.22$.) Then inequality \eqref{ketto} implies that $\hbox{\rm mi}_2(n, G)\leq \hbox{\rm mi}_2(n-3)+\hbox{\rm mi}_2(n-4)\leq \tau^{n-3}+\tau^{n-4}<\tau^n$ as $\tau_1< \tau$. Let us suppose $d(v)=\delta=3$. If $|N(w_i)\cup~N(w_j)|\geq 5$ for all pairs of vertices $w_i\neq w_j\in N(v)$ and $N(v)$ is an independent set, then \begin{equation}\label{harom} \hbox{\rm mi}_2(n, G)\leq \hbox{\rm mi}_2(n-4)+\hbox{\rm mi}_2(n-7) + 2\hbox{\rm mi}_2(n-8).\end{equation} Indeed, the $2$-DISes are either formed by $v$ and a $2$-DIS in $G\setminus N[v]$, or formed by $w_1, w_2 \in N(v)$ and a $2$-DIS in $G\setminus ~( N[w_1]\cup~N[w_2])$, or formed by $w_1, w_3 \in N(v)$ and a $2$-DIS in $G\setminus ~( N[w_1]\cup~N[w_3]\cup \{w_2\})$, or formed by $w_2, w_3 \in N(v)$ and a $2$-DIS in $G\setminus ~( N[w_2]\cup~N[w_3]\cup \{w_1\})$. Let $\tau_2$ be the unique positive root of $P(x)=x^8-x^4-x-2$.
($\tau_2\approx 1.241$.) Then inequality \eqref{harom} implies that $\hbox{\rm mi}_2(n, G)\leq~ \hbox{\rm mi}_2(n-4)+~\hbox{\rm mi}_2(n-~7)+~2\hbox{\rm mi}_2(n-~8)\leq \tau^{n-4}+\tau^{n-7}+2\tau^{n-8}<\tau^n$ as $\tau_2< \tau$.\\ What if $|N(w_i)\cup~N(w_j)|\geq 5$ does not hold for some $w_i\neq w_j\in N(v)$? Then every $2$-DIS which does not contain $v$ must contain both $w_i$ and $w_j$. Indeed, one of them must be in the set $D$ to dominate $v$, but then the other one cannot be $2$-dominated, thus it must be in $D$ as well. Hence we can bound the number of $2$-DISes by $\hbox{\rm mi}_2(n-4)+~\hbox{\rm mi}_2(n-~5)$, and the inequality $\hbox{\rm mi}_2(n, G)<\tau^n$ follows easily. \\ Finally, we have to handle the case when $|N(w_i)\cup~N(w_j)|\geq 5$ holds for every $w_i, w_j\in N(v)$ but $N(v)$ induces at least one edge. W.l.o.g. $w_1w_2\in E(G)$; then the $2$-DISes containing both $w_1$ and $w_2$, counted in inequality \eqref{harom}, cannot occur, which yields \begin{equation}\label{negy} \hbox{\rm mi}_2(n, G)\leq \hbox{\rm mi}_2(n-4)+2\hbox{\rm mi}_2(n-7)\end{equation} to hold in this case. Observing that the unique positive root $\tau_3$ of $x^7-x^3-2$ is less than $\tau$, we conclude that $\hbox{\rm mi}_2(n, G)<\tau^n$ holds again. Lastly, applying the proof of Proposition \ref{upper1} to $\delta\geq 4$, we get $$\hbox{\rm mi}_2(n)\leq \left(\frac{2+\delta}{2}\right)\hbox{\rm mi}_2(n-\delta-1).$$ The fact $$\max_{d\in\mathbb{Z}, d\geq 4} \left\{ \sqrt[d+1]{\frac{2+d}{2}}\right\}= \sqrt[5]{\frac{6}{2}}=\tau $$ thus completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{fo2}, upper bound] Finally, to obtain the upper bound in Theorem \ref{fo2}, we only have to observe two facts. On the one hand, we can assume that $\delta\geq k$ holds for the minimal degree of $G$, similarly to the proof of Theorem \ref{upperb2}. Indeed, otherwise we would get $\hbox{\rm mi}_k(n, G)\leq \hbox{\rm mi}_k(n-\delta)$.
On the other hand, easy computation shows that $\sqrt[d+1]{\frac{k+d}{k}}$ is a monotone decreasing function of $d$ for $d\geq k$, if $k$ is fixed. Thus $\hbox{\rm mi}_k(n, G)\leq 2\cdot \hbox{\rm mi}_k(n-k-1)$, and the upper bound follows. \end{proof} \section{Concluding remarks and open problems} In this final section we gather some problems and conjectures related to the discussed results. \begin{problem} \label{mdscode} Determine or bound the number of all MDS codes, especially the number of $q$-ary MDS codes of type $(n,M,2)_q$. \end{problem} \begin{remark} This problem is related to the number of proper $(q-1)$-colorings of certain Hamming graphs, in view of the proof of Theorem \ref{mds}. Note that this problem is wide open even if we consider linear MDS codes; on the other hand, $q$ is not required to be a prime power. \end{remark} \begin{problem} \label{vegessik} Determine the number of maximal independent sets of the incidence graph $G(\mathcal{P, L})$ of the projective geometry $PG(2,q)$ in terms of the number of vertices. \end{problem} \begin{conjecture} For every $k$, there exists a graph $G$ for which $\zeta_k(G)=\lim \zeta_k(n)$ holds. \end{conjecture} \begin{conjecture}($\sqrt[9]{6}$-conjecture) The maximal number of $2$-DISes in $n$-vertex graphs is $\Theta(\sqrt[9]{6}^n)$. That is, $\zeta_2(n)=\zeta_2(K_3\square K_3)$. Moreover, Construction \ref{pelda1} provides the extremal graphs for the function $\hbox{\rm mi}_2(n)$ if $n$ is large enough. \end{conjecture} \begin{conjecture} The maximal number of $k$-DISes in $n$-vertex graphs is attained for the disjoint union of $K_{k,k}$ graphs for $k>3$ if $2k|n$. \end{conjecture} \begin{problem} Describe large graph families $\mathcal{F}$ for which \begin{itemize} \item $\hbox{\rm mi}_k(n, \mathcal{F})\leq 1$, \item $\hbox{\rm mi}_k(n, \mathcal{F})$ is bounded by a polynomial of $n$, \item $\lim\zeta_k(n, \mathcal{F})=1$.
\end{itemize} \end{problem} This problem is motivated by the results of Farber, Hujter and Tuza \cite{HT2}. \begin{conjecture} $\hbox{\rm mi}_k(n, \mathcal{F})$ is not bounded by a polynomial of $n$ for the graph family of incidence graphs of projective planes. \end{conjecture} \bigskip \noindent {\bf Acknowledgments} I would like to thank Zolt\'an F\"uredi, Tam\'as H\'eger, Mikl\'os Simonovits, Tam\'as Sz\H{o}nyi and Zsolt Tuza for helpful discussions on the topics of this paper. \bigskip
\section{Introduction} \label{sec:intro} Recent observations of exoplanets have revealed a large number of close-in low-mass planets (e.g., \citealt{schneider_etal11}; \citealt{wright_etal11}). As of January 2015, 337 systems harbor 839 planets with masses $M < 100 M_\oplus$ (or with radii $R < 10 R_\oplus$) and semimajor axes $a < 1 {\rm AU}$ (or with orbital periods $P < 200$ days). We define these as ``close-in super-Earths.'' They have a semimajor-axis distribution centered on 0.1 AU. Super-Earths in each system are generally confined within a few tenths of an AU. Although some of these close-in super-Earths are in or near first-order mean motion resonances (especially in 3:2 resonances), a large number of close-in super-Earths are not in mean motion resonances (e.g., \citealt{mayor_etal09}; \citealt{lissauer_etal11b}). Typical orbital separations between these planets are $10-30 r_{\rm H}$, where $r_{\rm H}$ is the mutual Hill radius, which is similar to the orbital separations between the solar system terrestrial planets. Period ratios of adjacent pairs lie between the 4:3 and 3:1 resonances. It is estimated that a large number of super-Earths have small eccentricities $e\sim 0.01-0.1$, while some of them can have high eccentricities up to $e\sim 0.5$ (e.g., \citealt{moorhead_etal11}). The mutual inclinations between planetary orbits could be estimated for a fraction of the transiting systems in the Kepler catalog and appear to be low, with an average of $i\sim 0.03$ \citep{fabrycky_etal14}. \citet{hansen_murray12,hansen_murray13} present \textit{N}-body simulations of the in situ formation of close-in super-Earths from embryos that are placed between 0.05 AU and 1 AU. To account for the masses of the known planetary systems, they had to assume that up to 100 Earth masses of solids existed in the disk within 1 AU, implying a surface density of solids much higher than for the minimum mass solar nebula (MMSN) (\citealt{weidenschilling77,hayashi81}).
Assuming a nominal gas-to-solid density ratio of 100, this would imply a very massive protoplanetary disk that would probably be gravitationally unstable. However, \citet{chatterjee_tan14,chatterjee_tan15} propose that the disk may be enriched in solids relative to the gas thanks to the inward migration of dust grains, pebbles, and planetesimals. Thus, it may be legitimate to assume a very high surface density of solids embedded in the inner protoplanetary disk with a mass of gas comparable to gas in the MMSN model. The simulations of \citet{hansen_murray12,hansen_murray13}, nevertheless, are quite simplistic. They start from a population of protoplanets of high masses (sometimes multiple Earth masses), which are assumed to have already completed the oligarchic growth process \citep{kokubo_ida98}. No planetesimals are considered, and the effects of the gas, namely the migration and the eccentricity and inclination damping of the protoplanets' orbits, are not taken into account. With these assumptions, their simulations suggest that in situ accretion without gas can explain the distributions of orbital periods and eccentricities of observed super-Earths. However, their simulations did not reproduce the distribution of period ratios between adjacent super-Earths, because they had a deficit of close pairs and resonant pairs (see Fig.~15 of \citealt{hansen_murray13}). Interestingly, the same deficit was found by \citet{cossou_etal14} in a different model accounting for the migration of planetary embryos from several AU away. On the other hand, a model by \citet{ogihara_ida09} with gas drag and type I migration led to more widely separated, non-resonant pairs when type I migration was reduced by about a factor of 100.
In this paper, we revisit the process of in situ formation \citep{hansen_murray12,hansen_murray13} with more complete simulations that start from a population of small planetary embryos (i.e., Mars-mass) and planetesimals, each component carrying 50\% of the solid mass in the disk. These initial conditions are typical of terrestrial-planet simulations (e.g., \citealt{obrien_etal06}). Thus, we do not assume that the protoplanets have already completed the oligarchic growth process, but we simulate that process from a much more primordial state. Moreover, we consider the action of the gas, which forces migration and eccentricity/inclination damping of the embryos and planetesimals. Our aim is to clarify differences from the studies of \citet{hansen_murray12,hansen_murray13} and to understand, with improved and more realistic simulations, whether in situ formation can explain the observed properties of the close-in super-Earth systems. We performed simulations with different disk models (i.e., different amounts of solids available), but we also present a model without migration and one without gas for comparison. In addition, we extend our model by including the accretion of primitive atmospheres onto super-Earths. Analyses of transit observations combined with radial velocity measurements or with transit-timing variations show that most of the planets larger than 2.5 Earth radii have very low densities (\citealt{marcy_etal14,hadden_lithwick14}), so they are thought to have thick H/He atmospheres (up to 10-20\% by mass). We therefore consider the acquisition of H/He envelopes and discuss whether observed low-density super-Earths can be explained by our in situ accretion model. The rest of the paper is organized as follows.
In Sect.~\ref{sec:model}, we describe our model and methods of \textit{N}-body simulations; in Sect.~\ref{sec:results} we give a series of the results of our simulations; in Sect.~\ref{sec:atmosphere} we present results of simulations that include the accretion of H/He atmospheres; in Sect.~\ref{sec:discussion} the conclusions are provided. \section{Model and methods} \label{sec:model} \subsection{Disk model} We assume that the gas distribution is similar to that of the classical MMSN model. Thus, for the gas surface density, we assume \begin{eqnarray} \label{eq:initial_disk} \Sigma_{\rm g} = 2400 \left(\frac{r}{1 \rm{AU}} \right)^{-3/2} \exp\left(-\frac{t}{1 {\rm Myr}}\right)\,\mathrm{g\, cm}^{-2}, \end{eqnarray} where $\Sigma_{\rm g}$ and $r$ are the gas surface density and the radial distance from the central star, respectively. The value of $\Sigma_{\rm g}$ is 1.4 times the value in the MMSN. The gas dissipation is modeled as an exponential decay with a depletion timescale of 1 Myr. We set the disk inner edge at $r = 0.1 {\rm AU}$. The inner edge of the disk is expected to be at the radius where the orbital period equals the stellar rotation period. This could be a factor of two to three smaller than we assume. Our assumption of an inner edge at 0.1 AU is dictated by constraints due to computational time. Our results concerning the position of the final planets relative to the disk's inner edge can scale with the assumed edge's position. The temperature distribution is that for an optically thin disk \citep{hayashi81}, so that the disk scale height is \begin{eqnarray} \label{eq:scale_height} h = 0.047 \left( \frac{r}{1 {\rm AU}} \right)^{5/4} {\rm AU}. \end{eqnarray} \subsection{Initial conditions and numerical method} We chose an initial solid distribution similar to that of \citet{hansen_murray12}; that is, $50 M_\oplus$ in total are placed between 0.1 and 1 AU.
In our standard model, we set 250 embryos with a mass of $M = 0.1 M_\oplus$ and 1250 planetesimals with a mass of $M = 0.02 M_\oplus$ in such a way as to keep the radial distribution of the solid surface density proportional to $r^{-3/2}$. Planetesimals gravitationally interact with the embryos but not with each other. This set-up is typical of successful simulations for the growth of the terrestrial planets in our solar system. By adopting the same set-up, we explore the effects of the enhanced solid distribution and much shorter orbital distances. There may be some caveats associated with the initial conditions. As shown in Sect.~\ref{sec:results}, once a disk of planetesimals and planetary embryos is set, the growth of planets in the close-in region is quite rapid. If the planetesimal formation process took longer than the planet-growth timescale, it would be important to take planetesimal formation into account in the simulations. However, the formation of planetesimals is not yet fully understood. Therefore, in this study, we do not attempt to include planetesimal formation; instead, we start the simulations with already formed planetesimals. We note, however, that it is unlikely that planetesimal formation takes a timescale comparable to the disk lifetime ($\sim$ Myr) for two reasons. First, this timescale would correspond to 30 million orbital periods, and it is difficult to understand why it should take so long. For instance, in the solar system chondritic planetesimals formed in about one million orbits. Second, the planetesimal formation process is most likely related to the drift of small particles through the disk \citep{johansen_etal14}, which is due to gas drag, so very likely most of the mass was fed to the inner disk at an early time, when the gas density was higher. A late formation of the planetesimals would require that the small particles drift into the inner part of the disk only when the disk is disappearing.
We think that this possibility is difficult to envision. \begin{table} \caption{List of models. In Model~2, the initial solid amount is decreased by a factor of two with respect to Model~1. Model~4 corresponds to the model of \citet{hansen_murray12,hansen_murray13}.} \label{tbl:list} \centering \begin{tabular}{l l l} \hline\hline Model& Type I migration& $e,i$-damping\\ \hline
1& yes& yes\\%8,13,15,59,60,61,71,72,73,74
2& yes& yes\\
3& no& yes\\
4& no& no\\
\hline \end{tabular} \end{table} Table~\ref{tbl:list} lists the simulations for each model. In Model~2, the total mass inside 1 AU is decreased by a factor of two (the total mass is $25 M_\oplus$). In Model~3, type I migration is neglected, but eccentricity damping is still considered. In Model~4, the effect of gas is ignored, as in the simulations of \citet{hansen_murray12,hansen_murray13}. Our \textit{N}-body code is based on SyMBA (Duncan et al. 1998), modified so that the effects of the gas disk are included according to the formulae reported in Sect.~\ref{sec:damping}. When bodies collide with each other, they are merged, assuming perfect accretion. The physical radius of a body is determined by its mass, assuming an internal density of $\rho = 3 {\rm ~g~cm^{-3}}$. The inner boundary of the simulation is set to $r = 0.05 {\rm AU}$. We use a 0.0004-year timestep for the integrations. \subsection{Effects of gas} \label{sec:damping} Eccentricities, inclinations, and semimajor axes are damped by the disk interaction. Planets more massive than roughly $0.1 M_\oplus$ undergo tidal damping by the density waves they excite, while planetesimals undergo aerodynamical gas drag.
\subsubsection{Damping for embryos} The eccentricity damping timescale for embryos, $t_e$, is given by \citep{tanaka_ward04} \begin{eqnarray} \label{eq:e-damp} t_e &=& \frac{1}{0.78}\left(\frac{M}{M_*}\right)^{-1} \left(\frac{\Sigma_{\rm g} r^2}{M_*}\right)^{-1} \left(\frac{c_{\rm s}}{v_{\rm K}}\right)^{4} \Omega^{-1}\nonumber\\ &\simeq& 3 \times 10^2 \left(\frac{r}{1 {\rm AU}}\right)^2 \left(\frac{M}{M_\oplus}\right)^{-1} \left(\frac{M_*}{M_\odot}\right)^{-1/2} {\rm ~yr}, \end{eqnarray} where $M_*$ is the stellar mass, $c_{\rm s}$ the sound speed, $v_{\rm K}$ the Keplerian velocity, and $\Omega$ the orbital frequency. Here the relative motion between the gas and the planets is assumed to be subsonic ($e v_{\rm K} \lesssim c_{\rm s}$). For planets with high eccentricities and inclinations, we include a correction factor according to Eqs.~(11) and (12) of Creswell \& Nelson (2008). The migration timescale $t_a$ is given by \citep{tanaka_etal02,paardekooper_etal11} \begin{eqnarray} \label{eq:a-damp} t_a &=& \frac{1}{\beta} \left(\frac{M}{M_*}\right)^{-1} \left(\frac{\Sigma_{\rm g} r^2}{M_*}\right)^{-1} \left(\frac{c_{\rm s}}{v_{\rm K}}\right)^{2} \Omega^{-1}\nonumber\\ &\simeq& 2 \times 10^5 \beta^{-1} \left(\frac{r}{1 {\rm AU}}\right)^{3/2} \left(\frac{M}{M_\oplus}\right)^{-1} \left(\frac{M_*}{M_\odot}\right)^{1/2} {\rm ~yr}, \end{eqnarray} where $\beta$ is a coefficient that determines the direction and speed of type I migration. The type I migration torque consists of the Lindblad torque, the barotropic and entropy-related parts of the horseshoe drag, and the barotropic and entropy-related parts of the linear corotation torque. \citet{paardekooper_etal11} derived the total type I migration torque, including both the saturation and the cutoff at high viscosity.
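The numerical forms of Eqs.~(\ref{eq:e-damp}) and (\ref{eq:a-damp}) for the nominal disk can be evaluated directly. The following is a minimal Python sketch under our disk parameters (the function names are ours; the coefficient $\beta$ must be supplied from the torque formulae):

```python
def t_ecc_damp(r_au, m_earth, mstar_sun=1.0):
    """Eccentricity damping timescale in years (subsonic regime),
    numerical form of the e-damping equation for the nominal gas disk."""
    return 3.0e2 * r_au ** 2 / m_earth * mstar_sun ** -0.5

def t_migration(r_au, m_earth, beta, mstar_sun=1.0):
    """Type I migration timescale in years, numerical form of the
    a-damping equation; beta sets the direction and speed of migration."""
    return 2.0e5 / beta * r_au ** 1.5 / m_earth * mstar_sun ** 0.5
```

For an Earth-mass embryo at 0.1 AU, `t_ecc_damp` gives 3 yr and `t_migration` (with $\beta = 1$) about $6\times10^3$ yr, which illustrates why neither damping nor migration can be neglected over the $\sim$Myr disk lifetime.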
We write the migration coefficient $\beta$ entering Eq.~(\ref{eq:a-damp}) in the form \begin{eqnarray} \beta = \beta_{\rm L} + \beta_{\rm c,baro} + \beta_{\rm c,ent}, \end{eqnarray} where $\beta_{\rm L},$ $\beta_{\rm c,baro},$ and $\beta_{\rm c,ent}$ are related to the Lindblad torque, the barotropic part of the corotation torque, and the entropy-related part of the corotation torque, respectively. The formulae are given by Eqs.~(11)-(13) in \citet{ogihara_etal15}. In addition, the corotation torque decreases as the planet eccentricity increases (e.g., \citealt{bitsch_kley10}). We take this effect into account using the following formulae \citep{fendyke_nelson14}: \begin{eqnarray} \beta_{\rm c,baro}(e) &=& \beta_{\rm c,baro} \exp\left(-\frac{e}{e_{\rm f}} \right),\\ \beta_{\rm c,ent}(e) &=& \beta_{\rm c,ent} \exp\left(-\frac{e}{e_{\rm f}} \right), \end{eqnarray} where $e_{\rm f} = 0.5h/r + 0.01$. \subsubsection{Damping for planetesimals} The aerodynamical gas drag force per unit mass is \citep{adachi_etal76} \begin{eqnarray} \textbf{\textit{F}}_{\rm aero} = - \frac{1}{2M} C_{\rm D} \pi r_{\rm p}^2 \rho_{\rm g} \Delta u \Delta \textbf{\textit{u}}, \end{eqnarray} where $C_{\rm D}, r_{\rm p}, \rho_{\rm g}$, and $\Delta \textbf{\textit{u}}$ are the gas drag coefficient, the physical radius, the gas density in the disk, and the velocity of the body relative to the gas, respectively. Although our planetesimals have a mass of 0.02 Earth masses, we consider them as super-planetesimals representing a swarm of much smaller objects that cumulatively carry the same total mass. From the size distribution of the asteroid belt, we infer that most planetesimals had a radius of about 50 km \citep{morbidelli_etal09}. This is the size we assume for the calculation of gas drag. For the value of $C_{\rm D}$, we use the same definition as in previous studies (e.g., \citealt{adachi_etal76}; Brasser et al.
2007), which depends on the Mach number, the Knudsen number, and the Reynolds number. \section{Results} \label{sec:results} \subsection{Outcomes of Models~1 and 2} The formation of close-in super-Earth systems can be schematically divided into three phases: (i) growth of embryos from planetesimals, (ii) migration of embryos, and (iii) gas depletion and long-term orbital evolution. Through a series of simulations, we find that the first accretion stage is extremely rapid ($\sim$0.1 Myr, much faster than the typical timescale of several tens of Myr observed in terrestrial planet simulations). Thus, the effect of the enhanced amount of solids is not just that of producing bigger planets: it also reduces the formation timescale. The closer proximity to the central star (i.e., shorter orbital periods) also favors a much faster accretion (see also \citealt{lee_etal14}). Figure~\ref{fig:snap} shows snapshots of the evolution of one simulation for Model~1 and indicates the gas surface density at each time (right axis). Figure~\ref{fig:t-a}(a) shows the time evolution of the semimajor axes for this run. The color of the lines indicates the eccentricity of the planets (see color bar). At $t = 10^3 {\rm yr}$, almost all planetesimals initially placed inside $r \simeq 0.2 {\rm AU}$ have been accreted by embryos. The first accretion phase ends before $t=0.01 {\rm Myr}$ and $0.1 {\rm Myr}$ inside $r\simeq 0.4 {\rm AU}$ and $\simeq 0.7 {\rm AU}$, respectively. \begin{figure} \resizebox{1.0 \hsize}{!}{\includegraphics{snap.eps}} \caption{Snapshots of a system for Model~1. Filled circles represent bodies. The size of the circles is proportional to the radius of the body. The smallest circle represents a 0.02 Earth-mass body, while the largest one represents a 33 Earth-mass body. The solid line indicates the gas surface density (right axis).
} \label{fig:snap} \end{figure} \begin{figure} \resizebox{1.0 \hsize}{!}{\includegraphics{t_a_run13.eps}} \resizebox{1.0 \hsize}{!}{\includegraphics{t_a_run8.eps}} \caption{Time evolution of planets for Model~1. The filled circles connected with solid lines represent the sizes of planets. The smallest circle represents a 0.2 Earth-mass embryo, while the largest ones represent a 33 Earth-mass planet in panel~(a) and a 35 Earth-mass planet in panel~(b). The color of the lines indicates the eccentricity (color bar). } \label{fig:t-a} \end{figure} Because of the short accretion timescale, it is not correct to neglect the gas effects as in Hansen and Murray's works. In fact, the protoplanets become massive well before the gas disk is substantially depleted. The gas forces the planets to migrate inward. All planets would be lost into the star if there were no inner edge of the disk. With a sharp disk inner edge, the innermost planet is trapped at the edge by the planet-trap effect \citep{masset_etal06}. The other planets pile up in mutual mean motion resonances with the former. The typical commensurabilities are between 5:4 and 6:5 with orbital separations of $\simeq 5-10 r_{\rm H}$; thus the final orbits are packed near the disk's edge. The resonant configurations of two bodies that undergo convergent migration are determined by the masses of the bodies and the relative migration speed (e.g., \citealt{mustill_wyatt11}; \citealt{ogihara_kobayashi13}). \citet{ogihara_kobayashi13} derived the critical migration timescale for capture into first-order mean motion resonances. They found that if the relative migration timescale is shorter than $t_{a,{\rm crit}} \simeq 1 \times 10^5 (M_1/M_\oplus)^{-4/3} T_{\rm K}$, where $M_1$ is the mass of the larger body and $T_{\rm K}$ the Keplerian period, bodies can only be captured in resonances closer than the 4:3 resonance (see Eq.~(6) and Table~2 in \citealt{ogihara_kobayashi13}).
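The capture criterion of \citet{ogihara_kobayashi13} quoted above reduces to a simple test. The following Python sketch is ours and is only illustrative:

```python
def t_crit_43(m1_earth):
    """Critical relative migration timescale, in Keplerian periods:
    convergent migration faster than this allows capture only in
    first-order resonances closer than 4:3 (Ogihara & Kobayashi 2013)."""
    return 1.0e5 * m1_earth ** (-4.0 / 3.0)

def captured_closer_than_43(t_rel_periods, m1_earth):
    """True if migration is fast enough that the pair skips the 4:3
    and all wider first-order resonances."""
    return t_rel_periods < t_crit_43(m1_earth)
```

With $M_1 \sim 1\,M_\oplus$ and a relative migration timescale of $\simeq 10^5\,T_{\rm K}$, as found in Model~1, pairs sit at this boundary, consistent with the close 5:4-6:5 commensurabilities obtained in the simulations.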
In the results of Model~1, the mass of the migrating embryos is $\simeq 0.5 M_\oplus$ and the mass of the planets near the edge is $\sim 1 M_\oplus$, so the relative migration timescale is $\simeq 10^5 T_{\rm K}$ (Eq.~(\ref{eq:a-damp})). Migrating embryos therefore settle into closely packed configurations. Five planets form at the end of the simulation presented in Figs.~\ref{fig:snap} and \ref{fig:t-a}(a). In this run, the planets do not exhibit orbital instability after gas depletion because the number of planets in the system is small and the planets are in mean motion resonances, leading to a long-lasting orbital stability (\citealt{chambers_etal96}; \citealt{matsumoto_etal12}). \begin{figure} \resizebox{0.9 \hsize}{!}{\includegraphics{a_m_sum_run13.eps}} \resizebox{0.9 \hsize}{!}{\includegraphics{a_m_exo.eps}} \caption{Results of the 10 simulations of Model~1 (panel (a)) and observed close-in super-Earth systems (panel (b)). } \label{fig:a-m} \end{figure} We performed ten simulation runs for each model with different initial positions of the embryos and planetesimals. The results are qualitatively the same: the final orbits are compact near the disk's inner edge. In some runs, the system undergoes orbital instability during the third phase, after 1~Myr, resulting in non-resonant and relatively separated orbits with a smaller number of planets (see Fig.~\ref{fig:t-a}(b) for example). Figure~\ref{fig:a-m}(a) shows the final orbital configurations of all ten runs, where the planets that formed in the same run are connected with a line. We observe steep mass gradients in this figure: the largest bodies are located near the edge, and the planetary mass monotonically decreases with increasing semimajor axis. This is because, in the presence of a strong migration torque, the resonant system is stable only if the innermost planet is the most massive, as already found by \citet{morbidelli_etal08}.
If originally the innermost planet is not the most massive, an instability typically occurs. The first and second planets have encounters with each other, and the system stabilizes in resonance only after the two planets have exchanged their relative positions. The same is true for the second planet relative to the third, and so forth. Figure~\ref{fig:a-m}(b) shows the orbital configurations of the observed close-in super-Earth systems in which the mass and semimajor axis of all planets are known. It is clear that the bulk architecture of the observed systems is inconsistent with the simulated planetary systems. No steep mass gradients are observed\footnote{The mass gradient is steep in the Kepler-101 system, where the innermost planet has a mass of 51 Earth masses, while the second planet has only about 3 Earth masses.}, and the bulk architecture of the observed systems cannot be reproduced by the simulations of Model~1. If embryos migrated from beyond 1 AU, the mass gradient would be shallower and/or the outer boundary of the final planet distribution at around 0.2 AU would be removed. This suggests that a ``migration model'' would yield better results in reproducing the observed close-in super-Earth systems, which should be investigated by future \textit{N}-body simulations. \begin{figure*} \begin{center} \resizebox{0.7 \hsize}{!}{\includegraphics{p_n_sum.eps}} \end{center} \caption{Comparison of the distributions of period ratios of adjacent pairs of planets for (a) the observations and (b)-(e) the simulations. The distribution of period ratios is presented as a histogram (left y-axis) and as a cumulative distribution (right y-axis). Panels~(b), (c), (d), and (e) show the results of Models~1, 2, 3, and 4, respectively. The dashed lines in panels~(b)-(e) represent the cumulative distribution of the observed planets shown in red in panel (a). The vertical lines indicate the locations of mean motion resonances.
} \label{fig:p-n} \end{figure*} Figure~\ref{fig:p-n}(a) shows the period ratios of adjacent pairs of observed close-in super-Earths, together with the locations of first-order mean motion resonances (e.g., 2:1 and 3:2) and the cumulative distribution of the period ratios (right axis). Most pairs have period ratios between 4:3 and 3:1, and only a few pairs have period ratios below 4:3. Figure~\ref{fig:p-n}(b) shows the period ratio distribution for the ten runs of Model~1, together with a copy of the observed distribution. Some pairs are in closely spaced resonances (e.g., 5:4), while others have been knocked out of resonant orbits during the late instability. Although very closely spaced pairs ($\lesssim$ 4:3) can form in Model~1, the cumulative distribution of observed close-in super-Earths is not matched by the results of Model~1. In fact, a Kolmogorov-Smirnov (K-S) test indicates that the cumulative distributions are statistically different $(Q_{\rm KS} \ll 0.01)$. \begin{figure} \resizebox{1.0 \hsize}{!}{\includegraphics{e_n_sum.eps}} \caption{Comparison of cumulative eccentricity distributions between the observed close-in super-Earths (thin solid line) and the planets formed in the simulations (thick lines, see legend). The thin dotted line indicates the cumulative eccentricity distribution of the observed super-Earths when each eccentricity is assumed to be $e - \sigma$. } \label{fig:e-n} \end{figure} Figure~\ref{fig:e-n} shows the cumulative distributions of the eccentricities of the observed close-in super-Earths and of all the planets produced in the simulations of each model. The general trend is that the eccentricities in Model~1 are smaller than those of the close-in super-Earths. In Model~1, planets with $e < 0.03$ account for 66 percent of all bodies, while exoplanets with $e < 0.03$ make up only 26 percent of all planets.
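The K-S comparisons quoted throughout this section can be reproduced with any statistics package (e.g., `scipy.stats.ks_2samp`, which also returns the significance $Q_{\rm KS}$). For reference, the underlying two-sample statistic $D$ is simply the maximum distance between the two empirical cumulative distributions; a minimal Python illustration (not the code used for the quoted $Q_{\rm KS}$ values) is:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of the sample with values <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in set(a) | set(b))
```

Identical samples give $D = 0$, while fully disjoint samples give $D = 1$; the test then converts $D$ and the sample sizes into the probability $Q_{\rm KS}$ that two draws from the same distribution differ by at least that much.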
One reason for the small eccentricities in Model~1 is that the number of planets in a system is small $(N = 3-6),$ and the orbital stability time is long \citep{chambers_etal96}, which inhibits giant impacts during gas dispersal (see Fig.~\ref{fig:t-a}(a) for example). In addition, even if planets undergo orbital instability as in Fig.~\ref{fig:t-a}(b), the eccentricity is not highly excited. This is because the effect of mutual scattering between planets is limited by the small number of planets. Eccentricity damping also operates during the gas dissipation phase because of the remnant gas (see Fig.~\ref{fig:t-a}(b), for example). A K-S test indicates that the observed and simulated eccentricity distributions are statistically different $(Q_{\rm KS} \ll 0.01)$. In summary, the results of Model~1 cannot reproduce the bulk properties (period ratios, eccentricities) of the observed close-in super-Earth systems. Eccentricities of exoplanets can be overestimated (see Sect.~\ref{sec:results_model3} for discussion). Thus, our results disagree with those of Hansen and Murray, which is not surprising given that the latter neglected the effects exerted by the disk of gas (particularly the inward migration of the protoplanets), not realizing that the super-Earths must form well within the gas-disk lifetime. We also performed ten runs for Model~2, where the initial solid amount is reduced by a factor of two from Model~1, in the hope of observing a slower accretion rate and consequently weaker migration effects. However, the results are qualitatively the same as those of Model~1. In Fig.~\ref{fig:p-n}(c), planetary pairs with period ratios smaller than 4:3 account for 78 percent of all pairs, and the cumulative distribution is different from that of the observed close-in super-Earths. The results are even more closely packed than those of Model~1 because the planets are less vulnerable to orbital instability during the third phase.
The eccentricity distribution in Fig.~\ref{fig:e-n} also differs from that of the close-in super-Earths. \subsection{Outcomes of Models~3 and~4} \label{sec:results_model3} We now present the results of Models~3 and~4, in which we suppress the migration torques exerted by the gas-disk interaction onto the planets (Model~3) or neglect the presence of gas altogether (Model~4). Clearly, both models are academic. In fact, a general mechanism that suppresses type I migration has never been found. Locally, type I migration can be halted or reversed \citep{paardekooper_mellemal06,bitsch_etal14}, but no global weakening of type I migration has ever been demonstrated. As for the absence of gas, this seems inconsistent with the fast growth timescale of the super-Earths. It is difficult to imagine that the gas disappears significantly faster than what we assumed above (1 Myr). The reason we present these models is to highlight the role of migration and eccentricity/inclination damping in the results presented before. \begin{figure} \resizebox{1.0 \hsize}{!}{\includegraphics{t_a_run82.eps}} \resizebox{1.0 \hsize}{!}{\includegraphics{t_a_run16.eps}} \caption{Same as Fig.~\ref{fig:t-a} but for a representative simulation of Model~3 (panel (a)) and Model~4 (panel (b)). } \label{fig:t-a-others} \end{figure} Figures~\ref{fig:t-a-others}(a) and (b) show the typical orbital evolution for Models~3 and~4, respectively. In both cases, the planetary systems are not as compact as in the results of Models~1 and 2. In the simulations for Model~3, in which type I migration is neglected, the planets undergo slow inward migration due to eccentricity damping (see also Sect.~5.2 in \citealt{ogihara_etal14}), and the planets are temporarily captured in mutual mean motion resonances before a few Myr.
Then, they undergo orbital instability and collide with each other, resulting in non-resonant orbital configurations, which are qualitatively similar to those obtained in the slow-migration simulations of \citet{ogihara_ida09}. In the results for Model~4, the planets are never in resonances, which is almost the same as in the simulations of \citet{cossou_etal14} and \citet{hansen_murray12,hansen_murray13}. The average mass of the largest bodies is $\simeq 13~ M_\oplus$ (Model~3) and $\simeq 14~ M_\oplus$ (Model~4), which is lower than for Model~1. Figures~\ref{fig:p-n}(d) and (e) show the period ratio distributions of Models~3 and 4, respectively. Interestingly, the results match the observations much better than those of Models~1 and~2. In the results of Model~3, most pairs have period ratios between 4:3 and 3:1, while some pairs are relatively separated ($>$ 3:1). In the results of Model~4, 85 percent of the pairs lie between 2:1 and 3:1. In comparison with the observed distribution, Model~3 is a good match to the distribution of exoplanets. A K-S test shows that the distributions are similar with a significance level of $Q_{\rm KS} = 0.24$ for Model~3, while the distribution of Model~4 is not similar to the observed distribution $(Q_{\rm KS} \ll 0.01)$. In Fig.~\ref{fig:e-n}, the eccentricities of the planets produced in Model~3 are generally smaller than in Model~4 because of the eccentricity damping. Compared with the distribution of exoplanets, the K-S test shows that the distributions are different for Model~3 $(Q_{\rm KS} \ll 0.01),$ while the distributions for Model~4 are closer, but still not satisfactory $(Q_{\rm KS} = 0.026)$. Thus, neither Model~3 nor Model~4 explains the observations, because the first has problems with the eccentricity distribution, the second with the orbital period distribution. However, it is fair to say that the measurement of the eccentricity of exoplanets is still difficult, and the uncertainty on the results is quite large.
In particular, it has been shown that eccentricities derived from radial velocity surveys can be overestimated (e.g., \citealt{shen_turner08}; \citealt{zakamska_etal11}). Therefore, the observed eccentricity distribution in Fig.~\ref{fig:e-n} may shift to lower values. In this case the results of Model~3 might match the eccentricity distribution as well. As an example, we recalculated the eccentricity distribution in a simple way. The eccentricity of each exoplanet is set to $e - \sigma$, where $\sigma$ is the estimated error. The new distribution is indicated in Fig.~\ref{fig:e-n}. We find that the new distribution gives a better match to Model~3 $(Q_{\rm KS} = 0.30)$ than to Model~4 $(Q_{\rm KS} = 0.010)$. \section{Accretion of primitive atmospheres} \label{sec:atmosphere} An objection often raised against the in situ accretion model for super-Earths is that observations indicate in many cases (e.g., Kepler-11, see \citealt{lissauer_etal11a}) that these planets have a low bulk density. One would expect planets grown in the inner part of the disk to be rocky, given that the high local disk temperature should not have allowed ice condensation. A possibility, however, is that the super-Earths accreted primitive H/He atmospheres, leading to low bulk densities. Recent simulations of the structure and evolution of planetary atmospheres have demonstrated that super-Earths can indeed accrete primordial atmospheres from the protoplanetary disk, provided that they do not accrete solids at a high rate \citep{lee_etal14}. As we have seen above, in situ formation is extremely rapid and reaches completion well within the lifetime of the gas disk. Thus, we expect our planets to be in the condition of a very low accretion rate of solids while there is still gas in the disk, enabling the acquisition of significant atmospheres.
In what follows, we implement the most recent recipes for gas accretion to compute the mass of the atmosphere expected for our super-Earths. \subsection{Model} \label{sec:atmos_model} Once planetary embryos embedded in a gas disk are sufficiently massive (typically the mass of Mars or more), they capture part of the disk gas and acquire an atmosphere of their own \citep[e.g.,][]{wuchterl+2000}. This atmosphere or envelope grows with the mass of the embryo itself until it becomes greater than the mass of the solid core. At this so-called crossover mass ($M_{\rm env}\sim M_{\rm core}$), accretion enters a runaway phase, with the envelope mass increasing exponentially and the planet becoming a giant planet \citep{pollack+1996}. In this process, the accretion of solids has two effects: it increases the mass of the planet, and it heats the envelope. The first effect favors the growth of the envelope, but the second leads to an increase in the crossover mass because the heated envelope is more tenuous. Most works on the growth of giant planet cores have thus focused on obtaining expressions that depend on the accretion rate of planetesimals \citep[e.g.,][]{ikoma_etal01}. However, since in our simulations the accretion of solids stalls while the gas disk is still present, at this later stage the rate of cooling of the envelope becomes crucial for controlling the growth of the envelope \citep{pollack+1996}. We choose to model the growth of the envelope empirically by fitting the results of \citet{ikoma_hori12}, \citet{piso_youdin14}, and \citet{lee_etal14}, which are works that account for the planetary cooling to calculate the resulting envelope growth. Specifically, we model the envelope mass, $M_{\rm env}$, as \begin{equation} \frac{M_{\rm env}}{M_{\rm core}}=\frac{k_2}{1+k_3} \left[ 1+k_3 \left(\frac{t}{t_{\rm run}}\right)^{1/3}\right]e^{k_1 (t/t_{\rm run}-1)}, \label{eq:Menv} \end{equation} where $M_{\rm core}$ is the core mass and $t_{\rm run}$ the time needed to reach the crossover mass.
As for the coefficients, we use $k_1 = (M_{\rm core} /15 M_\oplus)^{-1},$ $k_2 = 1 + (M_{\rm core} / 40 M_\oplus)$, and $k_3= 9$. Most of the uncertainty is linked to the value of $t_{\rm run}$, which depends critically on the opacities chosen, the cooling rate of the core, and the orbital distance. For the present simulations we choose as a fiducial value \begin{equation} t_{\rm run}=10^{7} (M_{\rm core}/5 M_\oplus)^{-3}\rm\ yr, \end{equation} which lies between the results obtained by \citet{ikoma_hori12}, \citet{piso_youdin14}, and \citet{lee_etal14}. The accretion rate onto the planet is then expressed by \begin{equation} \dot{M}_{\rm env} = M_{\rm env}\left[\frac{k_3}{3}\frac{1}{t_{\rm run}^{1/3}t^{2/3}+k_3 t}+\frac{k_1}{t_{\rm run}}\right]. \label{eq:mdot_env} \end{equation} Equation~(\ref{eq:Menv}) implicitly assumes that the disk can supply all the gas that the planet is able to accrete. In reality, this amount is limited by the viscous inflow in the disk at the location of the planet \citep[e.g.,][]{tanigawa_ikoma07}. The accretion rate due to viscous diffusion is \begin{equation} \dot{M}_{\rm vis} \simeq 3 \pi \nu \Sigma_{\rm g}, \label{eq:mdot_vis} \end{equation} where an ``alpha model'' is used for the disk viscosity, $\nu = \alpha c_{\rm s} h$ ($c_{\rm s}$ is the isothermal sound speed), and we adopt $\alpha = 10^{-3}$. The actual accretion rate is the minimum of Eqs.~(\ref{eq:mdot_env}) and (\ref{eq:mdot_vis}). Based on these prescriptions, we recalculate our \textit{N}-body simulations for Model~1, this time accounting for the accretion of an atmosphere after the end of the core accretion phase ($t \lesssim 10^5 {\rm yr}$). \subsection{Results} We now present the results of these simulations with the set-up of Model~1, but accounting for atmosphere accretion.
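The envelope model of Sect.~\ref{sec:atmos_model} (Eqs.~(\ref{eq:Menv})-(\ref{eq:mdot_env})) can be sketched in a few lines of Python, with masses in Earth masses and times in years; $\dot{M}_{\rm env}$ in Eq.~(\ref{eq:mdot_env}) is the analytic time derivative of Eq.~(\ref{eq:Menv}), which one can verify against a finite difference. The function names are ours, and the viscous cap of Eq.~(\ref{eq:mdot_vis}) is only noted in a comment:

```python
import math

K3 = 9.0  # dimensionless fit coefficient k_3

def t_run(m_core):
    """Fiducial time (yr) to reach the crossover mass."""
    return 1.0e7 * (m_core / 5.0) ** -3

def envelope_mass(t, m_core):
    """Envelope mass (Earth masses) from the empirical fit."""
    k1 = (m_core / 15.0) ** -1
    k2 = 1.0 + m_core / 40.0
    x = t / t_run(m_core)
    return m_core * k2 / (1.0 + K3) * (1.0 + K3 * x ** (1.0 / 3.0)) \
        * math.exp(k1 * (x - 1.0))

def mdot_envelope(t, m_core):
    """Unlimited gas accretion rate (Earth masses/yr); in the simulations
    the actual rate is capped by the viscous supply 3*pi*nu*Sigma_g."""
    k1 = (m_core / 15.0) ** -1
    tr = t_run(m_core)
    return envelope_mass(t, m_core) * (
        K3 / 3.0 / (tr ** (1.0 / 3.0) * t ** (2.0 / 3.0) + K3 * t)
        + k1 / tr)
```

At $t = t_{\rm run}$ the ratio $M_{\rm env}/M_{\rm core}$ equals $k_2 \simeq 1$, i.e., the crossover condition, and the growth is exponential beyond that point.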
There are two crucial questions that we wish to address: 1) Can atmosphere accretion be substantial enough to significantly reduce the apparent bulk density of the planet and mimic low-density objects as in the Kepler-11 system? 2) Can atmosphere accretion change the masses of the planets enough to induce a late dynamical instability that can result in less compact systems? Figure~\ref{fig:t-a-env} shows the orbital evolution of each system, in which the color of each planet's line indicates the ratio of the envelope mass to the total mass, $M_{\rm env}/M_{\rm tot}$ (see color bar). We adopt $t_{\rm run}=10^{7} (M_{\rm core}/5 M_\oplus)^{-3}\rm\ yr$ as a fiducial value for $t_{\rm run}$ in the simulation shown in panel (a). In addition, we also perform a simulation under more efficient conditions for envelope accretion, $t_{\rm run}=10^{6} (M_{\rm core}/5 M_\oplus)^{-3}\rm\ yr$, the results of which are shown in panel (b). \begin{figure} \resizebox{1.0 \hsize}{!}{\includegraphics{t_a_env_run13_1e7.eps}} \resizebox{1.0 \hsize}{!}{\includegraphics{t_a_env_run13_1e6.eps}} \caption{Evolution of the semimajor axis and the envelope mass (color bar). Panel (a) shows the result for $t_{\rm run} = 10^{7} (M_{\rm core}/5 M_\oplus)^{-3}\rm\ yr$, while panel (b) shows that for $t_{\rm run} = 10^{6} (M_{\rm core}/5 M_\oplus)^{-3}\rm\ yr$. The filled circles connected with solid lines represent the sizes of planets. The largest circles represent a 33 Earth-mass planet in panel~(a) and a 36 Earth-mass planet in panel~(b). } \label{fig:t-a-env} \end{figure} In Fig.~\ref{fig:t-a-env}(a), the planet at $a=0.1 {\rm AU}$ with $M_{\rm core} = 9.2 M_\oplus$ accretes gas from the disk and ends up retaining a thick atmosphere with $M_{\rm env} = 6.2 M_\oplus$ and $M_{\rm env}/M_{\rm tot} = 0.30$ at $t = 10 {\rm Myr}$. This planet migrates inside the disk inner edge at $t \simeq 2.6 {\rm Myr}$ through the interaction with the four outer bodies. This migration prevents the planet from accreting more gas.
The innermost planet is also moved inside the disk inner edge before $t=0.1 {\rm Myr}$ and stops accreting gas at that point. In the simulation with a shorter value of $t_{\rm run}$, shown in Fig.~\ref{fig:t-a-env}(b), a planet with $M_{\rm env} = 14 M_\oplus$ and $M_{\rm env}/M_{\rm tot} = 0.56$ eventually forms and also moves inside the disk edge at $t \simeq 0.3 {\rm Myr}$. It is interesting to note that as the envelope mass increases, the systems become vulnerable to orbital instability. In panel (b), in fact, the system undergoes orbital instability earlier than in panel (a), and the number of final planets is also smaller. As for the first question posed at the beginning of this section, we find that in both simulations one planet can acquire a thick H/He atmosphere from the disk, which may explain the origin of the observed low-density super-Earths. Regarding the second question, we also observe that the acquisition of a massive atmosphere by one planet destabilizes closely spaced systems, leading to relatively well separated systems with fewer planets, or even single-planet systems. The properties (e.g., orbital separation and bulk density) of the final system shown in Fig.~\ref{fig:t-a-env}(a) are reminiscent of some known planetary systems. For example, two super-Earths were discovered in the Kepler-36 system, where the inner planet would be a rocky planet without a thick atmosphere, and the outer one would possess a thick atmosphere. However, the typical properties of the observed close-in super-Earths are unlikely to be reproduced. This is because the systems are tightly packed near the edge before the acquisition of atmospheres (see $t=10^5$ yr in Fig.~\ref{fig:t-a-env}), so that systems with separations wider than the 2:1 resonance, which are observed among super-Earth systems (see Fig.~\ref{fig:p-n}(a)), can hardly be reproduced by simulations for Model~1.
\citet{lee_etal14} point out that super-Earths with a mass of $10 M_\oplus$ tend to undergo runaway gas accretion, thus becoming gas giant planets. We do not observe this phenomenon in the results of Figs.~\ref{fig:t-a-env}(a) and (b), even though the core masses are high. This is because the massive planets move inside the disk inner edge, where there is no gas, and cease envelope accretion. \section{Discussion and conclusions} \label{sec:discussion} We have re-examined the in situ formation of close-in super-Earths with improved simulations, in which the effects of the gas disk are considered. The simulations start with small embryos and planetesimals. We find that the accretion of planets is extremely rapid owing to the large amount of solid material and the short orbital periods. Thus, the effects of the gas disk cannot be ignored when investigating the in situ formation of close-in super-Earths. We performed ten simulation runs for our fiducial model and found the following. 1) The orbital architecture of the resultant systems is very compact near the disk inner edge. 2) The eccentricities of the super-Earths are small because the planets can remain stable after gas depletion; even if they undergo orbital instability, the eccentricities are not highly excited. 3) The masses of the planets decrease monotonically with increasing semimajor axis. These characteristics are not consistent with the observed close-in super-Earths. In fact, the observed cumulative distributions of period ratios of adjacent pairs and of eccentricities are statistically different from those we produce. We have also investigated orbital evolution including the accretion of primitive atmospheres onto the super-Earths. The results show that close-in super-Earths that formed in situ can acquire a thick H/He atmosphere, although the planets stop envelope accretion once they migrate inside the disk's inner edge.
Interestingly, if type I migration is neglected (but eccentricity damping is included), the results match the observations much better. However, no mechanism capable of suppressing type I migration over the whole inner disk has ever been found. Recent studies have shown that MRI-driven disk winds, in which gas is blown away from the surface of the disk, can alter the density profile of the gas disk, potentially slowing down or even reversing the migration of the protoplanets (e.g., \citealt{suzuki_inutsuka09}; \citealt{suzuki_etal10}; \citealt{ogihara_etal15}). This possibility, however, requires further investigation. Unless a mechanism for a global reduction of type I migration is demonstrated, our results imply that in situ accretion of close-in super-Earths is unlikely. \begin{acknowledgements} We thank John Chambers for comments that helped us improve the manuscript. We also thank Yasunori Hori and Hiroki Harakawa for helpful comments. We thank the CRIMSON team, which manages the mesocentre of the OCA, on which most of the simulations were performed. Numerical computations were in part conducted on the general-purpose PC farm at CfCA of NAOJ. M.O. is supported by the JSPS Postdoctoral Fellowships for Research Abroad. A.M. and T.G. were supported by the ANR, project number ANR-13-BS05-0003-01 (projet MOJO: Modeling the Origin of JOvian planets). \end{acknowledgements}
\section{Introduction} \label{sec:Intro} Let $\mathbf{x}_0 \in \mathbb{R}^{m}$ denote a sparse solution of an underdetermined system of linear equations \begin{equation} \label{USLE} \mathbf{b} = \mathbf{A} \mathbf{x} \end{equation} in which $\mathbf{b} \in \mathbb{R}^{n}$ and $\mathbf{A} \in \mathbb{R}^{n \times m}, m > n$. Suppose that $\| \mathbf{x}_0 \|_0 = k$, where $\| \mathbf{x}_0 \|_0$ designates the number of nonzero components, or the $\ell_0$ norm, of $\mathbf{x}_0$. Further, let $\spark(\mathbf{A})$ represent the spark of $\mathbf{A}$, defined as the minimum number of columns of $\mathbf{A}$ that are linearly dependent, and let $\delta_{2k}(\mathbf{A})$ denote the restricted isometry constant of order $2k$ for the matrix $\mathbf{A}$ \cite{Cand08}. It is well known that if $k < \spark(\mathbf{A}) / 2$ or $\delta_{2k}(\mathbf{A}) < 1$, then $\mathbf{x}_0$ is the unique sparsest solution of the above set of equations \cite{Cand08,DonoE03}. When the sparsest solution of \eqref{USLE} is sought, one needs to solve \begin{equation} \label{l0min} \min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \text{subject to} \quad \mathbf{A}\mathbf{x}=\mathbf{b}. \end{equation} However, the above program is generally NP-hard \cite{Nata95} and quickly becomes intractable as the dimensions of the problem increase. Since finding the sparse solution of \eqref{USLE} has many applications in various fields of science and engineering (cf. \cite{CandW08} for a comprehensive list of applications), many practical alternatives to \eqref{l0min} have been proposed \cite{CandRT06,Trop04,MohiBJ09,MaleKBJR15}. If the solution obtained by these algorithms satisfies one of the above sufficient conditions, then, assuredly, this solution is the sparsest one. Now, consider the case in which the solution given by an algorithm is only approximately sparse, meaning that it has some dominant components, while the other components are very small but not equal to zero.
If the total number of nonzero components is so large that neither of the mentioned conditions holds, it is not clear whether this solution is close to the true sparse solution or not. However, intuitively, one expects that if the number of effective components is small, then the obtained solution should not be far away from the true solution. Immediately, the following questions may be raised. Is this solution still close to the unique sparse solution of $\mathbf{b} = \mathbf{A} \mathbf{x}$? Is it possible in this case to establish a bound on the error of finding $\mathbf{x}_0$ without knowing $\mathbf{x}_0$? Similar questions can be asked when there is error or noise in \eqref{USLE}. Taking the noise into account, \eqref{USLE} is updated to \begin{equation} \label{USLENoisy} \mathbf{b} = \mathbf{A} \mathbf{x} + \mathbf{e}, \end{equation} where $\mathbf{e}$ is the vector of noise or error. In this setting, to estimate $\mathbf{x}_0$ given $\mathbf{b}$ and $\mathbf{A}$, the equality constraint in \eqref{l0min} is relaxed, and the following optimization problem should be solved: \begin{equation} \label{l0minNoisy} \min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \text{subject to} \quad \| \mathbf{A}\mathbf{x} - \mathbf{b}\| \leq \epsilon, \end{equation} where $\epsilon \geq \| \mathbf{e} \|$ is some constant and $\| \cdot \|$ designates the $\ell_2$ norm. The answers to the above questions were first given in \cite{BabaJM11}. Let $\widetilde{\xb}$ denote the output of an algorithm that finds or estimates $\mathbf{x}_0$ from \eqref{USLE} or \eqref{USLENoisy}. Specifically, \cite{BabaJM11} provides two upper bounds on the error $\| \mathbf{x}_0 - \widetilde{\xb} \|$. The first is rather simple to compute but turns out to be loose; the second is tight but, in general, much more complicated to compute.
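On very small instances, the combinatorial quantities in the uniqueness conditions above can be computed by exhaustive search. A minimal sketch (brute force over all column subsets, so feasible only for tiny $m$; the test matrix is purely illustrative) computes $\spark(\mathbf{A})$ and checks $k < \spark(\mathbf{A})/2$:

```python
import itertools

import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force,
    exponential in m; only feasible for tiny matrices)."""
    n, m = A.shape
    for p in range(1, m + 1):
        for cols in itertools.combinations(range(m), p):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < p:
                return p
    return m + 1  # by convention, when all m columns are independent

# Illustrative 3x4 matrix: every 3 columns are independent, so spark = 4.
A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])
k = 1
print("spark(A) =", spark(A))                    # 4
print("x0 unique for k = 1:", k < spark(A) / 2)  # True
```

For this matrix, any 1-sparse solution of $\mathbf{A}\mathbf{x} = \mathbf{b}$ is guaranteed to be the unique sparsest one.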
Herein, in the spirit of the loose bound in \cite{BabaJM11}, we provide a better bound which is based on the same parameter of the matrix $\mathbf{A}$ but is \emph{strictly tighter} than the loose bound in \cite{BabaJM11}. Moreover, our proposed bound is obtained in a much simpler way, with a \emph{shorter} algebraic manipulation. The proposed bound is extended to the noisy setting defined in \eqref{USLENoisy}. Furthermore, these results are also generalized to the problem of low-rank matrix recovery from compressed linear measurements \cite{RechFP10}. The bounds introduced in this paper can be used in analyzing the performance of algorithms in sparse vector and low-rank matrix recovery, especially those algorithms that provide approximately sparse or low-rank solutions such as \cite{MohiBJ09} and \cite{MaleBAJ14,MaleBS14}. {\color{\RTre_Col} Other algorithms, under some conditions, can also benefit from the analysis presented in this paper. It is known that the solution obtained by some numerical solvers of basis pursuit \cite{ChenDS98}, like $\ell_1$-magic \cite{CandR05}, is usually not exactly sparse. In fact, due to limited numerical accuracy, the obtained solution has some very small nonzero entries. Our results can be used to find upper bounds on the $\ell_2$ norm of this kind of error. Finally, when greedy algorithms \cite{Trop04} are used with an overestimated number of nonzero components of the true solution, our bound can be exploited to characterize the conditions under which the given solution is close to the true one.} We emphasize that the bounds are obtained without any assumption on the recovery algorithm, so it may be possible to improve them by exploiting properties of a specific algorithm. A similar upper bound on the error of sparse recovery in the noisy case has been proposed in \cite{GribFV06}.
This upper bound, however, is only applicable when the given solution has a sparsity level{\color{\RTre_Col}, the number of nonzero components,} not greater than that of the true solution, while our bounds are obtained under the opposite assumption on the sparsity level of the given solution. The rest of this paper is organized as follows. After introducing the notations used throughout the paper, in Section \ref{sec:Bounds}, we first present the upper bounds on the error of sparse vector recovery and, next, generalize them to the low-rank matrix recovery problem. Section \ref{sec:Proofs} is devoted to the proofs of the results in Section \ref{sec:Bounds}, followed by conclusions in Section \ref{sec:Con}. \emph{Notations}: For a vector $\mathbf{x}$, $\| \mathbf{x} \|, \| \mathbf{x} \|_1$, and $\| \mathbf{x} \|_0$ denote the $\ell_2$, $\ell_1$, and the so-called $\ell_0$ norms, respectively. Moreover, $\xb^{\downarrow}$ denotes a vector obtained by sorting the elements of $\mathbf{x}$ in terms of magnitude in descending order, and $x_i$ designates the $i$th component of $\mathbf{x}$. $\mathbf{x}_{I}$ represents the subvector obtained from $\mathbf{x}$ by keeping components indexed by the set $I$. A vector is called $k$-sparse if it has exactly $k$ nonzero components. For a matrix $\mathbf{A}$, $\mathbf{a}_i$ denotes the $i$th column. Additionally, $\spark(\mathbf{A})$ and $\nullS(\mathbf{A})$ designate the minimum number of columns of $\mathbf{A}$ that are linearly dependent and the null space of $\mathbf{A}$, respectively. Similar to the vectors, $\mathbf{A}_{I}$ represents the submatrix of $\mathbf{A}$ obtained by keeping those columns indexed by $I$. It is always assumed that the singular values of matrices are sorted in descending order, and $\sigma_i(\mathbf{X})$ denotes the $i$th largest singular value of $\mathbf{X}$. 
Let $\mathbf{X} = \sum_{i=1}^{q} \sigma_i \mathbf{u}_i \mathbf{v}_i^T$, where $q = \rank(\mathbf{X})$, denote the singular value decomposition (SVD) of $\mathbf{X}$. $\mathbf{X}_{(r)} = \sum_{i=1}^{r} \sigma_i \mathbf{u}_i \mathbf{v}_i^T$ represents the matrix obtained by keeping the first $r$ terms in the SVD of $\mathbf{X}$, and $\mathbf{X}_{(-r)} = \mathbf{X} - \mathbf{X}_{(r)}$. $\|\mathbf{X}\|_F$ denotes the Frobenius norm, and $\|\mathbf{X}\|_* \triangleq \sum_{i=1}^{q} \sigma_i(\mathbf{X})$, in which $q = \rank(\mathbf{X})$, stands for the nuclear norm. \section{Upper Bounds} \label{sec:Bounds} In this section, the upper bounds on the error of sparse vector and low-rank matrix recovery are presented. \subsection{Sparse Vector Recovery} \label{SpRec} Following the common practice in the compressive sensing (CS) literature, we refer to $\mathbf{b}, \mathbf{A}$, and $\mathbf{e}$ in \eqref{USLENoisy} as the measurement vector, sensing matrix, and noise vector, respectively. Before stating the results, we recall two definitions. \newtheorem{Def1}{Definition} \begin{Def1}[\hspace{-0.05em}\cite{Cand08}] For a matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$ and all integers $k \leq m$, the restricted isometry constant (RIC) of order $k$ is the smallest constant $\delta_k(\mathbf{A})$ such that \begin{equation} \label{RIPDefInEq1} (1-\delta_{k}(\mathbf{A})) \|\mathbf{x}\|^2 \leq \|\mathbf{A} \mathbf{x}\|^2 \leq (1+\delta_{k}(\mathbf{A})) \|\mathbf{x}\|^2 \end{equation} holds for all vectors $\mathbf{x}$ with sparsity at most $k$. \end{Def1} \newtheorem{Def2}[Def1]{Definition} \begin{Def2}[\hspace{-0.05em}\cite{BabaJM11}] For a matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$ and $p \leq \spark(\mathbf{A}) - 1$, let $\sigma_{\min,p}(\mathbf{A}) > 0$ be the smallest singular value of all $\binom{m}{p}$ possible $n \times p$ submatrices of $\mathbf{A}$. \end{Def2} The following theorem presents the upper bounds for both the noiseless and noisy cases.
We deliberately separate the noisy and noiseless cases in order to be able to provide a tighter bound in the noiseless setting. \newtheorem{Thm1}{Theorem} \begin{Thm1} \label{VecBound} Let $\mathbf{A} \in \mathbb{R}^{n \times m}$, $m > n$, denote a sensing matrix. We have the following upper bounds. \begin{itemize} \item Noiseless case: Suppose that $\mathbf{x}_0$ is a $k$-sparse solution of $\mathbf{A} \mathbf{x} = \mathbf{b}$, where $k < \spark(\mathbf{A}) / 2$. For all $\widetilde{\xb}$ solutions of $\mathbf{A} \mathbf{x} = \mathbf{b}$ satisfying $\widetilde{x}^{\downarrow}_{k+1} \leq \alpha$, \begin{equation} \label{InEqVecSigNoNoise} \| \mathbf{x}_0 - \widetilde{\xb} \|^2 \leq \Big( 1 + (m - 2k) \frac{\max_i \|\mathbf{a}_i\|^2}{\sigma_{\min,2k}^2(\mathbf{A})} \Big) (m - 2k) \alpha^2. \end{equation} \item Noisy case: Let $\mathbf{x}_0$ be any arbitrary vector with $\| \mathbf{x}_0 \|_0 = k < \spark(\mathbf{A}) / 2$, and let $\mathbf{b} = \mathbf{A} \mathbf{x}_0 + \mathbf{e}$, where $\mathbf{e}$ is noise with $\| \mathbf{e} \| \leq \epsilon$. For all $\widetilde{\xb}$ vectors satisfying $\| \mathbf{b} - \mathbf{A} \widetilde{\xb} \| \leq \Delta$ and $\widetilde{x}^{\downarrow}_{k+1} \leq \alpha$, the error $\| \mathbf{x}_0 - \widetilde{\xb} \|$ is bounded by \begin{align} \label{InEqVecSigNoisy} \| \mathbf{x}_0 - \widetilde{\xb} \| \leq & \Big( 1 + \sqrt{m - 2k}\frac{ \max_i \|\mathbf{a}_i\|}{\sigma_{\min,2k}(\mathbf{A})} \Big) \sqrt{m - 2k} \, \, \alpha \nonumber\\ & + \frac{\Delta + \epsilon}{\sigma_{\min,2k}(\mathbf{A})}. \end{align} \end{itemize} \end{Thm1} In brief, the above bounds state that if we have a solution $\widetilde{\xb}$ with only $k$ dominant components, then this vector is not far from the sparse solution, provided that $\sigma_{\min,2k}(\mathbf{A})$ is not very small. In particular, the bound in \eqref{InEqVecSigNoNoise} vanishes when $\widetilde{\xb}$ is $k$-sparse, reducing to the well-known uniqueness theorem in \cite{DonoE03}.
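On problem sizes where the $\binom{m}{2k}$ submatrices can be enumerated, $\sigma_{\min,2k}(\mathbf{A})$ and the noiseless bound \eqref{InEqVecSigNoNoise} can be evaluated directly; a minimal numerical sketch (brute force; the small Gaussian sensing matrix is illustrative):

```python
import itertools

import numpy as np

def sigma_min_p(A, p):
    """Smallest singular value over all n x p column submatrices of A
    (Definition 2; brute force, so only for tiny instances)."""
    m = A.shape[1]
    return min(np.linalg.svd(A[:, cols], compute_uv=False)[-1]
               for cols in itertools.combinations(range(m), p))

def noiseless_bound(A, k, alpha):
    """Right-hand side of the noiseless bound on ||x0 - x_tilde||^2."""
    m = A.shape[1]
    s = sigma_min_p(A, 2 * k)
    col_max = np.max(np.linalg.norm(A, axis=0))  # max_i ||a_i||
    return (1.0 + (m - 2 * k) * col_max**2 / s**2) * (m - 2 * k) * alpha**2

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8)) / np.sqrt(6)  # small Gaussian sensing matrix
k, alpha = 1, 1e-3                            # k dominant entries, tail level alpha
print("sigma_min,2k(A) =", sigma_min_p(A, 2 * k))
print("error bound     =", np.sqrt(noiseless_bound(A, k, alpha)))
```

As the theorem predicts, the bound shrinks linearly with the tail level $\alpha$ and vanishes when $\widetilde{\xb}$ is exactly $k$-sparse ($\alpha = 0$).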
Moreover, notice that these bounds work uniformly for all sparse vectors $\xb_0$ of sparsity level $k$; that is, they are independent of the positions and magnitudes of the nonzero components of $\xb_0$. {\color{black}\emph{Remark 1.}} The loose bounds in \cite[Theorems 2 \& 4]{BabaJM11} translated to our notations in the noiseless and noisy settings are \begin{IEEEeqnarray}{rCl} \label{BZBound1} \| \xb_0 - \widetilde{\xb} \| & \leq & \Big( 1 + \frac{1}{\sigma_{\min,2k}(\mathbf{A})} \Big) m \alpha,\\ \| \xb_0 - \widetilde{\xb} \| & \leq & \Big( 1 + \frac{1}{\sigma_{\min,2k}(\mathbf{A})} \Big) m \alpha + \frac{\Delta + \epsilon}{\sigma_{\min,2k}(\mathbf{A})}. \label{BZBound2} \end{IEEEeqnarray} The bounds in \eqref{BZBound1} and \eqref{BZBound2} are applicable only if the sensing matrix has unit $\ell_2$-norm columns, whereas Theorem \ref{VecBound} is valid without this restriction. To compare our bounds in Theorem \ref{VecBound} to \eqref{BZBound1} and \eqref{BZBound2}, let $U$ denote the square root of the upper bound in \eqref{InEqVecSigNoNoise}. Setting $\max_i \|\mathbf{a}_i\| = 1$ in $U$, one can write that \begin{IEEEeqnarray*}{rCl} U & = & \sqrt{\Big( 1 + \frac{m - 2k}{\sigma_{\min,2k}^2(\mathbf{A})} \Big)(m - 2k)} \, \, \alpha\\ & < & \Big(1 + \frac{\sqrt{m - 2k}}{\sigma_{\min,2k}(\mathbf{A})} \Big)\sqrt{m - 2k} \, \, \alpha = U_2\\ & = & \Big(\frac{1}{\sqrt{m - 2k}} + \frac{1}{\sigma_{\min,2k}(\mathbf{A})} \Big)(m - 2k) \alpha\\ & < & \Big( 1 + \frac{1}{\sigma_{\min,2k}(\mathbf{A})}\Big) (m - 2k) \alpha\\ & < & \Big( 1 + \frac{1}{\sigma_{\min,2k}(\mathbf{A})}\Big) m \alpha, \end{IEEEeqnarray*} where $U_2$ is the first term of the upper bound in \eqref{InEqVecSigNoisy} with $\max_i \|\mathbf{a}_i\| = 1$. The above inequalities prove that the bounds \eqref{InEqVecSigNoNoise} and \eqref{InEqVecSigNoisy} are strictly tighter than the corresponding bounds in \cite{BabaJM11} formulated in \eqref{BZBound1} and \eqref{BZBound2}.
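The strict inequality chain above is easy to confirm numerically; the following sketch compares the proposed noiseless bound with the loose bound \eqref{BZBound1} on a small matrix with unit $\ell_2$-norm columns (the setting in which \eqref{BZBound1} applies; the matrix is illustrative):

```python
import itertools

import numpy as np

def sigma_min_p(A, p):
    """Smallest singular value over all n x p column submatrices (brute force)."""
    m = A.shape[1]
    return min(np.linalg.svd(A[:, cols], compute_uv=False)[-1]
               for cols in itertools.combinations(range(m), p))

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))
A /= np.linalg.norm(A, axis=0)   # normalize to unit l2-norm columns

m, k, alpha = A.shape[1], 1, 1e-2
s = sigma_min_p(A, 2 * k)

# Proposed bound (square root of the noiseless bound) with max_i ||a_i|| = 1.
proposed = np.sqrt((1 + (m - 2 * k) / s**2) * (m - 2 * k)) * alpha
# Loose bound of the earlier work, translated to the same notation.
loose = (1 + 1 / s) * m * alpha

print(f"proposed bound = {proposed:.4e}")
print(f"loose bound    = {loose:.4e}")
```

For any such instance with $m - 2k > 1$, the proposed bound is strictly smaller, as the algebraic chain guarantees.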
{\color{black}\emph{Remark 2.}} {\color{\RTre_Col}In general, finding $\sigma_{\min,2k}(\mathbf{A})$ is a combinatorial problem\footnote{{\color{\RTre_Col}One should calculate the singular values of all $\binom{m}{2k}$ possible $n \times 2k$ submatrices of $\mathbf{A}$.}} and NP-hard \cite{BabaJM11}. However, for a random matrix $\mathbf{A}$, under some conditions, the smallest singular value of all $n \times 2k$ submatrices is highly concentrated around a certain value. In particular, let $\mathbf{A}_{(2k)}$ denote any $n \times 2k$ submatrix of $\mathbf{A}$. If all the entries of $\mathbf{A}$ are independent and identically distributed (iid) from a normal distribution $\ensuremath{N(0,\frac{1}{n})}$ and $2k < n$, then for any $t > 0$, we have \cite{BabaJM11} \begin{equation*} p\Big\{ \sigma_{min}(\mathbf{A}_{(2k)}) < 1 - \sqrt{ \frac{2k}{n} } - t \Big\} \leq e^{-\frac{nt^2}{2}}, \end{equation*} where $p\{\cdot\}$ and $\sigma_{min}(\cdot)$ denote the probability of the event described in the braces and the smallest singular value, respectively. This shows that, as the dimensions of $\mathbf{A}$ increase, the smallest singular value of all $n \times 2k$ submatrices is equal to or larger than $1 - \sqrt{ \frac{2k}{n} }$ with very high probability. In line with this, for any matrix with iid entries from a zero-mean, $\frac{1}{n}$-variance distribution with a finite fourth-order moment, when $n,m \to \infty$ while $\frac{2k}{n} \to c$, $\sigma_{min}(\mathbf{A}_{(2k)})$ converges to $1 - \sqrt{c}$ almost surely \cite{BaiY93}.} {\color{black}\emph{Remark 3.}} {\color{black} In addition to the above probabilistic values for $\sigma_{\min,2k}(\mathbf{A})$,} the bounds in Theorem \ref{VecBound} can also be stated in terms of $\delta_{2k}(\mathbf{A})$ instead of $\sigma_{\min,2k}(\mathbf{A})$.
In fact, \begin{equation*} \sigma_{\min,2k}(\mathbf{A}) = \min_{\| \mathbf{x} \|_0 \leq 2k} \frac{\| \mathbf{A} \mathbf{x} \|}{\| \mathbf{x} \|}, \end{equation*} or, equivalently, $\| \mathbf{A} \mathbf{x} \|^2 \geq \sigma_{\min,2k}^2(\mathbf{A}) \| \mathbf{x} \|^2$ for all $\mathbf{x}$ with sparsity at most $2k$. Since $\delta_{2k}(\mathbf{A})$ in \eqref{RIPDefInEq1} is defined such that both inequalities are satisfied, it can be concluded that $\sigma_{\min,2k}^2(\mathbf{A}) \geq 1 - \delta_{2k}(\mathbf{A})$. Consequently, the following bounds, {\color{\ROne_Col}under the condition $\delta_{2k}(\mathbf{A}) < 1$}, are a reformulation of the bounds in Theorem \ref{VecBound} in terms of $\delta_{2k}(\mathbf{A})$, which is frequently used in the CS literature. \begin{itemize} \item Noiseless case: \begin{equation*} \| \mathbf{x}_0 - \widetilde{\xb} \|^2 \leq \Big( 1 + (m - 2k) \frac{\max_i \|\mathbf{a}_i\|^2}{1 - \delta_{2k}(\mathbf{A})} \Big) (m - 2k) \alpha^2. \end{equation*} \item Noisy case: \begin{align*} \| \mathbf{x}_0 - \widetilde{\xb} \| \leq & \Big( 1 + \sqrt{m - 2k} \frac{ \max_i \|\mathbf{a}_i\|}{\sqrt{1 - \delta_{2k}(\mathbf{A})}} \Big) \sqrt{m - 2k} \,\, \alpha \nonumber\\ & + \frac{\Delta + \epsilon}{\sqrt{1 - \delta_{2k}(\mathbf{A})}}. \end{align*} \end{itemize} \subsection{Low-rank Matrix Recovery} Recovery of a low-rank matrix from compressed linear measurements \cite{RechFP10} is the task of finding the low-rank matrix $\mathbf{X}_0 \in \Rbb^{n_1 \times n_2}$ from the underdetermined measurements $\mathbf{b} = \mathcal{A}(\mathbf{X}_0)$, where $\mathbf{b} \in \mathbb{R}^{m}$, $\mathcal{A}:\Rbb^{n_1 \times n_2} \to \mathbb{R}^{m}$ is a linear operator, and $m < n_1 n_2$.
In the presence of noise, the measurement model is changed to $\mathbf{b} = \mathcal{A}(\mathbf{X}_0) + \mathbf{e}$, where $\mathbf{e}$ is the vector of noise.\footnote{The parameters $\mathbf{b},m,\mathbf{e},$ and $n$ (to be defined later on in this subsection) should not be confused with the similar parameters defined in Subsection \ref{SpRec}.} This recovery problem is a generalization of the sparse vector recovery problem introduced in Section \ref{sec:Intro} to matrix variables. Consequently, the naive approach for recovering $\mathbf{X}_0$ from either noiseless or noisy measurements is \begin{equation} \label{rankmin} \min_{\mathbf{X}} \rank(\mathbf{X}) \quad \text{subject to} \quad \| \mathcal{A}(\mathbf{X}) - \mathbf{b} \| \leq \epsilon, \end{equation} where $\epsilon$ is some constant not less than $\| \mathbf{e} \|$ in the noisy case and equal to 0 in the noiseless case. In this subsection, we present upper bounds on the error of recovering or estimating low-rank matrices from noiseless and noisy measurements when the obtained solution is approximately low-rank. Similar to the vector case, a matrix is approximately low-rank if it has a few dominant singular values while its remaining singular values are very small. Before stating the results, we first recall the definition of the RIC for linear operators. \newtheorem{Def3}[Def1]{Definition} \begin{Def3}[\hspace{-0.05em}\cite{CandP11}] For a linear operator $\ACal: \Rbb^{n_1 \times n_2} \rightarrow \Rbb^m$ and all integers $r \leq \min(n_1,n_2)$, the RIC of order $r$ is the smallest constant $\delta_r(\mathcal{A})$ such that \begin{equation*} (1-\delta_{r}(\mathcal{A})) \|\mathbf{X}\|_F^2 \leq \|\mathcal{A}(\mathbf{X})\|^2 \leq (1+\delta_{r}(\mathcal{A})) \|\mathbf{X} \|_F^2 \end{equation*} holds for all matrices $\mathbf{X}$ with rank at most $r$.
\end{Def3} \newtheorem{Thm2}[Thm1]{Theorem} \begin{Thm2} \label{MatBound} Let $\ACal: \Rbb^{n_1 \times n_2} \rightarrow \Rbb^m, m < n_1 n_2,$ denote a linear operator, and let $n = \min(n_1,n_2)$. We have the following upper bounds. \begin{itemize} \item Noiseless case: Suppose that $\mathbf{X}_0$ is a rank $r$ solution of $\mathbf{b} = \mathcal{A}(\mathbf{X})$. If $0 < \delta_{2r}(\mathcal{A}) < 1$, then, for all $\widetilde{\Xb}$ solutions of $\mathbf{b} = \mathcal{A}(\mathbf{X})$ satisfying $\sigma_{r+1}(\widetilde{\Xb}) \leq \alpha$, \begin{equation} \label{InEqMatSigNoNoise} \| \mathbf{X}_0 - \widetilde{\Xb} \|_F^2 \leq \Big( 1 + (n - 2r) \frac{1 + \delta_1(\mathcal{A})}{1 - \delta_{2r}(\mathcal{A})} \Big) (n - 2r) \alpha^2. \end{equation} \item Noisy case: Let $\mathbf{X}_0$ be any arbitrary matrix of rank $r$, and let $\mathbf{b} = \mathcal{A}(\mathbf{X}_0) + \mathbf{e}$, where $\mathbf{e}$ is noise with $\| \mathbf{e} \| \leq \epsilon$. If $0 < \delta_{2r}(\mathcal{A}) < 1$, then for all $\widetilde{\Xb}$ estimates of $\mathbf{X}_0$ satisfying $\| \mathbf{b} - \mathcal{A}(\widetilde{\Xb}) \| \leq \Delta$ and $\sigma_{r+1}(\widetilde{\Xb}) \leq \alpha$, the error $\| \mathbf{X}_0 - \widetilde{\Xb} \|_F$ is bounded by \begin{align} \label{InEqMatSigNoisy} \| \mathbf{X}_0 - \widetilde{\Xb} \|_F \leq & \Bigg( 1 + \sqrt{(n - 2r)\frac{1 + \delta_1(\mathcal{A})}{1 - \delta_{2r}(\mathcal{A})}} \Bigg) \sqrt{n - 2r} \,\, \alpha \nonumber\\ & + \frac{\Delta + \epsilon}{\sqrt{1 - \delta_{2r}(\mathcal{A})}}. \end{align} \end{itemize} \end{Thm2} \section{Proofs of Results} \label{sec:Proofs} \subsection{Proof of Theorem \ref{VecBound}} We need the following lemmas. \newtheorem{Lem1}{Lemma} \begin{Lem1} \label{VecNoNoise} Let $\mathbf{A} \in \mathbb{R}^{n \times m}$, $m > n$, be a sensing matrix.
For every $\mathbf{x} \in \nullS(\mathbf{A})$ and any subset $I$ of $\{1,\cdots,m\}$ with cardinality $m - p$, where $ p \leq \spark(\mathbf{A}) -1$, we have that \begin{equation} \label{InEqVecSig} \| \mathbf{x} \|^2 \leq \Big( 1 + (m - p) \frac{\max_i \|\mathbf{a}_i\|^2}{\sigma_{\min,p}^2(\mathbf{A})} \Big) \| \xb_{I} \|^2. \end{equation} \begin{IEEEproof} First, we notice that \begin{IEEEeqnarray}{rCl} \Big \| \sum_{i \in I} x_i \mathbf{a}_i \Big \|^2 & \leq & \Big( \sum_{i \in I} \| x_i \mathbf{a}_i \| \Big)^2 = \Big( \sum_{i \in I} |x_i| \| \mathbf{a}_i \| \Big)^2,\nonumber \\ & \leq & \max_i \| \mathbf{a}_i \|^2 \Big( \sum_{i \in I}| x_{i} | \Big)^2, \nonumber \\ & = & \max_i \| \mathbf{a}_i \|^2 \| \xb_{I} \|_1^2, \nonumber \\ & \leq & (m - p) \max_i \| \mathbf{a}_i \|^2 \| \xb_{I} \|^2, \label{InEq1} \end{IEEEeqnarray} where, for the last inequality, we used $\forall \mathbf{z} \in \mathbb{R}^{l}, \| \mathbf{z} \|_1^2 \leq l \| \mathbf{z} \|^2$ \cite{HornJ90}. Next, from $\mathbf{A} \mathbf{x} = \sum_{i \in I} x_i \mathbf{a}_i + \sum_{i \notin I} x_i \mathbf{a}_i = 0$, we get \begin{equation} \label{InEq2} \Big \| \sum_{i \in I} x_i \mathbf{a}_i \Big \|^2 = \| \mathbf{A}_{\bar{I}} \xb_{\bar{I}} \|^2 \geq \sigma_{\min,p}^2(\mathbf{A}) \| \xb_{\bar{I}} \|^2, \end{equation} where $\bar{I} = \{1,\cdots,m\} \setminus I$. Combining inequalities \eqref{InEq1} and \eqref{InEq2} and using $\| \mathbf{x} \|^2 = \| \xb_{I} \|^2 + \| \xb_{\bar{I}} \|^2$ proves \eqref{InEqVecSig}. {\color{\RTre_Col}Note that $p \leq \spark(\mathbf{A}) - 1$ implies that $\sigma_{\min,p}(\mathbf{A}) \neq 0$, so that inequality \eqref{InEqVecSig} is not trivial.} \end{IEEEproof} \end{Lem1} \newtheorem{Lem2}[Lem1]{Lemma} \begin{Lem2} \label{VecNoisy} Let $\mathbf{A} \in \mathbb{R}^{n \times m}$, $m > n$, be a sensing matrix.
For every $\mathbf{x}$ satisfying $\| \mathbf{A} \mathbf{x} \| \leq \eta$ and every subset $I$ of $\{1,\cdots,m\}$ with cardinality $m - p$, where $ p \leq \spark(\mathbf{A}) -1$, we have that \begin{equation} \label{InEqVecNoisySig} \| \mathbf{x} \| \leq \Big( 1 + \sqrt{m - p} \frac{\max_i \|\mathbf{a}_i\|}{\sigma_{\min,p}(\mathbf{A})} \Big) \| \xb_{I} \| + \frac{\eta}{\sigma_{\min,p}(\mathbf{A})}. \end{equation} \begin{IEEEproof} Similar to the proof of Lemma \ref{VecNoNoise}, we have \begin{equation} \Big \| \sum_{i \in I} x_i \mathbf{a}_i \Big \| \leq \sqrt{m - p} \max_i \| \mathbf{a}_i \| \| \xb_{I} \|.\label{InEq4} \end{equation} Furthermore, from $\mathbf{A} \mathbf{x} = \sum_{i \in I} x_i \mathbf{a}_i + \sum_{i \notin I} x_i \mathbf{a}_i$, we get \begin{IEEEeqnarray}{rCl} \label{InEq5} \Big \| \sum_{i \in I} x_i \mathbf{a}_i \Big \| & \geq & \| \mathbf{A}_{\bar{I}} \xb_{\bar{I}} \| - \| \mathbf{A} \mathbf{x} \|,\nonumber \\ & \geq & \sigma_{\min,p}(\mathbf{A}) \| \xb_{\bar{I}} \| - \| \mathbf{A} \mathbf{x} \|, \nonumber \\ & \geq & \sigma_{\min,p}(\mathbf{A}) \| \xb_{\bar{I}} \| - \eta. \end{IEEEeqnarray} Combining inequalities \eqref{InEq4} and \eqref{InEq5} leads to \begin{equation*} \sigma_{\min,p}(\mathbf{A}) \| \xb_{\bar{I}} \| \leq \sqrt{m - p} \max_i \|\mathbf{a}_i\| \| \xb_{I} \| + \eta \end{equation*} which is equivalent to \begin{equation*} \| \xb_{I} \| + \| \xb_{\bar{I}} \| \leq \Big(1 + \sqrt{m - p}\frac{\max_i \|\mathbf{a}_i\|}{\sigma_{\min,p}(\mathbf{A})} \Big) \| \xb_{I} \| + \frac{\eta}{\sigma_{\min,p}(\mathbf{A})}. 
\end{equation*} The above inequality together with \begin{equation*} \| \mathbf{x} \| = \Bigg \| \begin{bmatrix} \xb_{I} \\ \xb_{\bar{I}} \end{bmatrix} \Bigg \| \leq \Bigg \| \begin{bmatrix} \xb_{I} \\ \mathbf{0} \end{bmatrix} \Bigg \| + \Bigg \| \begin{bmatrix} \mathbf{0} \\ \xb_{\bar{I}} \end{bmatrix} \Bigg \| = \| \xb_{I} \| + \| \xb_{\bar{I}} \|, \end{equation*} where $\mathbf{0}$ is a vector of zeros of appropriate length, proves \eqref{InEqVecNoisySig}. \end{IEEEproof} \end{Lem2} \begin{IEEEproof}[Proof of Theorem \ref{VecBound}] To prove \eqref{InEqVecSigNoNoise}, we first notice that because $\mathbf{x}_0$ has $k$ nonzero components and $\widetilde{x}^{\downarrow}_{k+1} \leq \alpha$, $\mathbf{x} = \mathbf{x}_0 - \widetilde{\xb}$ has at most $2k$ components with magnitude larger than $\alpha$. Equivalently, $\mathbf{x}$ has at least $m - 2k$ components with magnitude not greater than $\alpha$. Now, let $I$ denote a set of indices of components of $\mathbf{x}$ with magnitude less than or equal to $\alpha$ such that $|I| = m - 2k$. It is clear that $\| \xb_{I} \|^2 \leq (m - 2k) \alpha^2$. Consequently, since $\mathbf{x} \in \nullS(\mathbf{A})$, we can apply Lemma \ref{VecNoNoise} to get \begin{IEEEeqnarray*}{rCl} \| \mathbf{x}_0 - \widetilde{\xb} \|^2 & \leq & \Big( 1 + (m - 2k) \frac{\max_i \|\mathbf{a}_i\|^2}{\sigma_{\min,2k}^2(\mathbf{A})} \Big) \| \xb_{I} \|^2, \\ & \leq & \Big( 1 + (m - 2k) \frac{\max_i \|\mathbf{a}_i\|^2}{\sigma_{\min,2k}^2(\mathbf{A})} \Big)(m - 2k) \alpha^2. \end{IEEEeqnarray*} For proving \eqref{InEqVecSigNoisy}, we start with \begin{IEEEeqnarray}{rCl} \| \mathbf{A} (\mathbf{x}_0 - \widetilde{\xb}) \| & = & \| \mathbf{b} - \mathbf{A} \widetilde{\xb} + \mathbf{A} \mathbf{x}_0 - \mathbf{b} \|,\nonumber\\ & \leq & \| \mathbf{b} - \mathbf{A} \widetilde{\xb} \| + \| \mathbf{A} \mathbf{x}_0 - \mathbf{b} \|,\nonumber\\ & \leq & \Delta + \epsilon.
\label{InEqThm1_1} \end{IEEEeqnarray} Following the same reasoning as in the proof of \eqref{InEqVecSigNoNoise}, the application of Lemma \ref{VecNoisy} proves \eqref{InEqVecSigNoisy}. \end{IEEEproof} \subsection{Proof of Theorem \ref{MatBound}} \newtheorem{Lem3}[Lem1]{Lemma} \begin{Lem3} \label{MatNoNoise} Let $\ACal: \Rbb^{n_1 \times n_2} \rightarrow \Rbb^m, m < n_1 n_2,$ denote a linear operator. For every $r < n = \min(n_1,n_2)$ and every $\mathbf{X} \in \nullS(\mathcal{A})$, if $0 < \delta_{r}(\mathcal{A}) < 1$, then \begin{equation} \label{InEqLem3_0} \| \mathbf{X} \|_F^2 \leq \Big( 1 + (n - r) \frac{1 + \delta_{1}(\mathcal{A})}{1 - \delta_{r}(\mathcal{A})} \Big) \| \mathbf{X}_{(-r)} \|_F^2. \end{equation} \begin{IEEEproof} Let $\mathbf{X} = \sum_{i=1}^{n} \sigma_i \mathbf{u}_i \mathbf{v}_i^T$ denote the SVD of $\mathbf{X}$. We can write that \begin{IEEEeqnarray}{rCl} \Big \| \mathcal{A}(\mathbf{X}_{(-r)}) \Big \|^2 & = & \Big \| \mathcal{A} \Big ( \sum_{i=r+1}^n \sigma_i \mathbf{u}_i \mathbf{v}_i^T \Big) \Big\|^2,\nonumber\\ & = & \Big \| \sum_{i=r+1}^n \sigma_i \mathcal{A}( \mathbf{u}_i \mathbf{v}_i^T ) \Big \|^2,\nonumber\\ & \leq & \Big( \sum_{i=r+1}^n \sigma_i \big \| \mathcal{A}( \mathbf{u}_i \mathbf{v}_i^T ) \big \| \Big) ^2,\nonumber \\ & \overset{(a)}{\leq} & \Big( \sum_{i=r+1}^n \sigma_i \sqrt{1 + \delta_{1}(\mathcal{A})} \Big)^2,\nonumber \\ & = & \big(1 + \delta_{1}(\mathcal{A})\big) \big \| \mathbf{X}_{(-r)} \big \|_*^2,\nonumber \\ & \overset{(b)}{\leq} & (n - r) \big(1 + \delta_{1}(\mathcal{A})\big) \big \| \mathbf{X}_{(-r)} \big \|_F^2,\label{InEqLem3_1} \end{IEEEeqnarray} where (a) follows from the definition of the RIC and $\| \mathbf{u}_i \mathbf{v}_i^T \|_F = 1$ and for (b), we used the inequality $\| \mathbf{Y} \|_* \leq \sqrt{\rank(\mathbf{Y})}\| \mathbf{Y} \|_F$ \cite{HornJ90}. 
Additionally, $\mathcal{A}(\mathbf{X}) = \mathcal{A}(\mathbf{X}_{(r)}) + \mathcal{A}(\mathbf{X}_{(-r)}) = \mathbf{0}$ implies that \begin{equation} \label{InEqLem3_2} \big \| \mathcal{A}(\mathbf{X}_{(-r)}) \big \|^2 = \big \| \mathcal{A}(\mathbf{X}_{(r)}) \big \|^2 \geq \big(1 - \delta_{r}(\mathcal{A}) \big) \| \mathbf{X}_{(r)} \|_F^2. \end{equation} Combining \eqref{InEqLem3_1} and \eqref{InEqLem3_2} together with $\| \mathbf{X} \|_F^2 = \| \mathbf{X}_{(r)} \|_F^2 + \| \mathbf{X}_{(-r)} \|_F^2$ leads to inequality \eqref{InEqLem3_0}. \end{IEEEproof} \end{Lem3} \newtheorem{Lem4}[Lem1]{Lemma} \begin{Lem4} \label{MatNoisy} Let $\ACal: \Rbb^{n_1 \times n_2} \rightarrow \Rbb^m, m < n_1 n_2$, denote a linear operator. For every $r < n = \min(n_1,n_2)$ and every $\mathbf{X}$ satisfying $\| \mathcal{A}(\mathbf{X}) \| \leq \eta$, if $0 < \delta_{r}(\mathcal{A}) < 1$, then \begin{IEEEeqnarray}{rCl} \| \mathbf{X} \|_F & \leq & \Bigg( 1 + \sqrt{(n - r)\frac{1 + \delta_1(\mathcal{A})}{1 - \delta_r(\mathcal{A})}} \Bigg) \| \mathbf{X}_{(-r)} \|_F \nonumber\\ & & + \frac{\eta}{\sqrt{1 - \delta_r(\mathcal{A})}}. \label{InEqMatNoisy} \end{IEEEeqnarray} \begin{IEEEproof} Inequality \eqref{InEqLem3_1} holds for every $\mathbf{X}$; thus, it is possible to write \begin{equation} \label{InEqLem4_1} \| \mathcal{A}(\mathbf{X}_{(-r)}) \| \leq \sqrt{(n - r)(1 + \delta_1(\mathcal{A}))} \| \mathbf{X}_{(-r)} \|_F. \end{equation} Furthermore, applying the triangle inequality on $\mathcal{A}(\mathbf{X}_{(-r)}) = \mathcal{A}(\mathbf{X}) - \mathcal{A}(\mathbf{X}_{(r)})$, one can obtain \begin{IEEEeqnarray}{rCl} \label{InEqLem4_2} \big \| \mathcal{A}(\mathbf{X}_{(-r)}) \big \| & \geq & \big \| \mathcal{A}(\mathbf{X}_{(r)}) \big \| - \big \| \mathcal{A}(\mathbf{X}) \big \|,\nonumber \\ & \geq & \sqrt{1 - \delta_r(\mathcal{A})} \| \mathbf{X}_{(r)} \|_F - \eta. 
\end{IEEEeqnarray} Combining inequalities \eqref{InEqLem4_1} and \eqref{InEqLem4_2} together with $\| \mathbf{X} \|_F \leq \| \mathbf{X}_{(r)} \|_F + \| \mathbf{X}_{(-r)} \|_F$ gives inequality \eqref{InEqMatNoisy}. \end{IEEEproof} \end{Lem4} \begin{IEEEproof}[Proof of Theorem \ref{MatBound}] To prove \eqref{InEqMatSigNoNoise}, let us first define $\mathbf{X} = \mathbf{X}_0 - \widetilde{\Xb}$. According to \cite[Theorem 3.3.16]{HornJ91}, for any $1 \leq i,j \leq n$ with $i + j \leq n+1$, \begin{equation*} \sigma_{i+j-1}(\mathbf{X}) \leq \sigma_{i}(\mathbf{X}_0) + \sigma_{j}(\widetilde{\Xb}). \end{equation*} Substituting both $i$ and $j$ with $r+1$ in the above inequality leads to \begin{equation*} \sigma_{2r+1}(\mathbf{X}) \leq \sigma_{r+1}(\mathbf{X}_0) + \sigma_{r+1}(\widetilde{\Xb}) \leq \alpha. \end{equation*} Consequently, Lemma \ref{MatNoNoise} implies that \begin{IEEEeqnarray*}{rCl} \| \mathbf{X}_0 - \widetilde{\Xb} \|_F^2 & \leq & \Big( 1 + (n - 2r) \frac{1 + \delta_1(\mathcal{A})}{1 - \delta_{2r}(\mathcal{A})} \Big) \| \mathbf{X}_{(-2r)} \|_F^2,\\ & \leq & \Big( 1 + (n - 2r) \frac{1 + \delta_1(\mathcal{A})}{1 - \delta_{2r}(\mathcal{A})} \Big) (n - 2r) \alpha^2. \end{IEEEeqnarray*} To prove \eqref{InEqMatSigNoisy}, we start with \begin{IEEEeqnarray*}{rCl} \| \mathcal{A} (\mathbf{X}_0 - \widetilde{\Xb}) \| & = & \| \mathbf{b} - \mathcal{A}( \widetilde{\Xb} ) + \mathcal{A}( \mathbf{X}_0 ) - \mathbf{b} \|,\nonumber\\ & \leq & \Delta + \epsilon. \end{IEEEeqnarray*} Following the same reasoning as in the proof of \eqref{InEqMatSigNoNoise}, the application of Lemma \ref{MatNoisy} completes the proof. \end{IEEEproof} \section{Conclusion} \label{sec:Con} In this paper, we proposed upper bounds on the recovery error of sparse vectors from either noiseless or noisy measurements when the obtained solution is only approximately sparse. While these bounds involve the same parameters as the looser bounds of \cite{BabaJM11}, they are strictly tighter.
We further generalized these bounds to the problem of low-rank matrix recovery, for the case in which the solution at hand, intended to recover the true low-rank matrix, is only approximately low rank. \section{Acknowledgement} {\color{black}The authors would like to thank the anonymous reviewers for their helpful comments.} \bibliographystyle{elsarticle-num}